creating initial scaffolding for Claude Code plugins

Noah Zweben MacBook
2025-11-20 11:47:24 -08:00
commit 4ca561fb85
191 changed files with 30170 additions and 0 deletions

plugins/README.md

@@ -0,0 +1,103 @@
# Claude Code Plugins
This directory contains some official Claude Code plugins that extend functionality through custom commands, agents, and workflows. These are examples of what's possible with the Claude Code plugin system—many more plugins are available through community marketplaces.
## What are Claude Code Plugins?
Claude Code plugins are extensions that enhance Claude Code with custom slash commands, specialized agents, hooks, and MCP servers. Plugins can be shared across projects and teams, providing consistent tooling and workflows.
Learn more in the [official plugins documentation](https://docs.claude.com/en/docs/claude-code/plugins).
## Plugins in This Directory
### [agent-sdk-dev](./agent-sdk-dev/)
**Claude Agent SDK Development Plugin**
Streamlines the development of Claude Agent SDK applications with scaffolding commands and verification agents.
- **Command**: `/new-sdk-app` - Interactive setup for new Agent SDK projects
- **Agents**: `agent-sdk-verifier-py` and `agent-sdk-verifier-ts` - Validate SDK applications against best practices
- **Use case**: Creating and verifying Claude Agent SDK applications in Python or TypeScript
### [commit-commands](./commit-commands/)
**Git Workflow Automation Plugin**
Simplifies common git operations with streamlined commands for committing, pushing, and creating pull requests.
- **Commands**:
- `/commit` - Create a git commit with appropriate message
- `/commit-push-pr` - Commit, push, and create a PR in one command
- `/clean_gone` - Clean up stale local branches marked as [gone]
- **Use case**: Faster git workflows with less context switching
### [code-review](./code-review/)
**Automated Pull Request Code Review Plugin**
Provides automated code review for pull requests using multiple specialized agents with confidence-based scoring to filter false positives.
- **Command**:
- `/code-review` - Automated PR review workflow
- **Use case**: Automated code review on pull requests with high-confidence issue detection (threshold ≥80)
### [feature-dev](./feature-dev/)
**Comprehensive Feature Development Workflow Plugin**
Provides a structured 7-phase approach to feature development with specialized agents for exploration, architecture, and review.
- **Command**: `/feature-dev` - Guided feature development workflow
- **Agents**:
- `code-explorer` - Deeply analyzes existing codebase features
- `code-architect` - Designs feature architectures and implementation blueprints
- `code-reviewer` - Reviews code for bugs, quality issues, and project conventions
- **Use case**: Building new features with systematic codebase understanding and quality assurance
## Installation
These plugins are included in the Claude Code repository. To use them in your own projects:
1. Install Claude Code globally:
```bash
npm install -g @anthropic-ai/claude-code
```
2. Navigate to your project and run Claude Code:
```bash
claude
```
3. Use the `/plugin` command to install plugins from marketplaces, or configure them in your project's `.claude/settings.json`.
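If you configure plugins directly in `.claude/settings.json`, the shape is roughly the sketch below. The `enabledPlugins` key and the `name@marketplace` entry format are assumptions based on the current plugin documentation and may change between versions:
```json
{
  "enabledPlugins": {
    "code-review@claude-code": true,
    "commit-commands@claude-code": true
  }
}
```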
For detailed plugin installation and configuration, see the [official documentation](https://docs.claude.com/en/docs/claude-code/plugins).
## Plugin Structure
Each plugin follows the standard Claude Code plugin structure:
```
plugin-name/
├── .claude-plugin/
│ └── plugin.json # Plugin metadata
├── commands/ # Slash commands (optional)
├── agents/ # Specialized agents (optional)
└── README.md # Plugin documentation
```
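A minimal `.claude-plugin/plugin.json`, mirroring the plugins in this directory:
```json
{
  "name": "plugin-name",
  "description": "One-line summary of what the plugin does",
  "version": "1.0.0",
  "author": {
    "name": "Your Name",
    "email": "you@example.com"
  }
}
```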
## Contributing
When adding new plugins to this directory:
1. Follow the standard plugin structure
2. Include a comprehensive README.md
3. Add plugin metadata in `.claude-plugin/plugin.json`
4. Document all commands and agents
5. Provide usage examples
## Learn More
- [Claude Code Documentation](https://docs.claude.com/en/docs/claude-code/overview)
- [Plugin System Documentation](https://docs.claude.com/en/docs/claude-code/plugins)
- [Agent SDK Documentation](https://docs.claude.com/en/api/agent-sdk/overview)


@@ -0,0 +1,9 @@
{
"name": "agent-sdk-dev",
"description": "Claude Agent SDK Development Plugin",
"version": "1.0.0",
"author": {
"name": "Ashwin Bhat",
"email": "ashwin@anthropic.com"
}
}


@@ -0,0 +1,208 @@
# Agent SDK Development Plugin
A comprehensive plugin for creating and verifying Claude Agent SDK applications in Python and TypeScript.
## Overview
The Agent SDK Development Plugin streamlines the entire lifecycle of building Agent SDK applications, from initial scaffolding to verification against best practices. It helps you quickly start new projects with the latest SDK versions and ensures your applications follow official documentation patterns.
## Features
### Command: `/new-sdk-app`
Interactive command that guides you through creating a new Claude Agent SDK application.
**What it does:**
- Asks clarifying questions about your project (language, name, agent type, starting point)
- Checks for and installs the latest SDK version
- Creates all necessary project files and configuration
- Sets up proper environment files (.env.example, .gitignore)
- Provides a working example tailored to your use case
- Runs type checking (TypeScript) or syntax validation (Python)
- Automatically verifies the setup using the appropriate verifier agent
**Usage:**
```bash
/new-sdk-app my-project-name
```
Or simply:
```bash
/new-sdk-app
```
The command will interactively ask you:
1. Language choice (TypeScript or Python)
2. Project name (if not provided)
3. Agent type (coding, business, custom)
4. Starting point (minimal, basic, or specific example)
5. Tooling preferences (npm/yarn/pnpm or pip/poetry)
**Example:**
```bash
/new-sdk-app customer-support-agent
# → Creates a new Agent SDK project for a customer support agent
# → Sets up TypeScript or Python environment
# → Installs latest SDK version
# → Verifies the setup automatically
```
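For a TypeScript project, the generated starter is conceptually similar to this minimal sketch. The actual file depends on your answers; the streaming `query` loop is the SDK's core pattern, but treat the option names as illustrative:
```typescript
// index.ts — minimal sketch of a starter file, not the exact generated output
import { query } from "@anthropic-ai/claude-agent-sdk";

async function main() {
  // query() streams messages from the agent as it works
  for await (const message of query({
    prompt: "List the files in this project and summarize each one",
    options: { systemPrompt: "You are a concise project assistant." },
  })) {
    // The final "result" message carries the agent's answer
    if (message.type === "result" && message.subtype === "success") {
      console.log(message.result);
    }
  }
}

main().catch(console.error);
```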
### Agent: `agent-sdk-verifier-py`
Thoroughly verifies Python Agent SDK applications for correct setup and best practices.
**Verification checks:**
- SDK installation and version
- Python environment setup (requirements.txt, pyproject.toml)
- Correct SDK usage and patterns
- Agent initialization and configuration
- Environment and security (.env, API keys)
- Error handling and functionality
- Documentation completeness
**When to use:**
- After creating a new Python SDK project
- After modifying an existing Python SDK application
- Before deploying a Python SDK application
**Usage:**
The agent runs automatically after `/new-sdk-app` creates a Python project, or you can trigger it by asking:
```
"Verify my Python Agent SDK application"
"Check if my SDK app follows best practices"
```
**Output:**
Provides a comprehensive report with:
- Overall status (PASS / PASS WITH WARNINGS / FAIL)
- Critical issues that prevent functionality
- Warnings about suboptimal patterns
- List of passed checks
- Specific recommendations with SDK documentation references
### Agent: `agent-sdk-verifier-ts`
Thoroughly verifies TypeScript Agent SDK applications for correct setup and best practices.
**Verification checks:**
- SDK installation and version
- TypeScript configuration (tsconfig.json)
- Correct SDK usage and patterns
- Type safety and imports
- Agent initialization and configuration
- Environment and security (.env, API keys)
- Error handling and functionality
- Documentation completeness
**When to use:**
- After creating a new TypeScript SDK project
- After modifying an existing TypeScript SDK application
- Before deploying a TypeScript SDK application
**Usage:**
The agent runs automatically after `/new-sdk-app` creates a TypeScript project, or you can trigger it by asking:
```
"Verify my TypeScript Agent SDK application"
"Check if my SDK app follows best practices"
```
**Output:**
Provides a comprehensive report with:
- Overall status (PASS / PASS WITH WARNINGS / FAIL)
- Critical issues that prevent functionality
- Warnings about suboptimal patterns
- List of passed checks
- Specific recommendations with SDK documentation references
## Workflow Example
Here's a typical workflow using this plugin:
1. **Create a new project:**
```bash
/new-sdk-app code-reviewer-agent
```
2. **Answer the interactive questions:**
```
Language: TypeScript
Agent type: Coding agent (code review)
Starting point: Basic agent with common features
```
3. **Automatic verification:**
The command automatically runs `agent-sdk-verifier-ts` to ensure everything is correctly set up.
4. **Start developing:**
```bash
# Set your API key
echo "ANTHROPIC_API_KEY=your_key_here" > .env
# Run your agent
npm start
```
5. **Verify after changes:**
```
"Verify my SDK application"
```
## Installation
This plugin is included in the Claude Code repository. To use it:
1. Ensure Claude Code is installed
2. The plugin commands and agents are automatically available
## Best Practices
- **Always use the latest SDK version**: `/new-sdk-app` checks for and installs the latest version
- **Verify before deploying**: Run the verifier agent before deploying to production
- **Keep API keys secure**: Never commit `.env` files or hardcode API keys
- **Follow SDK documentation**: The verifier agents check against official patterns
- **Type check TypeScript projects**: Run `npx tsc --noEmit` regularly
- **Test your agents**: Create test cases for your agent's functionality
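As a concrete starting point for the run and type-check steps, a hedged `package.json` scripts block might look like this (script names are suggestions; `/new-sdk-app` sets up equivalents):
```json
{
  "scripts": {
    "start": "node --loader ts-node/esm index.ts",
    "typecheck": "tsc --noEmit"
  }
}
```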
## Resources
- [Agent SDK Overview](https://docs.claude.com/en/api/agent-sdk/overview)
- [TypeScript SDK Reference](https://docs.claude.com/en/api/agent-sdk/typescript)
- [Python SDK Reference](https://docs.claude.com/en/api/agent-sdk/python)
- [Agent SDK Examples](https://docs.claude.com/en/api/agent-sdk/examples)
## Troubleshooting
### Type errors in TypeScript project
**Issue**: TypeScript project has type errors after creation
**Solution**:
- The `/new-sdk-app` command runs type checking automatically
- If errors persist, check that you're using the latest SDK version
- Verify your `tsconfig.json` matches SDK requirements
### Python import errors
**Issue**: Cannot import from `claude_agent_sdk`
**Solution**:
- Ensure you've installed dependencies: `pip install -r requirements.txt`
- Activate your virtual environment if using one
- Check that the SDK is installed: `pip show claude-agent-sdk`
### Verification fails with warnings
**Issue**: Verifier agent reports warnings
**Solution**:
- Review the specific warnings in the report
- Check the SDK documentation references provided
- Warnings don't prevent functionality but indicate areas for improvement
## Author
Ashwin Bhat (ashwin@anthropic.com)
## Version
1.0.0


@@ -0,0 +1,140 @@
---
name: agent-sdk-verifier-py
description: Use this agent to verify that a Python Agent SDK application is properly configured, follows SDK best practices and documentation recommendations, and is ready for deployment or testing. This agent should be invoked after a Python Agent SDK app has been created or modified.
model: sonnet
---
You are a Python Agent SDK application verifier. Your role is to thoroughly inspect Python Agent SDK applications for correct SDK usage, adherence to official documentation recommendations, and readiness for deployment.
## Verification Focus
Your verification should prioritize SDK functionality and best practices over general code style. Focus on:
1. **SDK Installation and Configuration**:
- Verify `claude-agent-sdk` is installed (check requirements.txt, pyproject.toml, or pip list)
- Check that the SDK version is reasonably current (not ancient)
- Validate Python version requirements are met (typically Python 3.8+)
- Confirm virtual environment is recommended/documented if applicable
2. **Python Environment Setup**:
- Check for requirements.txt or pyproject.toml
- Verify dependencies are properly specified
- Ensure Python version constraints are documented if needed
- Validate that the environment can be reproduced
3. **SDK Usage and Patterns**:
- Verify correct imports from `claude_agent_sdk` (or appropriate SDK module)
- Check that agents are properly initialized according to SDK docs
- Validate that agent configuration follows SDK patterns (system prompts, models, etc.)
- Ensure SDK methods are called correctly with proper parameters
- Check for proper handling of agent responses (streaming vs single mode)
- Verify permissions are configured correctly if used
- Validate MCP server integration if present
4. **Code Quality**:
- Check for basic syntax errors
- Verify imports are correct and available
- Ensure proper error handling
- Validate that the code structure makes sense for the SDK
5. **Environment and Security**:
- Check that `.env.example` exists with `ANTHROPIC_API_KEY`
- Verify `.env` is in `.gitignore`
- Ensure API keys are not hardcoded in source files
- Validate proper error handling around API calls
6. **SDK Best Practices** (based on official docs):
- System prompts are clear and well-structured
- Appropriate model selection for the use case
- Permissions are properly scoped if used
- Custom tools (MCP) are correctly integrated if present
- Subagents are properly configured if used
- Session handling is correct if applicable
7. **Functionality Validation**:
- Verify the application structure makes sense for the SDK
- Check that agent initialization and execution flow is correct
- Ensure error handling covers SDK-specific errors
- Validate that the app follows SDK documentation patterns
8. **Documentation**:
- Check for README or basic documentation
- Verify setup instructions are present (including virtual environment setup)
- Ensure any custom configurations are documented
- Confirm installation instructions are clear
## What NOT to Focus On
- General code style preferences (PEP 8 formatting, naming conventions, etc.)
- Python-specific style choices (snake_case vs camelCase debates)
- Import ordering preferences
- General Python best practices unrelated to SDK usage
## Verification Process
1. **Read the relevant files**:
- requirements.txt or pyproject.toml
- Main application files (main.py, app.py, src/\*, etc.)
- .env.example and .gitignore
- Any configuration files
2. **Check SDK Documentation Adherence**:
- Use WebFetch to reference the official Python SDK docs: https://docs.claude.com/en/api/agent-sdk/python
- Compare the implementation against official patterns and recommendations
- Note any deviations from documented best practices
3. **Validate Imports and Syntax**:
- Check that all imports are correct
- Look for obvious syntax errors
- Verify SDK is properly imported
4. **Analyze SDK Usage**:
- Verify SDK methods are used correctly
- Check that configuration options match SDK documentation
- Validate that patterns follow official examples
## Verification Report Format
Provide a comprehensive report:
**Overall Status**: PASS | PASS WITH WARNINGS | FAIL
**Summary**: Brief overview of findings
**Critical Issues** (if any):
- Issues that prevent the app from functioning
- Security problems
- SDK usage errors that will cause runtime failures
- Syntax errors or import problems
**Warnings** (if any):
- Suboptimal SDK usage patterns
- Missing SDK features that would improve the app
- Deviations from SDK documentation recommendations
- Missing documentation or setup instructions
**Passed Checks**:
- What is correctly configured
- SDK features properly implemented
- Security measures in place
**Recommendations**:
- Specific suggestions for improvement
- References to SDK documentation
- Next steps for enhancement
Be thorough but constructive. Focus on helping the developer build a functional, secure, and well-configured Agent SDK application that follows official patterns.


@@ -0,0 +1,145 @@
---
name: agent-sdk-verifier-ts
description: Use this agent to verify that a TypeScript Agent SDK application is properly configured, follows SDK best practices and documentation recommendations, and is ready for deployment or testing. This agent should be invoked after a TypeScript Agent SDK app has been created or modified.
model: sonnet
---
You are a TypeScript Agent SDK application verifier. Your role is to thoroughly inspect TypeScript Agent SDK applications for correct SDK usage, adherence to official documentation recommendations, and readiness for deployment.
## Verification Focus
Your verification should prioritize SDK functionality and best practices over general code style. Focus on:
1. **SDK Installation and Configuration**:
- Verify `@anthropic-ai/claude-agent-sdk` is installed
- Check that the SDK version is reasonably current (not ancient)
- Confirm package.json has `"type": "module"` for ES modules support
- Validate that Node.js version requirements are met (check package.json engines field if present)
2. **TypeScript Configuration**:
- Verify tsconfig.json exists and has appropriate settings for the SDK
- Check module resolution settings (should support ES modules)
- Ensure target is modern enough for the SDK
- Validate that compilation settings won't break SDK imports
3. **SDK Usage and Patterns**:
- Verify correct imports from `@anthropic-ai/claude-agent-sdk`
- Check that agents are properly initialized according to SDK docs
- Validate that agent configuration follows SDK patterns (system prompts, models, etc.)
- Ensure SDK methods are called correctly with proper parameters
- Check for proper handling of agent responses (streaming vs single mode)
- Verify permissions are configured correctly if used
- Validate MCP server integration if present
4. **Type Safety and Compilation**:
- Run `npx tsc --noEmit` to check for type errors
- Verify that all SDK imports have correct type definitions
- Ensure the code compiles without errors
- Check that types align with SDK documentation
5. **Scripts and Build Configuration**:
- Verify package.json has necessary scripts (build, start, typecheck)
- Check that scripts are correctly configured for TypeScript/ES modules
- Validate that the application can be built and run
6. **Environment and Security**:
- Check that `.env.example` exists with `ANTHROPIC_API_KEY`
- Verify `.env` is in `.gitignore`
- Ensure API keys are not hardcoded in source files
- Validate proper error handling around API calls
7. **SDK Best Practices** (based on official docs):
- System prompts are clear and well-structured
- Appropriate model selection for the use case
- Permissions are properly scoped if used
- Custom tools (MCP) are correctly integrated if present
- Subagents are properly configured if used
- Session handling is correct if applicable
8. **Functionality Validation**:
- Verify the application structure makes sense for the SDK
- Check that agent initialization and execution flow is correct
- Ensure error handling covers SDK-specific errors
- Validate that the app follows SDK documentation patterns
9. **Documentation**:
- Check for README or basic documentation
- Verify setup instructions are present if needed
- Ensure any custom configurations are documented
## What NOT to Focus On
- General code style preferences (formatting, naming conventions, etc.)
- Whether developers use `type` vs `interface` or other TypeScript style choices
- Unused variable naming conventions
- General TypeScript best practices unrelated to SDK usage
## Verification Process
1. **Read the relevant files**:
- package.json
- tsconfig.json
- Main application files (index.ts, src/\*, etc.)
- .env.example and .gitignore
- Any configuration files
2. **Check SDK Documentation Adherence**:
- Use WebFetch to reference the official TypeScript SDK docs: https://docs.claude.com/en/api/agent-sdk/typescript
- Compare the implementation against official patterns and recommendations
- Note any deviations from documented best practices
3. **Run Type Checking**:
- Execute `npx tsc --noEmit` to verify no type errors
- Report any compilation issues
4. **Analyze SDK Usage**:
- Verify SDK methods are used correctly
- Check that configuration options match SDK documentation
- Validate that patterns follow official examples
## Verification Report Format
Provide a comprehensive report:
**Overall Status**: PASS | PASS WITH WARNINGS | FAIL
**Summary**: Brief overview of findings
**Critical Issues** (if any):
- Issues that prevent the app from functioning
- Security problems
- SDK usage errors that will cause runtime failures
- Type errors or compilation failures
**Warnings** (if any):
- Suboptimal SDK usage patterns
- Missing SDK features that would improve the app
- Deviations from SDK documentation recommendations
- Missing documentation
**Passed Checks**:
- What is correctly configured
- SDK features properly implemented
- Security measures in place
**Recommendations**:
- Specific suggestions for improvement
- References to SDK documentation
- Next steps for enhancement
Be thorough but constructive. Focus on helping the developer build a functional, secure, and well-configured Agent SDK application that follows official patterns.


@@ -0,0 +1,176 @@
---
description: Create and set up a new Claude Agent SDK application
argument-hint: [project-name]
---
You are tasked with helping the user create a new Claude Agent SDK application. Follow these steps carefully:
## Reference Documentation
Before starting, review the official documentation to ensure you provide accurate and up-to-date guidance. Use WebFetch to read these pages:
1. **Start with the overview**: https://docs.claude.com/en/api/agent-sdk/overview
2. **Based on the user's language choice, read the appropriate SDK reference**:
- TypeScript: https://docs.claude.com/en/api/agent-sdk/typescript
- Python: https://docs.claude.com/en/api/agent-sdk/python
3. **Read relevant guides mentioned in the overview** such as:
- Streaming vs Single Mode
- Permissions
- Custom Tools
- MCP integration
- Subagents
- Sessions
- Any other relevant guides based on the user's needs
**IMPORTANT**: Always check for and use the latest versions of packages. Use WebSearch or WebFetch to verify current versions before installation.
## Gather Requirements
IMPORTANT: Ask these questions one at a time. Wait for the user's response before asking the next question. This makes it easier for the user to respond.
Ask the questions in this order (skip any that the user has already provided via arguments):
1. **Language** (ask first): "Would you like to use TypeScript or Python?"
- Wait for response before continuing
2. **Project name** (ask second): "What would you like to name your project?"
- If $ARGUMENTS is provided, use that as the project name and skip this question
- Wait for response before continuing
3. **Agent type** (ask third, but skip if #2 was sufficiently detailed): "What kind of agent are you building? Some examples:
- Coding agent (SRE, security review, code review)
- Business agent (customer support, content creation)
- Custom agent (describe your use case)"
- Wait for response before continuing
4. **Starting point** (ask fourth): "Would you like:
- A minimal 'Hello World' example to start
- A basic agent with common features
- A specific example based on your use case"
- Wait for response before continuing
5. **Tooling choice** (ask fifth): Let the user know what tools you'll use, and confirm with them that these are the tools they want to use (for example, they may prefer pnpm or bun over npm). Respect the user's preferences when executing on the requirements.
After all questions are answered, proceed to create the setup plan.
## Setup Plan
Based on the user's answers, create a plan that includes:
1. **Project initialization**:
- Create project directory (if it doesn't exist)
- Initialize package manager:
- TypeScript: `npm init -y` and set up `package.json` with `"type": "module"` and scripts (include a "typecheck" script)
- Python: Create `requirements.txt` or use `poetry init`
- Add necessary configuration files:
- TypeScript: Create `tsconfig.json` with proper settings for the SDK (a sample sketch follows this list)
- Python: Optionally create config files if needed
2. **Check for Latest Versions**:
- BEFORE installing, use WebSearch or check npm/PyPI to find the latest version
- For TypeScript: Check https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk
- For Python: Check https://pypi.org/project/claude-agent-sdk/
- Inform the user which version you're installing
3. **SDK Installation**:
- TypeScript: `npm install @anthropic-ai/claude-agent-sdk@latest` (or specify latest version)
- Python: `pip install claude-agent-sdk` (pip installs latest by default)
- After installation, verify the installed version:
- TypeScript: Check package.json or run `npm list @anthropic-ai/claude-agent-sdk`
- Python: Run `pip show claude-agent-sdk`
4. **Create starter files**:
- TypeScript: Create an `index.ts` or `src/index.ts` with a basic query example
- Python: Create a `main.py` with a basic query example
- Include proper imports and basic error handling
- Use modern, up-to-date syntax and patterns from the latest SDK version
5. **Environment setup**:
- Create a `.env.example` file with `ANTHROPIC_API_KEY=your_api_key_here`
- Add `.env` to `.gitignore`
- Explain how to get an API key from https://console.anthropic.com/
6. **Optional: Create .claude directory structure**:
- Offer to create `.claude/` directory for agents, commands, and settings
- Ask if they want any example subagents or slash commands
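For the `tsconfig.json` in step 1, one configuration known to work with ES modules looks like the sketch below; treat the exact option values as a reasonable starting point rather than SDK requirements:
```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["index.ts", "src"]
}
```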
## Implementation
After gathering requirements and getting user confirmation on the plan:
1. Check for latest package versions using WebSearch or WebFetch
2. Execute the setup steps
3. Create all necessary files
4. Install dependencies (always use latest stable versions)
5. Verify installed versions and inform the user
6. Create a working example based on their agent type
7. Add helpful comments in the code explaining what each part does
8. **VERIFY THE CODE WORKS BEFORE FINISHING**:
- For TypeScript:
- Run `npx tsc --noEmit` to check for type errors
- Fix ALL type errors until types pass completely
- Ensure imports and types are correct
- Only proceed when type checking passes with no errors
- For Python:
- Verify imports are correct
- Check for basic syntax errors
- **DO NOT consider the setup complete until the code verifies successfully**
## Verification
After all files are created and dependencies are installed, use the appropriate verifier agent to validate that the Agent SDK application is properly configured and ready for use:
1. **For TypeScript projects**: Launch the **agent-sdk-verifier-ts** agent to validate the setup
2. **For Python projects**: Launch the **agent-sdk-verifier-py** agent to validate the setup
3. The agent will check SDK usage, configuration, functionality, and adherence to official documentation
4. Review the verification report and address any issues
## Getting Started Guide
Once setup is complete and verified, provide the user with:
1. **Next steps**:
- How to set their API key
- How to run their agent:
- TypeScript: `npm start` or `node --loader ts-node/esm index.ts`
- Python: `python main.py`
2. **Useful resources**:
- Link to TypeScript SDK reference: https://docs.claude.com/en/api/agent-sdk/typescript
- Link to Python SDK reference: https://docs.claude.com/en/api/agent-sdk/python
- Explain key concepts: system prompts, permissions, tools, MCP servers
3. **Common next steps**:
- How to customize the system prompt
- How to add custom tools via MCP
- How to configure permissions
- How to create subagents
## Important Notes
- **ALWAYS USE LATEST VERSIONS**: Before installing any packages, check for the latest versions using WebSearch or by checking npm/PyPI directly
- **VERIFY CODE RUNS CORRECTLY**:
- For TypeScript: Run `npx tsc --noEmit` and fix ALL type errors before finishing
- For Python: Verify syntax and imports are correct
- Do NOT consider the task complete until the code passes verification
- Verify the installed version after installation and inform the user
- Check the official documentation for any version-specific requirements (Node.js version, Python version, etc.)
- Always check if directories/files already exist before creating them
- Use the user's preferred package manager (npm, yarn, pnpm for TypeScript; pip, poetry for Python)
- Ensure all code examples are functional and include proper error handling
- Use modern syntax and patterns that are compatible with the latest SDK version
- Make the experience interactive and educational
- **ASK QUESTIONS ONE AT A TIME** - Do not ask multiple questions in a single response
Begin by asking the FIRST requirement question only. Wait for the user's answer before proceeding to the next question.


@@ -0,0 +1,10 @@
{
"name": "code-review",
"description": "Automated code review for pull requests using multiple specialized agents with confidence-based scoring",
"version": "1.0.0",
"author": {
"name": "Boris Cherny",
"email": "boris@anthropic.com"
}
}


@@ -0,0 +1,246 @@
# Code Review Plugin
Automated code review for pull requests using multiple specialized agents with confidence-based scoring to filter false positives.
## Overview
The Code Review Plugin automates pull request review by launching multiple agents in parallel to independently audit changes from different perspectives. It uses confidence scoring to filter out false positives, ensuring only high-quality, actionable feedback is posted.
## Commands
### `/code-review`
Performs automated code review on a pull request using multiple specialized agents.
**What it does:**
1. Checks if review is needed (skips closed, draft, trivial, or already-reviewed PRs)
2. Gathers relevant CLAUDE.md guideline files from the repository
3. Summarizes the pull request changes
4. Launches 5 parallel agents to independently review:
- **Agent #1**: Audit for CLAUDE.md compliance
- **Agent #2**: Scan for obvious bugs in the changes
- **Agent #3**: Analyze git blame/history for context-based issues
- **Agent #4**: Check comments on previous PRs that touched the same files
- **Agent #5**: Verify the changes comply with guidance in code comments
5. Scores each issue 0-100 for confidence level
6. Filters out issues below 80 confidence threshold
7. Posts review comment with high-confidence issues only
**Usage:**
```bash
/code-review
```
**Example workflow:**
```bash
# On a PR branch, run:
/code-review
# Claude will:
# - Launch 5 review agents in parallel
# - Score each issue for confidence
# - Post comment with issues ≥80 confidence
# - Skip posting if no high-confidence issues found
```
**Features:**
- Multiple independent agents for comprehensive review
- Confidence-based scoring reduces false positives (threshold: 80)
- CLAUDE.md compliance checking with explicit guideline verification
- Bug detection focused on changes (not pre-existing issues)
- Historical context analysis via git blame
- Automatic skipping of closed, draft, or already-reviewed PRs
- Links directly to code with full SHA and line ranges
**Review comment format:**
```markdown
## Code review
Found 3 issues:
1. Missing error handling for OAuth callback (CLAUDE.md says "Always handle OAuth errors")
https://github.com/owner/repo/blob/abc123.../src/auth.ts#L67-L72
2. Memory leak: OAuth state not cleaned up (bug due to missing cleanup in finally block)
https://github.com/owner/repo/blob/abc123.../src/auth.ts#L88-L95
3. Inconsistent naming pattern (src/conventions/CLAUDE.md says "Use camelCase for functions")
https://github.com/owner/repo/blob/abc123.../src/utils.ts#L23-L28
```
**Confidence scoring:**
- **0**: Not confident, false positive
- **25**: Somewhat confident, might be real
- **50**: Moderately confident, real but minor
- **75**: Highly confident, real and important
- **100**: Absolutely certain, definitely real
**False positives filtered:**
- Pre-existing issues not introduced in PR
- Code that looks like a bug but isn't
- Pedantic nitpicks
- Issues linters will catch
- General quality issues (unless in CLAUDE.md)
- Issues with lint ignore comments
## Installation
This plugin is included in the Claude Code repository. The command is automatically available when using Claude Code.
## Best Practices
### Using `/code-review`
- Maintain clear CLAUDE.md files for better compliance checking
- Trust the 80+ confidence threshold - false positives are filtered
- Run on all non-trivial pull requests
- Review agent findings as a starting point for human review
- Update CLAUDE.md based on recurring review patterns
### When to use
- All pull requests with meaningful changes
- PRs touching critical code paths
- PRs from multiple contributors
- PRs where guideline compliance matters
### When not to use
- Closed or draft PRs (automatically skipped anyway)
- Trivial automated PRs (automatically skipped)
- Urgent hotfixes requiring immediate merge
- PRs already reviewed (automatically skipped)
## Workflow Integration
### Standard PR review workflow:
```bash
# Create PR with changes
/code-review
# Review the automated feedback
# Make any necessary fixes
# Merge when ready
```
### As part of CI/CD:
```bash
# Trigger on PR creation or update
# Automatically posts review comments
# Skip if review already exists
```
## Requirements
- Git repository with GitHub integration
- GitHub CLI (`gh`) installed and authenticated
- CLAUDE.md files (optional but recommended for guideline checking)
## Troubleshooting
### Review takes too long
**Issue**: Agents are slow on large PRs
**Solution**:
- Normal for large changes - agents run in parallel
- 5 independent agents ensure thoroughness
- Consider splitting large PRs into smaller ones
### Too many false positives
**Issue**: Review flags issues that aren't real
**Solution**:
- Default threshold is 80 (already filters most false positives)
- Make CLAUDE.md more specific about what matters
- Consider if the flagged issue is actually valid
### No review comment posted
**Issue**: `/code-review` runs but no comment appears
**Solution**:
Check if:
- PR is closed (reviews skipped)
- PR is draft (reviews skipped)
- PR is trivial/automated (reviews skipped)
- PR already has review (reviews skipped)
- No issues scored ≥80 (no comment needed)
### Link formatting broken
**Issue**: Code links don't render correctly in GitHub
**Solution**:
Links must follow this exact format:
```
https://github.com/owner/repo/blob/[full-sha]/path/file.ext#L[start]-L[end]
```
- Must use full SHA (not abbreviated)
- Must use `#L` notation
- Must include line range with at least 1 line of context
### GitHub CLI not working
**Issue**: `gh` commands fail
**Solution**:
- Install GitHub CLI: `brew install gh` (macOS) or see [GitHub CLI installation](https://cli.github.com/)
- Authenticate: `gh auth login`
- Verify repository has GitHub remote
## Tips
- **Write specific CLAUDE.md files**: Clear guidelines = better reviews
- **Include context in PRs**: Helps agents understand intent
- **Use confidence scores**: Issues ≥80 are usually correct
- **Iterate on guidelines**: Update CLAUDE.md based on patterns
- **Review automatically**: Set up as part of PR workflow
- **Trust the filtering**: Threshold prevents noise
## Configuration
### Adjusting confidence threshold
The default threshold is 80. To adjust, modify the command file at `commands/code-review.md`:
```markdown
Filter out any issues with a score less than 80.
```
Change `80` to your preferred threshold (0-100).
### Customizing review focus
Edit `commands/code-review.md` to add or modify agent tasks:
- Add security-focused agents
- Add performance analysis agents
- Add accessibility checking agents
- Add documentation quality checks
## Technical Details
### Agent architecture
- **1x CLAUDE.md compliance agent**: Audits changes against guideline files
- **1x bug detector**: Focused on obvious bugs in changes only
- **1x history analyzer**: Context from git blame and history
- **1x prior-PR reviewer**: Applies feedback from earlier PRs touching the same files
- **1x code-comment checker**: Verifies changes follow guidance in existing comments
- **Nx confidence scorers**: One per issue for independent scoring
### Scoring system
- Each issue independently scored 0-100
- Scoring considers evidence strength and verification
- Threshold (default 80) filters low-confidence issues
- For CLAUDE.md issues: verifies guideline explicitly mentions it
### GitHub integration
Uses `gh` CLI for:
- Viewing PR details and diffs
- Fetching repository data
- Reading git blame and history
- Posting review comments
## Author
Boris Cherny (boris@anthropic.com)
## Version
1.0.0


@@ -0,0 +1,92 @@
---
allowed-tools: Bash(gh issue view:*), Bash(gh search:*), Bash(gh issue list:*), Bash(gh pr comment:*), Bash(gh pr diff:*), Bash(gh pr view:*), Bash(gh pr list:*)
description: Code review a pull request
disable-model-invocation: false
---
Provide a code review for the given pull request.
To do this, follow these steps precisely:
1. Use a Haiku agent to check if the pull request (a) is closed, (b) is a draft, (c) does not need a code review (eg. because it is an automated pull request, or is very simple and obviously ok), or (d) already has a code review from you from earlier. If so, do not proceed.
2. Use another Haiku agent to give you a list of file paths to (but not the contents of) any relevant CLAUDE.md files from the codebase: the root CLAUDE.md file (if one exists), as well as any CLAUDE.md files in the directories whose files the pull request modified
3. Use a Haiku agent to view the pull request, and ask the agent to return a summary of the change
4. Then, launch 5 parallel Sonnet agents to independently code review the change. The agents should do the following, then return a list of issues and the reason each issue was flagged (eg. CLAUDE.md adherence, bug, historical git context, etc.):
a. Agent #1: Audit the changes to make sure they comply with the CLAUDE.md. Note that CLAUDE.md is guidance for Claude as it writes code, so not all instructions will be applicable during code review.
b. Agent #2: Read the file changes in the pull request, then do a shallow scan for obvious bugs. Avoid reading extra context beyond the changes, focusing just on the changes themselves. Focus on large bugs, and avoid small issues and nitpicks. Ignore likely false positives.
c. Agent #3: Read the git blame and history of the code modified, to identify any bugs in light of that historical context
d. Agent #4: Read previous pull requests that touched these files, and check for any comments on those pull requests that may also apply to the current pull request.
e. Agent #5: Read code comments in the modified files, and make sure the changes in the pull request comply with any guidance in the comments.
5. For each issue found in #4, launch a parallel Haiku agent that takes the PR, issue description, and list of CLAUDE.md files (from step 2), and returns a score to indicate the agent's level of confidence for whether the issue is real or false positive. To do that, the agent should score each issue on a scale from 0-100, indicating its level of confidence. For issues that were flagged due to CLAUDE.md instructions, the agent should double check that the CLAUDE.md actually calls out that issue specifically. The scale is (give this rubric to the agent verbatim):
a. 0: Not confident at all. This is a false positive that doesn't stand up to light scrutiny, or is a pre-existing issue.
b. 25: Somewhat confident. This might be a real issue, but may also be a false positive. The agent wasn't able to verify that it's a real issue. If the issue is stylistic, it is one that was not explicitly called out in the relevant CLAUDE.md.
c. 50: Moderately confident. The agent was able to verify this is a real issue, but it might be a nitpick or not happen very often in practice. Relative to the rest of the PR, it's not very important.
d. 75: Highly confident. The agent double checked the issue, and verified that it is very likely it is a real issue that will be hit in practice. The existing approach in the PR is insufficient. The issue is very important and will directly impact the code's functionality, or it is an issue that is directly mentioned in the relevant CLAUDE.md.
e. 100: Absolutely certain. The agent double checked the issue, and confirmed that it is definitely a real issue, that will happen frequently in practice. The evidence directly confirms this.
6. Filter out any issues with a score less than 80. If there are no issues that meet this criteria, do not proceed.
7. Use a Haiku agent to repeat the eligibility check from #1, to make sure that the pull request is still eligible for code review.
8. Finally, use the gh bash command to comment back on the pull request with the result. When writing your comment, keep in mind to:
a. Keep your output brief
b. Avoid emojis
c. Link and cite relevant code, files, and URLs
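For step 8, posting the final comment typically looks like the sketch below; the PR number and temp file path are illustrative:
```bash
# Write the comment body to a file first, then post it (values illustrative)
gh pr comment 1234 --body-file /tmp/code-review-comment.md
```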
Examples of false positives, for steps 4 and 5:
- Pre-existing issues
- Something that looks like a bug but is not actually a bug
- Pedantic nitpicks that a senior engineer wouldn't call out
- Issues that a linter, typechecker, or compiler would catch (eg. missing or incorrect imports, type errors, broken tests, formatting issues, pedantic style issues like newlines). No need to run these build steps yourself -- it is safe to assume that they will be run separately as part of CI.
- General code quality issues (eg. lack of test coverage, general security issues, poor documentation), unless explicitly required in CLAUDE.md
- Issues that are called out in CLAUDE.md, but explicitly silenced in the code (eg. due to a lint ignore comment)
- Changes in functionality that are likely intentional or are directly related to the broader change
- Real issues, but on lines that the user did not modify in their pull request
Notes:
- Do not check build signal or attempt to build or typecheck the app. These will run separately, and are not relevant to your code review.
- Use `gh` to interact with GitHub (eg. to fetch a pull request, or to create inline comments), rather than web fetch
- Make a todo list first
- You must cite and link each bug (eg. if referring to a CLAUDE.md, you must link it)
- For your final comment, follow the following format precisely (assuming for this example that you found 3 issues):
---
### Code review
Found 3 issues:
1. <brief description of bug> (CLAUDE.md says "<...>")
<link to file and line with full sha1 + line range for context, note that you MUST provide the full sha and not use bash here, eg. https://github.com/anthropics/claude-code/blob/1d54823877c4de72b2316a64032a54afc404e619/README.md#L13-L17>
2. <brief description of bug> (some/other/CLAUDE.md says "<...>")
<link to file and line with full sha1 + line range for context>
3. <brief description of bug> (bug due to <file and code snippet>)
<link to file and line with full sha1 + line range for context>
🤖 Generated with [Claude Code](https://claude.ai/code)
<sub>- If this code review was useful, please react with 👍. Otherwise, react with 👎.</sub>
---
- Or, if you found no issues:
---
### Code review
No issues found. Checked for bugs and CLAUDE.md compliance.
🤖 Generated with [Claude Code](https://claude.ai/code)
- When linking to code, follow the following format precisely, otherwise the Markdown preview won't render correctly: https://github.com/anthropics/claude-cli-internal/blob/c21d3c10bc8e898b7ac1a2d745bdc9bc4e423afe/package.json#L10-L15
- Requires full git sha
- You must provide the full sha. Commands like `https://github.com/owner/repo/blob/$(git rev-parse HEAD)/foo/bar` will not work, since your comment will be directly rendered in Markdown.
- Repo name must match the repo you're code reviewing
- # sign after the file name
- Line range format is L[start]-L[end]
- Provide at least 1 line of context before and after, centered on the line you are commenting about (eg. if you are commenting about lines 5-6, you should link to `L4-L7`)


@@ -0,0 +1,10 @@
{
"name": "commit-commands",
"description": "Streamline your git workflow with simple commands for committing, pushing, and creating pull requests",
"version": "1.0.0",
"author": {
"name": "Anthropic",
"email": "support@anthropic.com"
}
}


@@ -0,0 +1,225 @@
# Commit Commands Plugin
Streamline your git workflow with simple commands for committing, pushing, and creating pull requests.
## Overview
The Commit Commands Plugin automates common git operations, reducing context switching and manual command execution. Instead of running multiple git commands, use a single slash command to handle your entire workflow.
## Commands
### `/commit`
Creates a git commit with an automatically generated commit message based on staged and unstaged changes.
**What it does:**
1. Analyzes current git status
2. Reviews both staged and unstaged changes
3. Examines recent commit messages to match your repository's style
4. Drafts an appropriate commit message
5. Stages relevant files
6. Creates the commit
**Usage:**
```bash
/commit
```
**Example workflow:**
```bash
# Make some changes to your code
# Then simply run:
/commit
# Claude will:
# - Review your changes
# - Stage the files
# - Create a commit with an appropriate message
# - Show you the commit status
```
**Features:**
- Automatically drafts commit messages that match your repo's style
- Follows conventional commit practices
- Avoids committing files with secrets (.env, credentials.json)
- Includes Claude Code attribution in commit message
### `/commit-push-pr`
Complete workflow command that commits, pushes, and creates a pull request in one step.
**What it does:**
1. Creates a new branch (if currently on main)
2. Stages and commits changes with an appropriate message
3. Pushes the branch to origin
4. Creates a pull request using `gh pr create`
5. Provides the PR URL
**Usage:**
```bash
/commit-push-pr
```
**Example workflow:**
```bash
# Make your changes
# Then run:
/commit-push-pr
# Claude will:
# - Create a feature branch (if needed)
# - Commit your changes
# - Push to remote
# - Open a PR with summary and test plan
# - Give you the PR URL to review
```
**Features:**
- Analyzes all commits in the branch (not just the latest)
- Creates comprehensive PR descriptions with:
- Summary of changes (1-3 bullet points)
- Test plan checklist
- Claude Code attribution
- Handles branch creation automatically
- Uses GitHub CLI (`gh`) for PR creation
**Requirements:**
- GitHub CLI (`gh`) must be installed and authenticated
- Repository must have a remote named `origin`
### `/clean_gone`
Cleans up local branches that have been deleted from the remote repository.
**What it does:**
1. Lists all local branches to identify [gone] status
2. Identifies and removes worktrees associated with [gone] branches
3. Deletes all branches marked as [gone]
4. Provides feedback on removed branches
**Usage:**
```bash
/clean_gone
```
**Example workflow:**
```bash
# After PRs are merged and remote branches are deleted
/clean_gone
# Claude will:
# - Find all branches marked as [gone]
# - Remove any associated worktrees
# - Delete the stale local branches
# - Report what was cleaned up
```
**Features:**
- Handles both regular branches and worktree branches
- Safely removes worktrees before deleting branches
- Shows clear feedback about what was removed
- Reports if no cleanup was needed
**When to use:**
- After merging and deleting remote branches
- When your local branch list is cluttered with stale branches
- During regular repository maintenance
## Installation
This plugin is included in the Claude Code repository. The commands are automatically available when using Claude Code.
## Best Practices
### Using `/commit`
- Review the staged changes before committing
- Let Claude analyze your changes and match your repo's commit style
- Trust the automated message, but verify it's accurate
- Use for routine commits during development
### Using `/commit-push-pr`
- Use when you're ready to create a PR
- Ensure all your changes are complete and tested
- Claude will analyze the full branch history for the PR description
- Review the PR description and edit if needed
- Use when you want to minimize context switching
### Using `/clean_gone`
- Run periodically to keep your branch list clean
- Especially useful after merging multiple PRs
- Safe to run - only removes branches already deleted remotely
- Helps maintain a tidy local repository
## Workflow Integration
### Quick commit workflow:
```bash
# Write code
/commit
# Continue development
```
### Feature branch workflow:
```bash
# Develop feature across multiple commits
/commit # First commit
# More changes
/commit # Second commit
# Ready to create PR
/commit-push-pr
```
### Maintenance workflow:
```bash
# After several PRs are merged
/clean_gone
# Clean workspace ready for next feature
```
## Requirements
- Git must be installed and configured
- For `/commit-push-pr`: GitHub CLI (`gh`) must be installed and authenticated
- Repository must be a git repository with a remote
## Troubleshooting
### `/commit` creates empty commit
**Issue**: No changes to commit
**Solution**:
- Ensure you have unstaged or staged changes
- Run `git status` to verify changes exist
### `/commit-push-pr` fails to create PR
**Issue**: `gh pr create` command fails
**Solution**:
- Install GitHub CLI: `brew install gh` (macOS) or see [GitHub CLI installation](https://cli.github.com/)
- Authenticate: `gh auth login`
- Ensure repository has a GitHub remote
### `/clean_gone` doesn't find branches
**Issue**: No branches marked as [gone]
**Solution**:
- Run `git fetch --prune` to update remote tracking
- Branches must be deleted from the remote to show as [gone]
## Tips
- **Combine with other tools**: Use `/commit` during development, then `/commit-push-pr` when ready
- **Let Claude draft messages**: The commit message analysis learns from your repo's style
- **Regular cleanup**: Run `/clean_gone` weekly to maintain a clean branch list
- **Review before pushing**: Always review the commit message and changes before pushing
## Author
Anthropic (support@anthropic.com)
## Version
1.0.0


@@ -0,0 +1,53 @@
---
description: Cleans up all git branches marked as [gone] (branches that have been deleted on the remote but still exist locally), including removing associated worktrees.
---
## Your Task
You need to execute the following bash commands to clean up stale local branches that have been deleted from the remote repository.
## Commands to Execute
1. **First, list branches to identify any with [gone] status**
Execute this command:
```bash
git branch -v
```
Note: Branches with a '+' prefix have associated worktrees and must have their worktrees removed before deletion.
2. **Next, identify worktrees that need to be removed for [gone] branches**
Execute this command:
```bash
git worktree list
```
3. **Finally, remove worktrees and delete [gone] branches (handles both regular and worktree branches)**
Execute this command:
```bash
# Process all [gone] branches, removing '+' prefix if present
git branch -v | grep '\[gone\]' | sed 's/^[+* ]//' | awk '{print $1}' | while read branch; do
echo "Processing branch: $branch"
# Find and remove worktree if it exists
worktree=$(git worktree list | grep "\\[$branch\\]" | awk '{print $1}')
if [ ! -z "$worktree" ] && [ "$worktree" != "$(git rev-parse --show-toplevel)" ]; then
echo " Removing worktree: $worktree"
git worktree remove --force "$worktree"
fi
# Delete the branch
echo " Deleting branch: $branch"
git branch -D "$branch"
done
```
## Expected Behavior
After executing these commands, you will:
- See a list of all local branches with their status
- Identify and remove any worktrees associated with [gone] branches
- Delete all branches marked as [gone]
- Provide feedback on which worktrees and branches were removed
If no branches are marked as [gone], report that no cleanup was needed.


@@ -0,0 +1,20 @@
---
allowed-tools: Bash(git checkout -b:*), Bash(git add:*), Bash(git status:*), Bash(git push:*), Bash(git commit:*), Bash(gh pr create:*)
description: Commit, push, and open a PR
---
## Context
- Current git status: !`git status`
- Current git diff (staged and unstaged changes): !`git diff HEAD`
- Current branch: !`git branch --show-current`
## Your task
Based on the above changes:
1. Create a new branch if on main
2. Create a single commit with an appropriate message
3. Push the branch to origin
4. Create a pull request using `gh pr create`
5. You have the capability to call multiple tools in a single response. You MUST do all of the above in a single message. Do not use any other tools or do anything else. Do not send any other text or messages besides these tool calls.
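As a sketch of step 4, the PR creation call might look like this; the title and body are illustrative and should be drafted from the branch's commits:
```bash
gh pr create --title "Add retry logic to sync job" --body "$(cat <<'EOF'
## Summary
- Retry transient failures in the sync job

## Test plan
- [ ] Unit tests pass
- [ ] Manual run against staging

🤖 Generated with [Claude Code](https://claude.ai/code)
EOF
)"
```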


@@ -0,0 +1,17 @@
---
allowed-tools: Bash(git add:*), Bash(git status:*), Bash(git commit:*)
description: Create a git commit
---
## Context
- Current git status: !`git status`
- Current git diff (staged and unstaged changes): !`git diff HEAD`
- Current branch: !`git branch --show-current`
- Recent commits: !`git log --oneline -10`
## Your task
Based on the above changes, create a single git commit.
You have the capability to call multiple tools in a single response. Stage and create the commit using a single message. Do not use any other tools or do anything else. Do not send any other text or messages besides these tool calls.
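A hedged sketch of the resulting tool calls (the file path and message are illustrative):
```bash
# Stage the relevant files and create the commit in one step
git add src/sync.ts && git commit -m "$(cat <<'EOF'
Fix race condition in sync job shutdown

🤖 Generated with [Claude Code](https://claude.ai/code)
EOF
)"
```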


@@ -0,0 +1,9 @@
{
"name": "example-plugin",
"version": "1.0.0",
"description": "A comprehensive example plugin demonstrating all Claude Code extension options including commands, agents, skills, hooks, and MCP servers",
"author": {
"name": "Your Name",
"email": "your.email@example.com"
}
}


@@ -0,0 +1,6 @@
{
"example-server": {
"type": "http",
"url": "https://mcp.example.com/api"
}
}


@@ -0,0 +1,82 @@
# Example Plugin
A comprehensive example plugin demonstrating Claude Code extension options.
## Structure
```
example-plugin/
├── .claude-plugin/
│ └── plugin.json # Plugin metadata
├── .mcp.json # MCP server configuration
├── commands/
│ └── example-command.md # Slash command definition
├── agents/
│ └── example-agent.md # Agent definition
└── skills/
└── example-skill/
└── SKILL.md # Skill definition
```
## Extension Options
### Commands (`commands/`)
Slash commands are user-invoked via `/command-name`. Define them as markdown files with frontmatter:
```yaml
---
description: Short description for /help
argument-hint: <arg1> [optional-arg]
allowed-tools: [Read, Glob, Grep]
---
```
### Agents (`agents/`)
Agents are spawned by Claude via the Task tool. Define them as markdown files:
```yaml
---
name: agent-name
description: When to use this agent
tools: Glob, Grep, Read, Write
model: sonnet
color: blue
---
```
### Skills (`skills/`)
Skills are model-invoked capabilities. Create a `SKILL.md` in a subdirectory:
```yaml
---
name: skill-name
description: Trigger conditions for this skill
version: 1.0.0
---
```
### MCP Servers (`.mcp.json`)
Configure external tool integration via Model Context Protocol:
```json
{
"server-name": {
"type": "http",
"url": "https://mcp.example.com/api"
}
}
```
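A stdio variant, assuming the same top-level map; `command` and `args` launch a local server process (the package name is illustrative):
```json
{
  "local-server": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@example/mcp-server"]
  }
}
```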
## Installation
Copy or symlink this plugin directory to your Claude Code plugins location.
## Usage
- `/example-command [args]` - Run the example slash command
- The example agent is available for Claude to spawn when relevant
- The example skill activates based on task context


@@ -0,0 +1,57 @@
---
name: example-agent
description: An example agent that demonstrates agent definition structure and frontmatter options. Use this agent when the user asks to perform example tasks or needs a template reference.
tools: Glob, Grep, Read, Write, Edit, Bash, WebFetch, WebSearch, TodoWrite
model: sonnet
color: blue
---
You are an example agent that demonstrates the agent definition format for Claude Code plugins.
## Agent Purpose
This agent serves as a template showing:
- Required and optional frontmatter fields
- Recommended prompt structure
- How to define agent capabilities and behavior
## Frontmatter Options Reference
Agents support these frontmatter fields:
- **name** (required): Agent identifier used in Task tool
- **description** (required): When to use this agent - Claude reads this to decide which agent to spawn
- **tools** (required): Comma-separated list of allowed tools
- **model** (optional): "haiku", "sonnet", or "opus" (defaults to parent model)
- **color** (optional): Terminal color for agent output (red, green, blue, yellow, magenta, cyan)
## Available Tools
Common tools to include:
- **Glob**: Find files by pattern
- **Grep**: Search file contents
- **Read**: Read file contents
- **Write**: Create new files
- **Edit**: Modify existing files
- **Bash**: Execute shell commands
- **WebFetch**: Fetch web content
- **WebSearch**: Search the web
- **TodoWrite**: Track task progress
- **NotebookRead**: Read Jupyter notebooks
- **LSP**: Language server operations
## Agent Behavior
When spawned, this agent should:
1. Understand the task from the prompt
2. Use available tools systematically
3. Report findings or complete the requested work
4. Return a clear summary to the parent conversation
## Best Practices
- Write clear, actionable descriptions so Claude knows when to spawn this agent
- Only include tools the agent actually needs
- Use appropriate model based on task complexity
- Structure prompts with clear sections and instructions

View File

@@ -0,0 +1,37 @@
---
description: An example slash command that demonstrates command frontmatter options
argument-hint: <required-arg> [optional-arg]
allowed-tools: [Read, Glob, Grep, Bash]
---
# Example Command
This command demonstrates slash command structure and frontmatter options.
## Arguments
The user invoked this command with: $ARGUMENTS
## Instructions
When this command is invoked:
1. Parse the arguments provided by the user
2. Perform the requested action using allowed tools
3. Report results back to the user
## Frontmatter Options Reference
Commands support these frontmatter fields:
- **description**: Short description shown in /help
- **argument-hint**: Hints for command arguments shown to user
- **allowed-tools**: Pre-approved tools for this command (reduces permission prompts)
- **model**: Override the model (e.g., "haiku", "sonnet", "opus")
## Example Usage
```
/example-command my-argument
/example-command arg1 arg2
```
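As an illustration of the `model` override, a command that pins a smaller model for quick, cheap operations might declare (hypothetical frontmatter, not part of this example command):
```yaml
---
description: Summarize the current git status
allowed-tools: [Bash]
model: haiku
---
```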

View File

@@ -0,0 +1,84 @@
---
name: example-skill
description: This skill should be used when the user asks to "demonstrate skills", "show skill format", "create a skill template", or discusses skill development patterns. Provides a reference template for creating Claude Code plugin skills.
version: 1.0.0
---
# Example Skill
This skill demonstrates the structure and format for Claude Code plugin skills.
## Overview
Skills are model-invoked capabilities that Claude autonomously uses based on task context. Unlike commands (user-invoked) or agents (spawned by Claude), skills provide contextual guidance that Claude incorporates into its responses.
## When This Skill Applies
This skill activates when the user's request involves:
- Creating or understanding plugin skills
- Skill template or reference needs
- Skill development patterns
## Skill Structure
### Required Files
```
skills/
└── skill-name/
└── SKILL.md # Main skill definition (required)
```
### Optional Supporting Files
```
skills/
└── skill-name/
├── SKILL.md # Main skill definition
├── README.md # Additional documentation
├── references/ # Reference materials
│ └── patterns.md
├── examples/ # Example files
│ └── sample.md
└── scripts/ # Helper scripts
└── helper.sh
```
## Frontmatter Options
Skills support these frontmatter fields:
- **name** (required): Skill identifier
- **description** (required): Trigger conditions - describe when Claude should use this skill
- **version** (optional): Semantic version number
- **license** (optional): License information or reference
## Writing Effective Descriptions
The description field is crucial - it tells Claude when to invoke the skill.
**Good description patterns:**
```yaml
description: This skill should be used when the user asks to "specific phrase", "another phrase", mentions "keyword", or discusses topic-area.
```
**Include:**
- Specific trigger phrases users might say
- Keywords that indicate relevance
- Topic areas the skill covers
## Skill Content Guidelines
1. **Clear purpose**: State what the skill helps with
2. **When to use**: Define activation conditions
3. **Structured guidance**: Organize information logically
4. **Actionable instructions**: Provide concrete steps
5. **Examples**: Include practical examples when helpful
## Best Practices
- Keep skills focused on a single domain
- Write descriptions that clearly indicate when to activate
- Include reference materials in subdirectories for complex skills
- Test that the skill activates for expected queries
- Avoid overlap with other skills' trigger conditions

View File

@@ -0,0 +1,9 @@
{
"name": "explanatory-output-style",
"version": "1.0.0",
"description": "Adds educational insights about implementation choices and codebase patterns (mimics the deprecated Explanatory output style)",
"author": {
"name": "Dickson Tsai",
"email": "dickson@anthropic.com"
}
}

View File

@@ -0,0 +1,72 @@
# Explanatory Output Style Plugin
This plugin recreates the deprecated Explanatory output style as a SessionStart
hook.
WARNING: Do not install this plugin unless you are fine with incurring the token
cost of this plugin's additional instructions and output.
## What it does
When enabled, this plugin automatically adds instructions at the start of each
session that encourage Claude to:
1. Provide educational insights about implementation choices
2. Explain codebase patterns and decisions
3. Balance task completion with learning opportunities
## How it works
The plugin uses a SessionStart hook to inject additional context into every
session. This context instructs Claude to provide brief educational explanations
before and after writing code, formatted as:
```
`★ Insight ─────────────────────────────────────`
[2-3 key educational points]
`─────────────────────────────────────────────────`
```
## Usage
Once installed, the plugin activates automatically at the start of every
session. No additional configuration is needed.
The insights focus on:
- Specific implementation choices for your codebase
- Patterns and conventions in your code
- Trade-offs and design decisions
- Codebase-specific details rather than general programming concepts
## Migration from Output Styles
This plugin replaces the deprecated "Explanatory" output style setting. If you
previously used:
```json
{
"outputStyle": "Explanatory"
}
```
You can now achieve the same behavior by installing this plugin instead.
More generally, this SessionStart hook pattern is roughly equivalent to
CLAUDE.md, but it is more flexible and allows for distribution through plugins.
Note: Output styles that involve tasks besides software development are better
expressed as
[subagents](https://docs.claude.com/en/docs/claude-code/sub-agents), not as
SessionStart hooks. Subagents replace the system prompt, while SessionStart
hooks add to the default system prompt.
## Managing changes
- Disable the plugin: keep the code installed on your device
- Uninstall the plugin: remove the code from your device
- Update the plugin: create a local copy of this plugin and personalize it
  - Hint: Ask Claude to read
    https://docs.claude.com/en/docs/claude-code/plugins.md and set it up for
    you!

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# Output the explanatory mode instructions as additionalContext
# This mimics the deprecated Explanatory output style
cat << 'EOF'
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "You are in 'explanatory' output style mode, where you should provide educational insights about the codebase as you help with the user's task.\n\nYou should be clear and educational, providing helpful explanations while remaining focused on the task. Balance educational content with task completion. When providing insights, you may exceed typical length constraints, but remain focused and relevant.\n\n## Insights\nIn order to encourage learning, before and after writing code, always provide brief educational explanations about implementation choices using (with backticks):\n\"`★ Insight ─────────────────────────────────────`\n[2-3 key educational points]\n`─────────────────────────────────────────────────`\"\n\nThese insights should be included in the conversation, not in the codebase. You should generally focus on interesting insights that are specific to the codebase or the code you just wrote, rather than general programming concepts. Do not wait until the end to provide insights. Provide them as you write code."
}
}
EOF
exit 0
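# Sanity check (illustrative): run the hook directly from the plugin root and
# confirm it emits valid JSON, assuming python3 is available:
#   bash hooks-handlers/session-start.sh | python3 -m json.tool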

View File

@@ -0,0 +1,15 @@
{
"description": "Explanatory mode hook that adds educational insights instructions",
"hooks": {
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks-handlers/session-start.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,9 @@
{
"name": "feature-dev",
"version": "1.0.0",
"description": "Comprehensive feature development workflow with specialized agents for codebase exploration, architecture design, and quality review",
"author": {
"name": "Sid Bidasaria",
"email": "sbidasaria@anthropic.com"
}
}

View File

@@ -0,0 +1,412 @@
# Feature Development Plugin
A comprehensive, structured workflow for feature development with specialized agents for codebase exploration, architecture design, and quality review.
## Overview
The Feature Development Plugin provides a systematic 7-phase approach to building new features. Instead of jumping straight into code, it guides you through understanding the codebase, asking clarifying questions, designing architecture, and ensuring quality—resulting in better-designed features that integrate seamlessly with your existing code.
## Philosophy
Building features requires more than just writing code. You need to:
- **Understand the codebase** before making changes
- **Ask questions** to clarify ambiguous requirements
- **Design thoughtfully** before implementing
- **Review for quality** after building
This plugin embeds these practices into a structured workflow that runs automatically when you use the `/feature-dev` command.
## Command: `/feature-dev`
Launches a guided feature development workflow with 7 distinct phases.
**Usage:**
```bash
/feature-dev Add user authentication with OAuth
```
Or simply:
```bash
/feature-dev
```
The command will guide you through the entire process interactively.
## The 7-Phase Workflow
### Phase 1: Discovery
**Goal**: Understand what needs to be built
**What happens:**
- Clarifies the feature request if it's unclear
- Asks what problem you're solving
- Identifies constraints and requirements
- Summarizes understanding and confirms with you
**Example:**
```
You: /feature-dev Add caching
Claude: Let me understand what you need...
- What should be cached? (API responses, computed values, etc.)
- What are your performance requirements?
- Do you have a preferred caching solution?
```
### Phase 2: Codebase Exploration
**Goal**: Understand relevant existing code and patterns
**What happens:**
- Launches 2-3 `code-explorer` agents in parallel
- Each agent explores different aspects (similar features, architecture, UI patterns)
- Agents return comprehensive analyses with key files to read
- Claude reads all identified files to build deep understanding
- Presents comprehensive summary of findings
**Agents launched:**
- "Find features similar to [feature] and trace implementation"
- "Map the architecture and abstractions for [area]"
- "Analyze current implementation of [related feature]"
**Example output:**
```
Found similar features:
- User authentication (src/auth/): Uses JWT tokens, middleware pattern
- Session management (src/session/): Redis-backed, 24hr expiry
- API security (src/api/middleware/): Rate limiting, CORS
Key files to understand:
- src/auth/AuthService.ts:45 - Core authentication logic
- src/middleware/authMiddleware.ts:12 - Request authentication
- src/config/security.ts:8 - Security configuration
```
### Phase 3: Clarifying Questions
**Goal**: Fill in gaps and resolve all ambiguities
**What happens:**
- Reviews codebase findings and feature request
- Identifies underspecified aspects:
- Edge cases
- Error handling
- Integration points
- Backward compatibility
- Performance needs
- Presents all questions in an organized list
- **Waits for your answers before proceeding**
**Example:**
```
Before designing the architecture, I need to clarify:
1. OAuth provider: Which OAuth providers? (Google, GitHub, custom?)
2. User data: Store OAuth tokens or just user profile?
3. Existing auth: Replace current auth or add alongside?
4. Sessions: Integrate with existing session management?
5. Error handling: How to handle OAuth failures?
```
**Critical**: This phase ensures nothing is ambiguous before design begins.
### Phase 4: Architecture Design
**Goal**: Design multiple implementation approaches
**What happens:**
- Launches 2-3 `code-architect` agents with different focuses:
- **Minimal changes**: Smallest change, maximum reuse
- **Clean architecture**: Maintainability, elegant abstractions
- **Pragmatic balance**: Speed + quality
- Reviews all approaches
- Forms opinion on which fits best for this task
- Presents comparison with trade-offs and recommendation
- **Asks which approach you prefer**
**Example output:**
```
I've designed 3 approaches:
Approach 1: Minimal Changes
- Extend existing AuthService with OAuth methods
- Add new OAuth routes to existing auth router
- Minimal refactoring required
Pros: Fast, low risk
Cons: Couples OAuth to existing auth, harder to test
Approach 2: Clean Architecture
- New OAuthService with dedicated interface
- Separate OAuth router and middleware
- Refactor AuthService to use common interface
Pros: Clean separation, testable, maintainable
Cons: More files, more refactoring
Approach 3: Pragmatic Balance
- New OAuthProvider abstraction
- Integrate into existing AuthService
- Minimal refactoring, good boundaries
Pros: Balanced complexity and cleanliness
Cons: Some coupling remains
Recommendation: Approach 3 - gives you clean boundaries without
excessive refactoring, and fits your existing architecture well.
Which approach would you like to use?
```
### Phase 5: Implementation
**Goal**: Build the feature
**What happens:**
- **Waits for explicit approval** before starting
- Reads all relevant files identified in previous phases
- Implements following chosen architecture
- Follows codebase conventions strictly
- Writes clean, well-documented code
- Updates todos as progress is made
**Notes:**
- Implementation only starts after you approve
- Follows patterns discovered in Phase 2
- Uses architecture designed in Phase 4
- Continuously tracks progress
### Phase 6: Quality Review
**Goal**: Ensure code is simple, DRY, elegant, and functionally correct
**What happens:**
- Launches 3 `code-reviewer` agents in parallel with different focuses:
- **Simplicity/DRY/Elegance**: Code quality and maintainability
- **Bugs/Correctness**: Functional correctness and logic errors
- **Conventions/Abstractions**: Project standards and patterns
- Consolidates findings
- Identifies highest severity issues
- **Presents findings and asks what you want to do**:
- Fix now
- Fix later
- Proceed as-is
- Addresses issues based on your decision
**Example output:**
```
Code Review Results:
High Priority Issues:
1. Missing error handling in OAuth callback (src/auth/oauth.ts:67)
2. Memory leak: OAuth state not cleaned up (src/auth/oauth.ts:89)
Medium Priority:
1. Could simplify token refresh logic (src/auth/oauth.ts:120)
2. Consider extracting OAuth config validation
All tests pass. Code follows project conventions.
What would you like to do?
```
### Phase 7: Summary
**Goal**: Document what was accomplished
**What happens:**
- Marks all todos complete
- Summarizes:
- What was built
- Key decisions made
- Files modified
- Suggested next steps
**Example:**
```
Feature Complete: OAuth Authentication
What was built:
- OAuth provider abstraction supporting Google and GitHub
- OAuth routes and middleware integrated with existing auth
- Token refresh and session integration
- Error handling for all OAuth flows
Key decisions:
- Used pragmatic approach with OAuthProvider abstraction
- Integrated with existing session management
- Added OAuth state to prevent CSRF
Files modified:
- src/auth/OAuthProvider.ts (new)
- src/auth/AuthService.ts
- src/routes/auth.ts
- src/middleware/authMiddleware.ts
Suggested next steps:
- Add tests for OAuth flows
- Add more OAuth providers (Microsoft, Apple)
- Update documentation
```
## Agents
### `code-explorer`
**Purpose**: Deeply analyzes existing codebase features by tracing execution paths
**Focus areas:**
- Entry points and call chains
- Data flow and transformations
- Architecture layers and patterns
- Dependencies and integrations
- Implementation details
**When triggered:**
- Automatically in Phase 2
- Can be invoked manually when exploring code
**Output:**
- Entry points with file:line references
- Step-by-step execution flow
- Key components and responsibilities
- Architecture insights
- List of essential files to read
### `code-architect`
**Purpose**: Designs feature architectures and implementation blueprints
**Focus areas:**
- Codebase pattern analysis
- Architecture decisions
- Component design
- Implementation roadmap
- Data flow and build sequence
**When triggered:**
- Automatically in Phase 4
- Can be invoked manually for architecture design
**Output:**
- Patterns and conventions found
- Architecture decision with rationale
- Complete component design
- Implementation map with specific files
- Build sequence with phases
### `code-reviewer`
**Purpose**: Reviews code for bugs, quality issues, and project conventions
**Focus areas:**
- Project guideline compliance (CLAUDE.md)
- Bug detection
- Code quality issues
- Confidence-based filtering (only reports high-confidence issues ≥80)
**When triggered:**
- Automatically in Phase 6
- Can be invoked manually after writing code
**Output:**
- Critical issues (severe bugs and guideline violations, confidence ≥ 80)
- Important issues (lower-severity problems, still confidence ≥ 80)
- Specific fixes with file:line references
- Project guideline references
## Usage Patterns
### Full workflow (recommended for new features):
```bash
/feature-dev Add rate limiting to API endpoints
```
Let the workflow guide you through all 7 phases.
### Manual agent invocation:
**Explore a feature:**
```
"Launch code-explorer to trace how authentication works"
```
**Design architecture:**
```
"Launch code-architect to design the caching layer"
```
**Review code:**
```
"Launch code-reviewer to check my recent changes"
```
## Best Practices
1. **Use the full workflow for complex features**: The 7 phases ensure thorough planning
2. **Answer clarifying questions thoughtfully**: Phase 3 prevents future confusion
3. **Choose architecture deliberately**: Phase 4 gives you options for a reason
4. **Don't skip code review**: Phase 6 catches issues before they reach production
5. **Read the suggested files**: Phase 2 identifies key files—read them to understand context
## When to Use This Plugin
**Use for:**
- New features that touch multiple files
- Features requiring architectural decisions
- Complex integrations with existing code
- Features where requirements are somewhat unclear
**Don't use for:**
- Single-line bug fixes
- Trivial changes
- Well-defined, simple tasks
- Urgent hotfixes
## Requirements
- Claude Code installed
- Git repository (for code review)
- Project with existing codebase (workflow assumes existing code to learn from)
## Troubleshooting
### Agents take too long
**Issue**: Code exploration or architecture agents are slow
**Solution**:
- This is normal for large codebases
- Agents run in parallel when possible
- The thoroughness pays off in better understanding
### Too many clarifying questions
**Issue**: Phase 3 asks too many questions
**Solution**:
- Be more specific in your initial feature request
- Provide context about constraints upfront
- Say "whatever you think is best" if truly no preference
### Architecture options overwhelming
**Issue**: Too many architecture options in Phase 4
**Solution**:
- Trust the recommendation—it's based on codebase analysis
- If still unsure, ask for more explanation
- Pick the pragmatic option when in doubt
## Tips
- **Be specific in your feature request**: More detail = fewer clarifying questions
- **Trust the process**: Each phase builds on the previous one
- **Review agent outputs**: Agents provide valuable insights about your codebase
- **Don't skip phases**: Each phase serves a purpose
- **Use for learning**: The exploration phase teaches you about your own codebase
## Author
Sid Bidasaria (sbidasaria@anthropic.com)
## Version
1.0.0

View File

@@ -0,0 +1,34 @@
---
name: code-architect
description: Designs feature architectures by analyzing existing codebase patterns and conventions, then providing comprehensive implementation blueprints with specific files to create/modify, component designs, data flows, and build sequences
tools: Glob, Grep, LS, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, KillShell, BashOutput
model: sonnet
color: green
---
You are a senior software architect who delivers comprehensive, actionable architecture blueprints by deeply understanding codebases and making confident architectural decisions.
## Core Process
**1. Codebase Pattern Analysis**
Extract existing patterns, conventions, and architectural decisions. Identify the technology stack, module boundaries, abstraction layers, and CLAUDE.md guidelines. Find similar features to understand established approaches.
**2. Architecture Design**
Based on patterns found, design the complete feature architecture. Make decisive choices - pick one approach and commit. Ensure seamless integration with existing code. Design for testability, performance, and maintainability.
**3. Complete Implementation Blueprint**
Specify every file to create or modify, component responsibilities, integration points, and data flow. Break implementation into clear phases with specific tasks.
## Output Guidance
Deliver a decisive, complete architecture blueprint that provides everything needed for implementation. Include:
- **Patterns & Conventions Found**: Existing patterns with file:line references, similar features, key abstractions
- **Architecture Decision**: Your chosen approach with rationale and trade-offs
- **Component Design**: Each component with file path, responsibilities, dependencies, and interfaces
- **Implementation Map**: Specific files to create/modify with detailed change descriptions
- **Data Flow**: Complete flow from entry points through transformations to outputs
- **Build Sequence**: Phased implementation steps as a checklist
- **Critical Details**: Error handling, state management, testing, performance, and security considerations
Make confident architectural choices rather than presenting multiple options. Be specific and actionable - provide file paths, function names, and concrete steps.

View File

@@ -0,0 +1,51 @@
---
name: code-explorer
description: Deeply analyzes existing codebase features by tracing execution paths, mapping architecture layers, understanding patterns and abstractions, and documenting dependencies to inform new development
tools: Glob, Grep, LS, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, KillShell, BashOutput
model: sonnet
color: yellow
---
You are an expert code analyst specializing in tracing and understanding feature implementations across codebases.
## Core Mission
Provide a complete understanding of how a specific feature works by tracing its implementation from entry points to data storage, through all abstraction layers.
## Analysis Approach
**1. Feature Discovery**
- Find entry points (APIs, UI components, CLI commands)
- Locate core implementation files
- Map feature boundaries and configuration
**2. Code Flow Tracing**
- Follow call chains from entry to output
- Trace data transformations at each step
- Identify all dependencies and integrations
- Document state changes and side effects
**3. Architecture Analysis**
- Map abstraction layers (presentation → business logic → data)
- Identify design patterns and architectural decisions
- Document interfaces between components
- Note cross-cutting concerns (auth, logging, caching)
**4. Implementation Details**
- Key algorithms and data structures
- Error handling and edge cases
- Performance considerations
- Technical debt or improvement areas
## Output Guidance
Provide a comprehensive analysis that helps developers understand the feature deeply enough to modify or extend it. Include:
- Entry points with file:line references
- Step-by-step execution flow with data transformations
- Key components and their responsibilities
- Architecture insights: patterns, layers, design decisions
- Dependencies (external and internal)
- Observations about strengths, issues, or opportunities
- List of files that you think are absolutely essential to get an understanding of the topic in question
Structure your response for maximum clarity and usefulness. Always include specific file paths and line numbers.

View File

@@ -0,0 +1,46 @@
---
name: code-reviewer
description: Reviews code for bugs, logic errors, security vulnerabilities, code quality issues, and adherence to project conventions, using confidence-based filtering to report only high-priority issues that truly matter
tools: Glob, Grep, LS, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, KillShell, BashOutput
model: sonnet
color: red
---
You are an expert code reviewer specializing in modern software development across multiple languages and frameworks. Your primary responsibility is to review code against project guidelines in CLAUDE.md with high precision to minimize false positives.
## Review Scope
By default, review unstaged changes from `git diff`. The user may specify different files or scope to review.
## Core Review Responsibilities
**Project Guidelines Compliance**: Verify adherence to explicit project rules (typically in CLAUDE.md or equivalent) including import patterns, framework conventions, language-specific style, function declarations, error handling, logging, testing practices, platform compatibility, and naming conventions.
**Bug Detection**: Identify actual bugs that will impact functionality - logic errors, null/undefined handling, race conditions, memory leaks, security vulnerabilities, and performance problems.
**Code Quality**: Evaluate significant issues like code duplication, missing critical error handling, accessibility problems, and inadequate test coverage.
## Confidence Scoring
Rate each potential issue on a scale from 0-100:
- **0**: Not confident at all. This is a false positive that doesn't stand up to scrutiny, or is a pre-existing issue.
- **25**: Somewhat confident. This might be a real issue, but may also be a false positive. If stylistic, it wasn't explicitly called out in project guidelines.
- **50**: Moderately confident. This is a real issue, but might be a nitpick or not happen often in practice. Not very important relative to the rest of the changes.
- **75**: Highly confident. Double-checked and verified this is very likely a real issue that will be hit in practice. The existing approach is insufficient. Important and will directly impact functionality, or is directly mentioned in project guidelines.
- **100**: Absolutely certain. Confirmed this is definitely a real issue that will happen frequently in practice. The evidence directly confirms this.
**Only report issues with confidence ≥ 80.** Focus on issues that truly matter - quality over quantity.
## Output Guidance
Start by clearly stating what you're reviewing. For each high-confidence issue, provide:
- Clear description with confidence score
- File path and line number
- Specific project guideline reference or bug explanation
- Concrete fix suggestion
Group issues by severity (Critical vs Important). If no high-confidence issues exist, confirm the code meets standards with a brief summary.
Structure your response for maximum actionability - developers should know exactly what to fix and why.

View File

@@ -0,0 +1,125 @@
---
description: Guided feature development with codebase understanding and architecture focus
argument-hint: Optional feature description
---
# Feature Development
You are helping a developer implement a new feature. Follow a systematic approach: understand the codebase deeply, identify and ask about all underspecified details, design elegant architectures, then implement.
## Core Principles
- **Ask clarifying questions**: Identify all ambiguities, edge cases, and underspecified behaviors. Ask specific, concrete questions rather than making assumptions. Wait for user answers before proceeding with implementation. Ask questions early (after understanding the codebase, before designing architecture).
- **Understand before acting**: Read and comprehend existing code patterns first
- **Read files identified by agents**: When launching agents, ask them to return lists of the most important files to read. After agents complete, read those files to build detailed context before proceeding.
- **Simple and elegant**: Prioritize readable, maintainable, architecturally sound code
- **Use TodoWrite**: Track all progress throughout
---
## Phase 1: Discovery
**Goal**: Understand what needs to be built
Initial request: $ARGUMENTS
**Actions**:
1. Create todo list with all phases
2. If feature unclear, ask user for:
- What problem are they solving?
- What should the feature do?
- Any constraints or requirements?
3. Summarize understanding and confirm with user
---
## Phase 2: Codebase Exploration
**Goal**: Understand relevant existing code and patterns at both high and low levels
**Actions**:
1. Launch 2-3 code-explorer agents in parallel. Each agent should:
- Trace through the code comprehensively and focus on getting a comprehensive understanding of abstractions, architecture and flow of control
- Target a different aspect of the codebase (e.g. similar features, high-level understanding, architectural understanding, user experience)
- Include a list of 5-10 key files to read
**Example agent prompts**:
- "Find features similar to [feature] and trace through their implementation comprehensively"
- "Map the architecture and abstractions for [feature area], tracing through the code comprehensively"
- "Analyze the current implementation of [existing feature/area], tracing through the code comprehensively"
- "Identify UI patterns, testing approaches, or extension points relevant to [feature]"
2. Once the agents return, read all files they identified to build deep understanding
3. Present comprehensive summary of findings and patterns discovered
---
## Phase 3: Clarifying Questions
**Goal**: Fill in gaps and resolve all ambiguities before designing
**CRITICAL**: This is one of the most important phases. DO NOT SKIP.
**Actions**:
1. Review the codebase findings and original feature request
2. Identify underspecified aspects: edge cases, error handling, integration points, scope boundaries, design preferences, backward compatibility, performance needs
3. **Present all questions to the user in a clear, organized list**
4. **Wait for answers before proceeding to architecture design**
If the user says "whatever you think is best", provide your recommendation and get explicit confirmation.
---
## Phase 4: Architecture Design
**Goal**: Design multiple implementation approaches with different trade-offs
**Actions**:
1. Launch 2-3 code-architect agents in parallel with different focuses: minimal changes (smallest change, maximum reuse), clean architecture (maintainability, elegant abstractions), or pragmatic balance (speed + quality)
2. Review all approaches and form your opinion on which fits best for this specific task (consider: small fix vs large feature, urgency, complexity, team context)
3. Present to user: brief summary of each approach, trade-offs comparison, **your recommendation with reasoning**, concrete implementation differences
4. **Ask user which approach they prefer**
---
## Phase 5: Implementation
**Goal**: Build the feature
**DO NOT START WITHOUT USER APPROVAL**
**Actions**:
1. Wait for explicit user approval
2. Read all relevant files identified in previous phases
3. Implement following chosen architecture
4. Follow codebase conventions strictly
5. Write clean, well-documented code
6. Update todos as you progress
---
## Phase 6: Quality Review
**Goal**: Ensure code is simple, DRY, elegant, easy to read, and functionally correct
**Actions**:
1. Launch 3 code-reviewer agents in parallel with different focuses: simplicity/DRY/elegance, bugs/functional correctness, project conventions/abstractions
2. Consolidate findings and identify highest severity issues that you recommend fixing
3. **Present findings to user and ask what they want to do** (fix now, fix later, or proceed as-is)
4. Address issues based on user decision
---
## Phase 7: Summary
**Goal**: Document what was accomplished
**Actions**:
1. Mark all todos complete
2. Summarize:
- What was built
- Key decisions made
- Files modified
- Suggested next steps
---

View File

@@ -0,0 +1,9 @@
{
"name": "frontend-design",
"version": "1.0.0",
"description": "Frontend design skill for UI/UX implementation",
"author": {
"name": "Prithvi Rajasekaran, Alexander Bricken",
"email": "prithvi@anthropic.com, alexander@anthropic.com"
}
}

View File

@@ -0,0 +1,31 @@
# Frontend Design Plugin
Generates distinctive, production-grade frontend interfaces that avoid generic AI aesthetics.
## What It Does
Claude automatically uses this skill for frontend work, creating production-ready code with:
- Bold aesthetic choices
- Distinctive typography and color palettes
- High-impact animations and visual details
- Context-aware implementation
## Usage
```
"Create a dashboard for a music streaming app"
"Build a landing page for an AI security startup"
"Design a settings panel with dark mode"
```
Claude will choose a clear aesthetic direction and implement production code with meticulous attention to detail.
## Learn More
See the [Frontend Aesthetics Cookbook](https://github.com/anthropics/claude-cookbooks/blob/main/coding/prompting_for_frontend_aesthetics.ipynb) for detailed guidance on prompting for high-quality frontend design.
## Authors
Prithvi Rajasekaran (prithvi@anthropic.com)
Alexander Bricken (alexander@anthropic.com)

View File

@@ -0,0 +1,42 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, or applications. Generates creative, polished code that avoids generic AI aesthetics.
license: Complete terms in LICENSE.txt
---
This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.
The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.
## Design Thinking
Before coding, understand the context and commit to a BOLD aesthetic direction:
- **Purpose**: What problem does this interface solve? Who uses it?
- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, etc. There are many flavors to choose from; use these for inspiration, but design a direction that is true to the interface's context.
- **Constraints**: Technical requirements (framework, performance, accessibility).
- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?
**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity.
Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail
## Frontend Aesthetics Guidelines
Focus on:
- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for unexpected, characterful choices that elevate the frontend's aesthetics. Pair a distinctive display font with a refined body font.
- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.
NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), cliched color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, and cookie-cutter design that lacks context-specific character.
Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well.
Remember: Claude is capable of extraordinary creative work. Don't hold back, show what can truly be created when thinking outside the box and committing fully to a distinctive vision.

View File

@@ -0,0 +1,9 @@
{
"name": "hookify",
"version": "0.1.0",
"description": "Easily create hooks to prevent unwanted behaviors by analyzing conversation patterns",
"author": {
"name": "Daisy Hollman",
"email": "daisy@anthropic.com"
}
}

30
plugins/hookify/.gitignore vendored Normal file
View File

@@ -0,0 +1,30 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
# Virtual environments
venv/
env/
ENV/
# IDE
.vscode/
.idea/
*.swp
*.swo
# OS
.DS_Store
Thumbs.db
# Testing
.pytest_cache/
.coverage
htmlcov/
# Local configuration (should not be committed)
.claude/*.local.md
.claude/*.local.json

340
plugins/hookify/README.md Normal file
View File

@@ -0,0 +1,340 @@
# Hookify Plugin
Easily create custom hooks to prevent unwanted behaviors, either by analyzing conversation patterns or from explicit instructions.
## Overview
The hookify plugin makes it simple to create hooks without editing complex `hooks.json` files. Instead, you create lightweight markdown configuration files that define patterns to watch for and messages to show when those patterns match.
**Key features:**
- 🎯 Analyze conversations to find unwanted behaviors automatically
- 📝 Simple markdown configuration files with YAML frontmatter
- 🔍 Regex pattern matching for powerful rules
- 🚀 No coding required - just describe the behavior
- 🔄 Easy enable/disable without restarting
## Quick Start
### 1. Create Your First Rule
```bash
/hookify Warn me when I use rm -rf commands
```
This analyzes your request and creates `.claude/hookify.warn-rm.local.md`.
### 2. Test It Immediately
**No restart needed!** Rules take effect on the very next tool use.
Ask Claude to run a command that should trigger the rule:
```
Run rm -rf /tmp/test
```
You should see the warning message immediately!
## Usage
### Main Command: /hookify
**With arguments:**
```
/hookify Don't use console.log in TypeScript files
```
Creates a rule from your explicit instructions.
**Without arguments:**
```
/hookify
```
Analyzes recent conversation to find behaviors you've corrected or been frustrated by.
### Helper Commands
**List all rules:**
```
/hookify:list
```
**Configure rules interactively:**
```
/hookify:configure
```
Enable/disable existing rules through an interactive interface.
**Get help:**
```
/hookify:help
```
## Rule Configuration Format
### Simple Rule (Single Pattern)
`.claude/hookify.dangerous-rm.local.md`:
```markdown
---
name: block-dangerous-rm
enabled: true
event: bash
pattern: rm\s+-rf
action: block
---
⚠️ **Dangerous rm command detected!**
This command could delete important files. Please:
- Verify the path is correct
- Consider using a safer approach
- Make sure you have backups
```
**Action field:**
- `warn`: Shows warning but allows operation (default)
- `block`: Prevents operation from executing (PreToolUse) or stops session (Stop events)
### Advanced Rule (Multiple Conditions)
`.claude/hookify.sensitive-files.local.md`:
```markdown
---
name: warn-sensitive-files
enabled: true
event: file
action: warn
conditions:
- field: file_path
operator: regex_match
pattern: \.env$|credentials|secrets
- field: new_text
operator: contains
pattern: KEY
---
🔐 **Sensitive file edit detected!**
Ensure credentials are not hardcoded and file is in .gitignore.
```
**All conditions must match** for the rule to trigger.
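Conceptually, evaluation works like this Python sketch (illustrative only, not the plugin's actual source; fields and operators follow the reference sections below):
```python
import re

def rule_matches(conditions, tool_input):
    """A rule fires only if every condition matches."""
    for cond in conditions:
        value = str(tool_input.get(cond["field"], ""))
        op, pattern = cond["operator"], cond["pattern"]
        if op == "regex_match" and not re.search(pattern, value):
            return False
        if op == "contains" and pattern not in value:
            return False
        if op == "not_contains" and pattern in value:
            return False
        # equals, starts_with, and ends_with follow the same shape
    return True
```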
## Event Types
- **`bash`**: Triggers on Bash tool commands
- **`file`**: Triggers on Edit, Write, MultiEdit tools
- **`stop`**: Triggers when Claude wants to stop (for completion checks)
- **`prompt`**: Triggers on user prompt submission
- **`all`**: Triggers on all events
## Pattern Syntax
Use Python regex syntax:
| Pattern | Matches | Example |
|---------|---------|---------|
| `rm\s+-rf` | rm -rf | rm -rf /tmp |
| `console\.log\(` | console.log( | console.log("test") |
| `(eval\|exec)\(` | eval( or exec( | eval("code") |
| `\.env$` | files ending in .env | .env, .env.local |
| `chmod\s+777` | chmod 777 | chmod 777 file.txt |
**Tips:**
- Use `\s` for whitespace
- Escape special chars: `\.` for literal dot
- Use `|` for OR: `(foo|bar)`
- Use `.*` to match anything
- Set `action: block` for dangerous operations
- Set `action: warn` (or omit) for informational warnings
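Before saving a rule, it helps to sanity-check the pattern against samples it should and should not match:
```python
import re

pattern = r"rm\s+-rf"
assert re.search(pattern, "rm -rf /tmp/test")     # should match
assert not re.search(pattern, "rm notes.txt")     # should not match
```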
## Examples
### Example 1: Block Dangerous Commands
```markdown
---
name: block-destructive-ops
enabled: true
event: bash
pattern: rm\s+-rf|dd\s+if=|mkfs|format
action: block
---
🛑 **Destructive operation detected!**
This command can cause data loss. Operation blocked for safety.
Please verify the exact path and use a safer approach.
```
**This rule blocks the operation** - Claude will not be allowed to execute these commands.
### Example 2: Warn About Debug Code
```markdown
---
name: warn-debug-code
enabled: true
event: file
pattern: console\.log\(|debugger;|print\(
action: warn
---
🐛 **Debug code detected**
Remember to remove debugging statements before committing.
```
**This rule warns but allows** - Claude sees the message but can still proceed.
### Example 3: Require Tests Before Stopping
```markdown
---
name: require-tests-run
enabled: false
event: stop
action: block
conditions:
- field: transcript
operator: not_contains
pattern: npm test|pytest|cargo test
---
**Tests not detected in transcript!**
Before stopping, please run tests to verify your changes work correctly.
```
**This blocks Claude from stopping** if no test commands appear in the session transcript. Enable only when you want strict enforcement.
## Advanced Usage
### Multiple Conditions
Check multiple fields simultaneously:
```markdown
---
name: api-key-in-typescript
enabled: true
event: file
conditions:
- field: file_path
operator: regex_match
pattern: \.tsx?$
- field: new_text
operator: regex_match
pattern: (API_KEY|SECRET|TOKEN)\s*=\s*["']
---
🔐 **Hardcoded credential in TypeScript!**
Use environment variables instead of hardcoded values.
```
### Operators Reference
- `regex_match`: Pattern must match (most common)
- `contains`: String must contain pattern
- `equals`: Exact string match
- `not_contains`: String must NOT contain pattern
- `starts_with`: String starts with pattern
- `ends_with`: String ends with pattern
### Field Reference
**For bash events:**
- `command`: The bash command string
**For file events:**
- `file_path`: Path to file being edited
- `new_text`: New content being added (Edit, Write)
- `old_text`: Old content being replaced (Edit only)
- `content`: File content (Write only)
**For prompt events:**
- `user_prompt`: The user's submitted prompt text
**For stop events:**
- `transcript`: The full session transcript (as matched in the require-tests example above)
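As an illustration of the `user_prompt` field above (this rule is hypothetical, not shipped with the plugin):
```markdown
---
name: remind-tests-on-deploy
enabled: true
event: prompt
action: warn
conditions:
  - field: user_prompt
    operator: contains
    pattern: deploy
---
🚀 **Deploy mentioned** - remember to run the test suite first.
```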
## Management
### Enable/Disable Rules
**Temporarily disable:**
Edit the `.local.md` file and set `enabled: false`
**Re-enable:**
Set `enabled: true`
**Or use interactive tool:**
```
/hookify:configure
```
### Delete Rules
Simply delete the `.local.md` file:
```bash
rm .claude/hookify.my-rule.local.md
```
### View All Rules
```
/hookify:list
```
## Installation
This plugin is part of the Claude Code Marketplace. It should be auto-discovered when the marketplace is installed.
**Manual testing:**
```bash
claude --plugin-dir /path/to/hookify
```
## Requirements
- Python 3.7+
- No external dependencies (uses stdlib only)
## Troubleshooting
**Rule not triggering:**
1. Check rule file exists in `.claude/` directory (in project root, not plugin directory)
2. Verify `enabled: true` in frontmatter
3. Test regex pattern separately
4. Rules should work immediately - no restart needed
5. Try `/hookify:list` to see if rule is loaded
**Import errors:**
- Ensure Python 3 is available: `python3 --version`
- Check hookify plugin is installed
**Pattern not matching:**
- Test regex: `python3 -c "import re; print(re.search(r'pattern', 'text'))"`
- Use unquoted patterns in YAML to avoid escaping issues
- Start simple, then add complexity
**Hook seems slow:**
- Keep patterns simple (avoid complex regex)
- Use specific event types (bash, file) instead of "all"
- Limit number of active rules
## Contributing
Found a useful rule pattern? Consider sharing example files via PR!
## Future Enhancements
- Severity levels (error/warning/info distinctions)
- Rule templates library
- Interactive pattern builder
- Hook testing utilities
- JSON format support (in addition to markdown)
## License
MIT License

View File

@@ -0,0 +1,176 @@
---
name: conversation-analyzer
description: Use this agent when analyzing conversation transcripts to find behaviors worth preventing with hooks. Examples: <example>Context: User is running /hookify command without arguments\nuser: "/hookify"\nassistant: "I'll analyze the conversation to find behaviors you want to prevent"\n<commentary>The /hookify command without arguments triggers conversation analysis to find unwanted behaviors.</commentary></example><example>Context: User wants to create hooks from recent frustrations\nuser: "Can you look back at this conversation and help me create hooks for the mistakes you made?"\nassistant: "I'll use the conversation-analyzer agent to identify the issues and suggest hooks."\n<commentary>User explicitly asks to analyze conversation for mistakes that should be prevented.</commentary></example>
model: inherit
color: yellow
tools: ["Read", "Grep"]
---
You are a conversation analysis specialist that identifies problematic behaviors in Claude Code sessions that could be prevented with hooks.
**Your Core Responsibilities:**
1. Read and analyze user messages to find frustration signals
2. Identify specific tool usage patterns that caused issues
3. Extract actionable patterns that can be matched with regex
4. Categorize issues by severity and type
5. Provide structured findings for hook rule generation
**Analysis Process:**
### 1. Search for User Messages Indicating Issues
Read through user messages in reverse chronological order (most recent first). Look for:
**Explicit correction requests:**
- "Don't use X"
- "Stop doing Y"
- "Please don't Z"
- "Avoid..."
- "Never..."
**Frustrated reactions:**
- "Why did you do X?"
- "I didn't ask for that"
- "That's not what I meant"
- "That was wrong"
**Corrections and reversions:**
- User reverting changes Claude made
- User fixing issues Claude created
- User providing step-by-step corrections
**Repeated issues:**
- Same type of mistake multiple times
- User having to remind multiple times
- Pattern of similar problems
### 2. Identify Tool Usage Patterns
For each issue, determine:
- **Which tool**: Bash, Edit, Write, MultiEdit
- **What action**: Specific command or code pattern
- **When it happened**: During what task/phase
- **Why problematic**: User's stated reason or implicit concern
**Extract concrete examples:**
- For Bash: Actual command that was problematic
- For Edit/Write: Code pattern that was added
- For Stop: What was missing before stopping
### 3. Create Regex Patterns
Convert behaviors into matchable patterns:
**Bash command patterns:**
- `rm\s+-rf` for dangerous deletes
- `sudo\s+` for privilege escalation
- `chmod\s+777` for permission issues
**Code patterns (Edit/Write):**
- `console\.log\(` for debug logging
- `eval\(|new Function\(` for dangerous eval
- `innerHTML\s*=` for XSS risks
**File path patterns:**
- `\.env$` for environment files
- `/node_modules/` for dependency files
- `dist/|build/` for generated files
### 4. Categorize Severity
**High severity (should block in future):**
- Dangerous commands (rm -rf, chmod 777)
- Security issues (hardcoded secrets, eval)
- Data loss risks
**Medium severity (warn):**
- Style violations (console.log in production)
- Wrong file types (editing generated files)
- Missing best practices
**Low severity (optional):**
- Preferences (coding style)
- Non-critical patterns
### 5. Output Format
Return your findings as structured text in this format:
```
## Hookify Analysis Results
### Issue 1: Dangerous rm Commands
**Severity**: High
**Tool**: Bash
**Pattern**: `rm\s+-rf`
**Occurrences**: 3 times
**Context**: Used rm -rf on /tmp directories without verification
**User Reaction**: "Please be more careful with rm commands"
**Suggested Rule:**
- Name: warn-dangerous-rm
- Event: bash
- Pattern: rm\s+-rf
- Message: "Dangerous rm command detected. Verify path before proceeding."
---
### Issue 2: Console.log in TypeScript
**Severity**: Medium
**Tool**: Edit/Write
**Pattern**: `console\.log\(`
**Occurrences**: 2 times
**Context**: Added console.log statements to production TypeScript files
**User Reaction**: "Don't use console.log in production code"
**Suggested Rule:**
- Name: warn-console-log
- Event: file
- Pattern: console\.log\(
- Message: "Console.log detected. Use proper logging library instead."
---
[Continue for each issue found...]
## Summary
Found {N} behaviors worth preventing:
- {N} high severity
- {N} medium severity
- {N} low severity
Recommend creating rules for high and medium severity issues.
```
**Quality Standards:**
- Be specific about patterns (don't be overly broad)
- Include actual examples from conversation
- Explain why each issue matters
- Provide ready-to-use regex patterns
- Don't false-positive on discussions about what NOT to do
**Edge Cases:**
**User discussing hypotheticals:**
- "What would happen if I used rm -rf?"
- Don't treat as problematic behavior
**Teaching moments:**
- "Here's what you shouldn't do: ..."
- Context indicates explanation, not actual problem
**One-time accidents:**
- Single occurrence, already fixed
- Mention but mark as low priority
**Subjective preferences:**
- "I prefer X over Y"
- Mark as low severity, let user decide
**Return Results:**
Provide your analysis in the structured format above. The /hookify command will use this to:
1. Present findings to user
2. Ask which rules to create
3. Generate .local.md configuration files
4. Save rules to .claude directory

View File

@@ -0,0 +1,128 @@
---
description: Enable or disable hookify rules interactively
allowed-tools: ["Glob", "Read", "Edit", "AskUserQuestion", "Skill"]
---
# Configure Hookify Rules
**Load hookify:writing-rules skill first** to understand rule format.
Enable or disable existing hookify rules using an interactive interface.
## Steps
### 1. Find Existing Rules
Use Glob tool to find all hookify rule files:
```
pattern: ".claude/hookify.*.local.md"
```
If no rules found, inform user:
```
No hookify rules configured yet. Use `/hookify` to create your first rule.
```
### 2. Read Current State
For each rule file:
- Read the file
- Extract `name` and `enabled` fields from frontmatter
- Build list of rules with current state
### 3. Ask User Which Rules to Toggle
Use AskUserQuestion to let user select rules:
```json
{
"questions": [
{
"question": "Which rules would you like to enable or disable?",
"header": "Configure",
"multiSelect": true,
"options": [
{
"label": "warn-dangerous-rm (currently enabled)",
"description": "Warns about rm -rf commands"
},
{
"label": "warn-console-log (currently disabled)",
"description": "Warns about console.log in code"
},
{
"label": "require-tests (currently enabled)",
"description": "Requires tests before stopping"
}
]
}
]
}
```
**Option format:**
- Label: `{rule-name} (currently {enabled|disabled})`
- Description: Brief description from rule's message or pattern
### 4. Parse User Selection
For each selected rule:
- Determine current state from label (enabled/disabled)
- Toggle state: enabled → disabled, disabled → enabled
### 5. Update Rule Files
For each rule to toggle:
- Use Read tool to read current content
- Use Edit tool to change `enabled: true` to `enabled: false` (or vice versa)
- Handle both with and without quotes
**Edit pattern for enabling:**
```
old_string: "enabled: false"
new_string: "enabled: true"
```
**Edit pattern for disabling:**
```
old_string: "enabled: true"
new_string: "enabled: false"
```
### 6. Confirm Changes
Show user what was changed:
```
## Hookify Rules Updated
**Enabled:**
- warn-console-log
**Disabled:**
- warn-dangerous-rm
**Unchanged:**
- require-tests
Changes apply immediately - no restart needed
```
## Important Notes
- Changes take effect immediately on next tool use
- You can also manually edit .claude/hookify.*.local.md files
- To permanently remove a rule, delete its .local.md file
- Use `/hookify:list` to see all configured rules
## Edge Cases
**No rules to configure:**
- Show message about using `/hookify` to create rules first
**User selects no rules:**
- Inform that no changes were made
**File read/write errors:**
- Inform user of specific error
- Suggest manual editing as fallback

View File

@@ -0,0 +1,175 @@
---
description: Get help with the hookify plugin
allowed-tools: ["Read"]
---
# Hookify Plugin Help
Explain how the hookify plugin works and how to use it.
## Overview
The hookify plugin makes it easy to create custom hooks that prevent unwanted behaviors. Instead of editing `hooks.json` files, users create simple markdown configuration files that define patterns to watch for.
## How It Works
### 1. Hook System
Hookify installs generic hooks that run on these events:
- **PreToolUse**: Before any tool executes (Bash, Edit, Write, etc.)
- **PostToolUse**: After a tool executes
- **Stop**: When Claude wants to stop working
- **UserPromptSubmit**: When user submits a prompt
These hooks read configuration files from `.claude/hookify.*.local.md` and check if any rules match the current operation.
### 2. Configuration Files
Users create rules in `.claude/hookify.{rule-name}.local.md` files:
```markdown
---
name: warn-dangerous-rm
enabled: true
event: bash
pattern: rm\s+-rf
---
⚠️ **Dangerous rm command detected!**
This command could delete important files. Please verify the path.
```
**Key fields:**
- `name`: Unique identifier for the rule
- `enabled`: true/false to activate/deactivate
- `event`: bash, file, stop, prompt, or all
- `pattern`: Regex pattern to match
The message body is what Claude sees when the rule triggers.
### 3. Creating Rules
**Option A: Use /hookify command**
```
/hookify Don't use console.log in production files
```
This analyzes your request and creates the appropriate rule file.
**Option B: Create manually**
Create `.claude/hookify.my-rule.local.md` with the format above.
**Option C: Analyze conversation**
```
/hookify
```
Without arguments, hookify analyzes recent conversation to find behaviors you want to prevent.
## Available Commands
- **`/hookify`** - Create hooks from conversation analysis or explicit instructions
- **`/hookify:help`** - Show this help (what you're reading now)
- **`/hookify:list`** - List all configured hooks
- **`/hookify:configure`** - Enable/disable existing hooks interactively
## Example Use Cases
**Prevent dangerous commands:**
```markdown
---
name: block-chmod-777
enabled: true
event: bash
pattern: chmod\s+777
---
Don't use chmod 777 - it's a security risk. Use specific permissions instead.
```
**Warn about debugging code:**
```markdown
---
name: warn-console-log
enabled: true
event: file
pattern: console\.log\(
---
Console.log detected. Remember to remove debug logging before committing.
```
**Require tests before stopping:**
```markdown
---
name: require-tests
enabled: true
event: stop
pattern: .*
---
Did you run tests before finishing? Make sure `npm test` or equivalent was executed.
```
## Pattern Syntax
Use Python regex syntax:
- `\s` - whitespace
- `\.` - literal dot
- `|` - OR
- `+` - one or more
- `*` - zero or more
- `\d` - digit
- `[abc]` - character class
**Examples:**
- `rm\s+-rf` - matches "rm -rf"
- `console\.log\(` - matches "console.log("
- `(eval|exec)\(` - matches "eval(" or "exec("
- `\.env$` - matches files ending in .env
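To check a pattern against sample text before putting it in a rule, the same matching can be reproduced with Python's `re` module (which hookify uses internally):
```python
import re

# Each example pattern above, checked against a string it should match.
tests = [
    (r"rm\s+-rf",       "rm  -rf /tmp/build"),
    (r"console\.log\(", "console.log('debug')"),
    (r"(eval|exec)\(",  "eval(user_input)"),
    (r"\.env$",         "config/.env"),
]
for pattern, text in tests:
    print(f"{pattern!r} matches {text!r}: {bool(re.search(pattern, text))}")
```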
## Important Notes
**No Restart Needed**: Hookify rules (`.local.md` files) take effect immediately on the next tool use. The hookify hooks are already loaded and read your rules dynamically.
**Block or Warn**: Rules can either `block` operations (prevent execution) or `warn` (show message but allow). Set `action: block` or `action: warn` in the rule's frontmatter.
**Rule Files**: Keep rules in `.claude/hookify.*.local.md` - they should be git-ignored (add to .gitignore if needed).
**Disable Rules**: Set `enabled: false` in frontmatter or delete the file.
## Troubleshooting
**Hook not triggering:**
- Check rule file is in `.claude/` directory
- Verify `enabled: true` in frontmatter
- Confirm pattern is valid regex
- Test pattern: `python3 -c "import re; print(re.search('your_pattern', 'test_text'))"`
- Rules take effect immediately - no restart needed
**Import errors:**
- Check Python 3 is available: `python3 --version`
- Verify hookify plugin is installed correctly
**Pattern not matching:**
- Test regex separately
- Check for escaping issues (use unquoted patterns in YAML)
- Try simpler pattern first, then refine
## Getting Started
1. Create your first rule:
```
/hookify Warn me when I try to use rm -rf
```
2. Try to trigger it:
- Ask Claude to run `rm -rf /tmp/test`
- You should see the warning
3. Refine the rule by editing `.claude/hookify.warn-rm.local.md`
4. Create more rules as you encounter unwanted behaviors
For more examples, check the `${CLAUDE_PLUGIN_ROOT}/examples/` directory.

View File

@@ -0,0 +1,231 @@
---
description: Create hooks to prevent unwanted behaviors from conversation analysis or explicit instructions
argument-hint: Optional specific behavior to address
allowed-tools: ["Read", "Write", "AskUserQuestion", "Task", "Grep", "TodoWrite", "Skill"]
---
# Hookify - Create Hooks from Unwanted Behaviors
**FIRST: Load the hookify:writing-rules skill** using the Skill tool to understand rule file format and syntax.
Create hook rules to prevent problematic behaviors by analyzing the conversation or from explicit user instructions.
## Your Task
You will help the user create hookify rules to prevent unwanted behaviors. Follow these steps:
### Step 1: Gather Behavior Information
**If $ARGUMENTS is provided:**
- User has given specific instructions: `$ARGUMENTS`
- Still analyze recent conversation (last 10-15 user messages) for additional context
- Look for examples of the behavior happening
**If $ARGUMENTS is empty:**
- Launch the conversation-analyzer agent to find problematic behaviors
- Agent will scan user prompts for frustration signals
- Agent will return structured findings
**To analyze conversation:**
Use the Task tool to launch conversation-analyzer agent:
```
{
"subagent_type": "general-purpose",
"description": "Analyze conversation for unwanted behaviors",
"prompt": "You are analyzing a Claude Code conversation to find behaviors the user wants to prevent.
Read user messages in the current conversation and identify:
1. Explicit requests to avoid something (\"don't do X\", \"stop doing Y\")
2. Corrections or reversions (user fixing Claude's actions)
3. Frustrated reactions (\"why did you do X?\", \"I didn't ask for that\")
4. Repeated issues (same problem multiple times)
For each issue found, extract:
- What tool was used (Bash, Edit, Write, etc.)
- Specific pattern or command
- Why it was problematic
- User's stated reason
Return findings as a structured list with:
- category: Type of issue
- tool: Which tool was involved
- pattern: Regex or literal pattern to match
- context: What happened
- severity: high/medium/low
Focus on the most recent issues (last 20-30 messages). Don't go back further unless explicitly asked."
}
```
### Step 2: Present Findings to User
After gathering behaviors (from arguments or agent), present to user using AskUserQuestion:
**Question 1: Which behaviors to hookify?**
- Header: "Create Rules"
- multiSelect: true
- Options: List each detected behavior (max 4)
- Label: Short description (e.g., "Block rm -rf")
- Description: Why it's problematic
**Question 2: For each selected behavior, ask about action:**
- "Should this block the operation or just warn?"
- Options:
- "Just warn" (action: warn - shows message but allows)
- "Block operation" (action: block - prevents execution)
**Question 3: Ask for example patterns:**
- "What patterns should trigger this rule?"
- Show detected patterns
- Allow user to refine or add more
### Step 3: Generate Rule Files
For each confirmed behavior, create a `.claude/hookify.{rule-name}.local.md` file:
**Rule naming convention:**
- Use kebab-case
- Be descriptive: `block-dangerous-rm`, `warn-console-log`, `require-tests-before-stop`
- Start with action verb: block, warn, prevent, require
**File format:**
```markdown
---
name: {rule-name}
enabled: true
event: {bash|file|stop|prompt|all}
pattern: {regex pattern}
action: {warn|block}
---
{Message to show Claude when rule triggers}
```
**Action values:**
- `warn`: Show message but allow operation (default)
- `block`: Prevent operation or stop session
**For more complex rules (multiple conditions):**
```markdown
---
name: {rule-name}
enabled: true
event: file
conditions:
- field: file_path
operator: regex_match
pattern: \.env$
- field: new_text
operator: contains
pattern: API_KEY
---
{Warning message}
```
### Step 4: Create Files and Confirm
**IMPORTANT**: Rule files must be created in the current working directory's `.claude/` folder, NOT the plugin directory.
Use the current working directory (where Claude Code was started) as the base path.
1. Check if `.claude/` directory exists in current working directory
- If not, create it first with: `mkdir -p .claude`
2. Use Write tool to create each `.claude/hookify.{name}.local.md` file
- Use relative path from current working directory: `.claude/hookify.{name}.local.md`
- The path should resolve to the project's .claude directory, not the plugin's
3. Show the user what was created:
```
Created 3 hookify rules:
- .claude/hookify.dangerous-rm.local.md
- .claude/hookify.console-log.local.md
- .claude/hookify.sensitive-files.local.md
These rules will trigger on:
- dangerous-rm: Bash commands matching "rm -rf"
- console-log: Edits adding console.log statements
- sensitive-files: Edits to .env or credentials files
```
4. Verify files were created in the correct location by listing them
5. Inform user: **"Rules are active immediately - no restart needed!"**
The hookify hooks are already loaded and will read your new rules on the next tool use.
## Event Types Reference
- **bash**: Matches Bash tool commands
- **file**: Matches Edit, Write, MultiEdit tools
- **stop**: Matches when agent wants to stop (use for completion checks)
- **prompt**: Matches when user submits prompts
- **all**: Matches all events
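For reference, the hook scripts map the tool to an event roughly like this before loading rules (a simplified sketch of the executors' logic):
```python
from typing import Optional

def event_for_tool(tool_name: str) -> Optional[str]:
    # Mirrors the mapping in the PreToolUse/PostToolUse hook scripts.
    if tool_name == 'Bash':
        return 'bash'
    if tool_name in ('Edit', 'Write', 'MultiEdit'):
        return 'file'
    return None  # no event filter; rules then match on their own conditions
```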
## Pattern Writing Tips
**Bash patterns:**
- Match dangerous commands: `rm\s+-rf|chmod\s+777|dd\s+if=`
- Match specific tools: `npm\s+install\s+|pip\s+install`
**File patterns:**
- Match code patterns: `console\.log\(|eval\(|innerHTML\s*=`
- Match file paths: `\.env$|\.git/|node_modules/`
**Stop patterns:**
- Check for missing steps by matching the transcript (e.g. `field: transcript` with `operator: not_contains`) or other completion criteria
## Example Workflow
**User says**: "/hookify Don't use rm -rf without asking me first"
**Your response**:
1. Analyze: User wants to prevent rm -rf commands
2. Ask: "Should I block this command or just warn you?"
3. User selects: "Just warn"
4. Create `.claude/hookify.dangerous-rm.local.md`:
```markdown
---
name: warn-dangerous-rm
enabled: true
event: bash
pattern: rm\s+-rf
---
⚠️ **Dangerous rm command detected**
You requested to be warned before using rm -rf.
Please verify the path is correct.
```
5. Confirm: "Created hookify rule. It's active immediately - try triggering it!"
## Important Notes
- **No restart needed**: Rules take effect immediately on the next tool use
- **File location**: Create files in project's `.claude/` directory (current working directory), NOT the plugin's .claude/
- **Regex syntax**: Use Python regex syntax (unquoted YAML patterns need no extra backslash escaping)
- **Action types**: Rules can `warn` (default) or `block` operations
- **Testing**: Test rules immediately after creating them
## Troubleshooting
**If rule file creation fails:**
1. Check the current working directory with `pwd`
2. Ensure `.claude/` directory exists (create with mkdir if needed)
3. Use absolute path if needed: `{cwd}/.claude/hookify.{name}.local.md`
4. Verify file was created with Glob or ls
**If rule doesn't trigger after creation:**
1. Verify file is in project `.claude/` not plugin `.claude/`
2. Check file with Read tool to ensure pattern is correct
3. Test pattern with: `python3 -c "import re; print(re.search(r'pattern', 'test text'))"`
4. Verify `enabled: true` in frontmatter
5. Remember: Rules work immediately, no restart needed
**If blocking seems too strict:**
1. Change `action: block` to `action: warn` in the rule file
2. Or adjust the pattern to be more specific
3. Changes take effect on next tool use
Use TodoWrite to track your progress through the steps.

View File

@@ -0,0 +1,82 @@
---
description: List all configured hookify rules
allowed-tools: ["Glob", "Read", "Skill"]
---
# List Hookify Rules
**Load hookify:writing-rules skill first** to understand rule format.
Show all configured hookify rules in the project.
## Steps
1. Use Glob tool to find all hookify rule files:
```
pattern: ".claude/hookify.*.local.md"
```
2. For each file found:
- Use Read tool to read the file
- Extract frontmatter fields: name, enabled, event, pattern (a minimal parsing sketch follows these steps)
- Extract message preview (first 100 chars)
3. Present results in a table:
```
## Configured Hookify Rules
| Name | Enabled | Event | Pattern | File |
|------|---------|-------|---------|------|
| warn-dangerous-rm | ✅ Yes | bash | rm\s+-rf | hookify.dangerous-rm.local.md |
| warn-console-log | ✅ Yes | file | console\.log\( | hookify.console-log.local.md |
| check-tests | ❌ No | stop | .* | hookify.require-tests.local.md |
**Total**: 3 rules (2 enabled, 1 disabled)
```
4. For each rule, show a brief preview:
```
### warn-dangerous-rm
**Event**: bash
**Pattern**: `rm\s+-rf`
**Message**: "⚠️ **Dangerous rm command detected!** This command could delete..."
**Status**: ✅ Active
**File**: .claude/hookify.dangerous-rm.local.md
```
5. Add helpful footer:
```
---
To modify a rule: Edit the .local.md file directly
To disable a rule: Set `enabled: false` in frontmatter
To enable a rule: Set `enabled: true` in frontmatter
To delete a rule: Remove the .local.md file
To create a rule: Use `/hookify` command
**Remember**: Changes take effect immediately - no restart needed
```
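For step 2, a minimal frontmatter-extraction sketch (the plugin's `config_loader.py` implements the fuller parser, including condition lists):
```python
def rule_summary(text: str):
    """Return (frontmatter_fields, message_preview) for a hookify rule file."""
    parts = text.split('---', 2)
    if len(parts) < 3:
        return {}, ''
    fields = {}
    for line in parts[1].splitlines():
        stripped = line.strip()
        # Only top-level "key: value" pairs; skip comments and list items.
        if ':' in line and not line.startswith(' ') and not stripped.startswith(('#', '-')):
            key, value = line.split(':', 1)
            fields[key.strip()] = value.strip().strip('"').strip("'")
    return fields, parts[2].strip()[:100]
```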
## If No Rules Found
If no hookify rules exist:
```
## No Hookify Rules Configured
You haven't created any hookify rules yet.
To get started:
1. Use `/hookify` to analyze conversation and create rules
2. Or manually create `.claude/hookify.my-rule.local.md` files
3. See `/hookify:help` for documentation
Example: `/hookify Warn me when I use console.log`
Check `${CLAUDE_PLUGIN_ROOT}/examples/` for example rule files.
```

View File

View File

@@ -0,0 +1,297 @@
#!/usr/bin/env python3
"""Configuration loader for hookify plugin.
Loads and parses .claude/hookify.*.local.md files.
"""
import os
import sys
import glob
import re
from typing import List, Optional, Dict, Any
from dataclasses import dataclass, field
@dataclass
class Condition:
"""A single condition for matching."""
field: str # "command", "new_text", "old_text", "file_path", etc.
operator: str # "regex_match", "contains", "equals", etc.
pattern: str # Pattern to match
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'Condition':
"""Create Condition from dict."""
return cls(
field=data.get('field', ''),
operator=data.get('operator', 'regex_match'),
pattern=data.get('pattern', '')
)
@dataclass
class Rule:
"""A hookify rule."""
name: str
enabled: bool
event: str # "bash", "file", "stop", "all", etc.
pattern: Optional[str] = None # Simple pattern (legacy)
conditions: List[Condition] = field(default_factory=list)
action: str = "warn" # "warn" or "block" (future)
tool_matcher: Optional[str] = None # Override tool matching
message: str = "" # Message body from markdown
@classmethod
def from_dict(cls, frontmatter: Dict[str, Any], message: str) -> 'Rule':
"""Create Rule from frontmatter dict and message body."""
# Handle both simple pattern and complex conditions
conditions = []
# New style: explicit conditions list
if 'conditions' in frontmatter:
cond_list = frontmatter['conditions']
if isinstance(cond_list, list):
conditions = [Condition.from_dict(c) for c in cond_list]
# Legacy style: simple pattern field
simple_pattern = frontmatter.get('pattern')
if simple_pattern and not conditions:
# Convert simple pattern to condition
# Infer field from event
event = frontmatter.get('event', 'all')
if event == 'bash':
field = 'command'
elif event == 'file':
field = 'new_text'
else:
field = 'content'
conditions = [Condition(
field=field,
operator='regex_match',
pattern=simple_pattern
)]
return cls(
name=frontmatter.get('name', 'unnamed'),
enabled=frontmatter.get('enabled', True),
event=frontmatter.get('event', 'all'),
pattern=simple_pattern,
conditions=conditions,
action=frontmatter.get('action', 'warn'),
tool_matcher=frontmatter.get('tool_matcher'),
message=message.strip()
)
def extract_frontmatter(content: str) -> tuple[Dict[str, Any], str]:
"""Extract YAML frontmatter and message body from markdown.
Returns (frontmatter_dict, message_body).
Supports multi-line dictionary items in lists by preserving indentation.
"""
if not content.startswith('---'):
return {}, content
# Split on --- markers
parts = content.split('---', 2)
if len(parts) < 3:
return {}, content
frontmatter_text = parts[1]
message = parts[2].strip()
# Simple YAML parser that handles indented list items
frontmatter = {}
lines = frontmatter_text.split('\n')
current_key = None
current_list = []
current_dict = {}
in_list = False
in_dict_item = False
for line in lines:
# Skip empty lines and comments
stripped = line.strip()
if not stripped or stripped.startswith('#'):
continue
# Check indentation level
indent = len(line) - len(line.lstrip())
        # Top-level key (no indentation)
if indent == 0 and ':' in line and not line.strip().startswith('-'):
# Save previous list/dict if any
if in_list and current_key:
if in_dict_item and current_dict:
current_list.append(current_dict)
current_dict = {}
frontmatter[current_key] = current_list
in_list = False
in_dict_item = False
current_list = []
key, value = line.split(':', 1)
key = key.strip()
value = value.strip()
if not value:
# Empty value - list or nested structure follows
current_key = key
in_list = True
current_list = []
else:
# Simple key-value pair
value = value.strip('"').strip("'")
if value.lower() == 'true':
value = True
elif value.lower() == 'false':
value = False
frontmatter[key] = value
# List item (starts with -)
elif stripped.startswith('-') and in_list:
# Save previous dict item if any
if in_dict_item and current_dict:
current_list.append(current_dict)
current_dict = {}
item_text = stripped[1:].strip()
# Check if this is an inline dict (key: value on same line)
if ':' in item_text and ',' in item_text:
# Inline comma-separated dict: "- field: command, operator: regex_match"
item_dict = {}
for part in item_text.split(','):
if ':' in part:
k, v = part.split(':', 1)
item_dict[k.strip()] = v.strip().strip('"').strip("'")
current_list.append(item_dict)
in_dict_item = False
elif ':' in item_text:
# Start of multi-line dict item: "- field: command"
in_dict_item = True
k, v = item_text.split(':', 1)
current_dict = {k.strip(): v.strip().strip('"').strip("'")}
else:
# Simple list item
current_list.append(item_text.strip('"').strip("'"))
in_dict_item = False
# Continuation of dict item (indented under list item)
elif indent > 2 and in_dict_item and ':' in line:
# This is a field of the current dict item
k, v = stripped.split(':', 1)
current_dict[k.strip()] = v.strip().strip('"').strip("'")
# Save final list/dict if any
if in_list and current_key:
if in_dict_item and current_dict:
current_list.append(current_dict)
frontmatter[current_key] = current_list
return frontmatter, message
def load_rules(event: Optional[str] = None) -> List[Rule]:
"""Load all hookify rules from .claude directory.
Args:
event: Optional event filter ("bash", "file", "stop", etc.)
Returns:
List of enabled Rule objects matching the event.
"""
rules = []
# Find all hookify.*.local.md files
pattern = os.path.join('.claude', 'hookify.*.local.md')
files = glob.glob(pattern)
for file_path in files:
try:
rule = load_rule_file(file_path)
if not rule:
continue
# Filter by event if specified
if event:
if rule.event != 'all' and rule.event != event:
continue
# Only include enabled rules
if rule.enabled:
rules.append(rule)
except (IOError, OSError, PermissionError) as e:
# File I/O errors - log and continue
print(f"Warning: Failed to read {file_path}: {e}", file=sys.stderr)
continue
except (ValueError, KeyError, AttributeError, TypeError) as e:
# Parsing errors - log and continue
print(f"Warning: Failed to parse {file_path}: {e}", file=sys.stderr)
continue
except Exception as e:
# Unexpected errors - log with type details
print(f"Warning: Unexpected error loading {file_path} ({type(e).__name__}): {e}", file=sys.stderr)
continue
return rules
def load_rule_file(file_path: str) -> Optional[Rule]:
"""Load a single rule file.
Returns:
Rule object or None if file is invalid.
"""
try:
with open(file_path, 'r') as f:
content = f.read()
frontmatter, message = extract_frontmatter(content)
if not frontmatter:
print(f"Warning: {file_path} missing YAML frontmatter (must start with ---)", file=sys.stderr)
return None
rule = Rule.from_dict(frontmatter, message)
return rule
except (IOError, OSError, PermissionError) as e:
print(f"Error: Cannot read {file_path}: {e}", file=sys.stderr)
return None
except (ValueError, KeyError, AttributeError, TypeError) as e:
print(f"Error: Malformed rule file {file_path}: {e}", file=sys.stderr)
return None
except UnicodeDecodeError as e:
print(f"Error: Invalid encoding in {file_path}: {e}", file=sys.stderr)
return None
except Exception as e:
print(f"Error: Unexpected error parsing {file_path} ({type(e).__name__}): {e}", file=sys.stderr)
return None
# For testing
if __name__ == '__main__':
# Test frontmatter parsing
test_content = """---
name: test-rule
enabled: true
event: bash
pattern: "rm -rf"
---
⚠️ Dangerous command detected!
"""
fm, msg = extract_frontmatter(test_content)
print("Frontmatter:", fm)
print("Message:", msg)
rule = Rule.from_dict(fm, msg)
print("Rule:", rule)

View File

@@ -0,0 +1,313 @@
#!/usr/bin/env python3
"""Rule evaluation engine for hookify plugin."""
import re
import sys
from functools import lru_cache
from typing import List, Dict, Any, Optional
# Import from local module
from hookify.core.config_loader import Rule, Condition
# Cache compiled regexes (max 128 patterns)
@lru_cache(maxsize=128)
def compile_regex(pattern: str) -> re.Pattern:
"""Compile regex pattern with caching.
Args:
pattern: Regex pattern string
Returns:
Compiled regex pattern
"""
return re.compile(pattern, re.IGNORECASE)
class RuleEngine:
"""Evaluates rules against hook input data."""
def __init__(self):
"""Initialize rule engine."""
# No need for instance cache anymore - using global lru_cache
pass
def evaluate_rules(self, rules: List[Rule], input_data: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate all rules and return combined results.
Checks all rules and accumulates matches. Blocking rules take priority
over warning rules. All matching rule messages are combined.
Args:
rules: List of Rule objects to evaluate
input_data: Hook input JSON (tool_name, tool_input, etc.)
Returns:
Response dict with systemMessage, hookSpecificOutput, etc.
Empty dict {} if no rules match.
"""
hook_event = input_data.get('hook_event_name', '')
blocking_rules = []
warning_rules = []
for rule in rules:
if self._rule_matches(rule, input_data):
if rule.action == 'block':
blocking_rules.append(rule)
else:
warning_rules.append(rule)
# If any blocking rules matched, block the operation
if blocking_rules:
messages = [f"**[{r.name}]**\n{r.message}" for r in blocking_rules]
combined_message = "\n\n".join(messages)
# Use appropriate blocking format based on event type
if hook_event == 'Stop':
return {
"decision": "block",
"reason": combined_message,
"systemMessage": combined_message
}
            elif hook_event == 'PreToolUse':
                return {
                    "hookSpecificOutput": {
                        "hookEventName": hook_event,
                        "permissionDecision": "deny",
                        "permissionDecisionReason": combined_message
                    },
                    "systemMessage": combined_message
                }
            elif hook_event == 'PostToolUse':
                # Tool already ran; PostToolUse has no permission gate, so
                # report a blocking decision with the reason instead.
                return {
                    "decision": "block",
                    "reason": combined_message,
                    "systemMessage": combined_message
                }
else:
# For other events, just show message
return {
"systemMessage": combined_message
}
# If only warnings, show them but allow operation
if warning_rules:
messages = [f"**[{r.name}]**\n{r.message}" for r in warning_rules]
return {
"systemMessage": "\n\n".join(messages)
}
# No matches - allow operation
return {}
def _rule_matches(self, rule: Rule, input_data: Dict[str, Any]) -> bool:
"""Check if rule matches input data.
Args:
rule: Rule to evaluate
input_data: Hook input data
Returns:
True if rule matches, False otherwise
"""
# Extract tool information
tool_name = input_data.get('tool_name', '')
tool_input = input_data.get('tool_input', {})
# Check tool matcher if specified
if rule.tool_matcher:
if not self._matches_tool(rule.tool_matcher, tool_name):
return False
# If no conditions, don't match
# (Rules must have at least one condition to be valid)
if not rule.conditions:
return False
# All conditions must match
for condition in rule.conditions:
if not self._check_condition(condition, tool_name, tool_input, input_data):
return False
return True
def _matches_tool(self, matcher: str, tool_name: str) -> bool:
"""Check if tool_name matches the matcher pattern.
Args:
matcher: Pattern like "Bash", "Edit|Write", "*"
tool_name: Actual tool name
Returns:
True if matches
"""
if matcher == '*':
return True
# Split on | for OR matching
patterns = matcher.split('|')
return tool_name in patterns
    def _check_condition(self, condition: Condition, tool_name: str,
                         tool_input: Dict[str, Any],
                         input_data: Optional[Dict[str, Any]] = None) -> bool:
"""Check if a single condition matches.
Args:
condition: Condition to check
tool_name: Tool being used
tool_input: Tool input dict
input_data: Full hook input data (for Stop events, etc.)
Returns:
True if condition matches
"""
# Extract the field value to check
field_value = self._extract_field(condition.field, tool_name, tool_input, input_data)
if field_value is None:
return False
# Apply operator
operator = condition.operator
pattern = condition.pattern
if operator == 'regex_match':
return self._regex_match(pattern, field_value)
elif operator == 'contains':
return pattern in field_value
elif operator == 'equals':
return pattern == field_value
elif operator == 'not_contains':
return pattern not in field_value
elif operator == 'starts_with':
return field_value.startswith(pattern)
elif operator == 'ends_with':
return field_value.endswith(pattern)
else:
# Unknown operator
return False
    def _extract_field(self, field: str, tool_name: str,
                       tool_input: Dict[str, Any],
                       input_data: Optional[Dict[str, Any]] = None) -> Optional[str]:
"""Extract field value from tool input or hook input data.
Args:
field: Field name like "command", "new_text", "file_path", "reason", "transcript"
tool_name: Tool being used (may be empty for Stop events)
tool_input: Tool input dict
input_data: Full hook input (for accessing transcript_path, reason, etc.)
Returns:
Field value as string, or None if not found
"""
# Direct tool_input fields
if field in tool_input:
value = tool_input[field]
if isinstance(value, str):
return value
return str(value)
# For Stop events and other non-tool events, check input_data
if input_data:
# Stop event specific fields
if field == 'reason':
return input_data.get('reason', '')
elif field == 'transcript':
# Read transcript file if path provided
transcript_path = input_data.get('transcript_path')
if transcript_path:
try:
with open(transcript_path, 'r') as f:
return f.read()
except FileNotFoundError:
print(f"Warning: Transcript file not found: {transcript_path}", file=sys.stderr)
return ''
except PermissionError:
print(f"Warning: Permission denied reading transcript: {transcript_path}", file=sys.stderr)
return ''
except (IOError, OSError) as e:
print(f"Warning: Error reading transcript {transcript_path}: {e}", file=sys.stderr)
return ''
except UnicodeDecodeError as e:
print(f"Warning: Encoding error in transcript {transcript_path}: {e}", file=sys.stderr)
return ''
elif field == 'user_prompt':
# For UserPromptSubmit events
return input_data.get('user_prompt', '')
# Handle special cases by tool type
if tool_name == 'Bash':
if field == 'command':
return tool_input.get('command', '')
elif tool_name in ['Write', 'Edit']:
if field == 'content':
# Write uses 'content', Edit has 'new_string'
return tool_input.get('content') or tool_input.get('new_string', '')
elif field == 'new_text' or field == 'new_string':
return tool_input.get('new_string', '')
elif field == 'old_text' or field == 'old_string':
return tool_input.get('old_string', '')
elif field == 'file_path':
return tool_input.get('file_path', '')
elif tool_name == 'MultiEdit':
if field == 'file_path':
return tool_input.get('file_path', '')
elif field in ['new_text', 'content']:
# Concatenate all edits
edits = tool_input.get('edits', [])
return ' '.join(e.get('new_string', '') for e in edits)
return None
def _regex_match(self, pattern: str, text: str) -> bool:
"""Check if pattern matches text using regex.
Args:
pattern: Regex pattern
text: Text to match against
Returns:
True if pattern matches
"""
try:
# Use cached compiled regex (LRU cache with max 128 patterns)
regex = compile_regex(pattern)
return bool(regex.search(text))
except re.error as e:
print(f"Invalid regex pattern '{pattern}': {e}", file=sys.stderr)
return False
# For testing
if __name__ == '__main__':
from hookify.core.config_loader import Condition, Rule
# Test rule evaluation
rule = Rule(
name="test-rm",
enabled=True,
event="bash",
conditions=[
Condition(field="command", operator="regex_match", pattern=r"rm\s+-rf")
],
message="Dangerous rm command!"
)
engine = RuleEngine()
# Test matching input
test_input = {
"tool_name": "Bash",
"tool_input": {
"command": "rm -rf /tmp/test"
}
}
result = engine.evaluate_rules([rule], test_input)
print("Match result:", result)
# Test non-matching input
test_input2 = {
"tool_name": "Bash",
"tool_input": {
"command": "ls -la"
}
}
result2 = engine.evaluate_rules([rule], test_input2)
print("Non-match result:", result2)

View File

@@ -0,0 +1,14 @@
---
name: warn-console-log
enabled: true
event: file
pattern: console\.log\(
action: warn
---
🔍 **Console.log detected**
You're adding a console.log statement. Please consider:
- Is this for debugging or should it be proper logging?
- Will this ship to production?
- Should this use a logging library instead?

View File

@@ -0,0 +1,14 @@
---
name: block-dangerous-rm
enabled: true
event: bash
pattern: rm\s+-rf
action: block
---
⚠️ **Dangerous rm command detected!**
This command could delete important files. Please:
- Verify the path is correct
- Consider using a safer approach
- Make sure you have backups

View File

@@ -0,0 +1,22 @@
---
name: require-tests-run
enabled: false
event: stop
action: block
conditions:
- field: transcript
operator: not_contains
pattern: npm test|pytest|cargo test
---
**Tests not detected in transcript!**
Before stopping, please run tests to verify your changes work correctly.
Look for test commands like:
- `npm test`
- `pytest`
- `cargo test`
**Note:** This rule blocks stopping if no test commands appear in the transcript.
Enable this rule only when you want strict test enforcement.

View File

@@ -0,0 +1,18 @@
---
name: warn-sensitive-files
enabled: true
event: file
action: warn
conditions:
- field: file_path
operator: regex_match
pattern: \.env$|\.env\.|credentials|secrets
---
🔐 **Sensitive file detected**
You're editing a file that may contain sensitive data:
- Ensure credentials are not hardcoded
- Use environment variables for secrets
- Verify this file is in .gitignore
- Consider using a secrets manager

View File

View File

@@ -0,0 +1,49 @@
{
"description": "Hookify plugin - User-configurable hooks from .local.md files",
"hooks": {
"PreToolUse": [
{
"hooks": [
{
"type": "command",
"command": "python3 ${CLAUDE_PLUGIN_ROOT}/hooks/pretooluse.py",
"timeout": 10
}
]
}
],
"PostToolUse": [
{
"hooks": [
{
"type": "command",
"command": "python3 ${CLAUDE_PLUGIN_ROOT}/hooks/posttooluse.py",
"timeout": 10
}
]
}
],
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "python3 ${CLAUDE_PLUGIN_ROOT}/hooks/stop.py",
"timeout": 10
}
]
}
],
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "python3 ${CLAUDE_PLUGIN_ROOT}/hooks/userpromptsubmit.py",
"timeout": 10
}
]
}
]
}
}

View File

@@ -0,0 +1,66 @@
#!/usr/bin/env python3
"""PostToolUse hook executor for hookify plugin.
This script is called by Claude Code after a tool executes.
It reads .claude/hookify.*.local.md files and evaluates rules.
"""
import os
import sys
import json
# CRITICAL: Add plugin root to Python path for imports
PLUGIN_ROOT = os.environ.get('CLAUDE_PLUGIN_ROOT')
if PLUGIN_ROOT:
parent_dir = os.path.dirname(PLUGIN_ROOT)
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
if PLUGIN_ROOT not in sys.path:
sys.path.insert(0, PLUGIN_ROOT)
try:
from hookify.core.config_loader import load_rules
from hookify.core.rule_engine import RuleEngine
except ImportError as e:
error_msg = {"systemMessage": f"Hookify import error: {e}"}
print(json.dumps(error_msg), file=sys.stdout)
sys.exit(0)
def main():
"""Main entry point for PostToolUse hook."""
try:
# Read input from stdin
input_data = json.load(sys.stdin)
# Determine event type based on tool
tool_name = input_data.get('tool_name', '')
event = None
if tool_name == 'Bash':
event = 'bash'
elif tool_name in ['Edit', 'Write', 'MultiEdit']:
event = 'file'
# Load rules
rules = load_rules(event=event)
# Evaluate rules
engine = RuleEngine()
result = engine.evaluate_rules(rules, input_data)
# Always output JSON (even if empty)
print(json.dumps(result), file=sys.stdout)
except Exception as e:
error_output = {
"systemMessage": f"Hookify error: {str(e)}"
}
print(json.dumps(error_output), file=sys.stdout)
finally:
# ALWAYS exit 0
sys.exit(0)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,74 @@
#!/usr/bin/env python3
"""PreToolUse hook executor for hookify plugin.
This script is called by Claude Code before any tool executes.
It reads .claude/hookify.*.local.md files and evaluates rules.
"""
import os
import sys
import json
# CRITICAL: Add plugin root to Python path for imports
# We need to add the parent of the plugin directory so Python can find "hookify" package
PLUGIN_ROOT = os.environ.get('CLAUDE_PLUGIN_ROOT')
if PLUGIN_ROOT:
# Add the parent directory of the plugin
parent_dir = os.path.dirname(PLUGIN_ROOT)
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
# Also add PLUGIN_ROOT itself in case we have other scripts
if PLUGIN_ROOT not in sys.path:
sys.path.insert(0, PLUGIN_ROOT)
try:
from hookify.core.config_loader import load_rules
from hookify.core.rule_engine import RuleEngine
except ImportError as e:
# If imports fail, allow operation and log error
error_msg = {"systemMessage": f"Hookify import error: {e}"}
print(json.dumps(error_msg), file=sys.stdout)
sys.exit(0)
def main():
"""Main entry point for PreToolUse hook."""
try:
# Read input from stdin
input_data = json.load(sys.stdin)
# Determine event type for filtering
# For PreToolUse, we use tool_name to determine "bash" vs "file" event
tool_name = input_data.get('tool_name', '')
event = None
if tool_name == 'Bash':
event = 'bash'
elif tool_name in ['Edit', 'Write', 'MultiEdit']:
event = 'file'
# Load rules
rules = load_rules(event=event)
# Evaluate rules
engine = RuleEngine()
result = engine.evaluate_rules(rules, input_data)
# Always output JSON (even if empty)
print(json.dumps(result), file=sys.stdout)
except Exception as e:
# On any error, allow the operation and log
error_output = {
"systemMessage": f"Hookify error: {str(e)}"
}
print(json.dumps(error_output), file=sys.stdout)
finally:
# ALWAYS exit 0 - never block operations due to hook errors
sys.exit(0)
if __name__ == '__main__':
main()

59
plugins/hookify/hooks/stop.py Executable file
View File

@@ -0,0 +1,59 @@
#!/usr/bin/env python3
"""Stop hook executor for hookify plugin.
This script is called by Claude Code when agent wants to stop.
It reads .claude/hookify.*.local.md files and evaluates stop rules.
"""
import os
import sys
import json
# CRITICAL: Add plugin root to Python path for imports
PLUGIN_ROOT = os.environ.get('CLAUDE_PLUGIN_ROOT')
if PLUGIN_ROOT:
parent_dir = os.path.dirname(PLUGIN_ROOT)
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
if PLUGIN_ROOT not in sys.path:
sys.path.insert(0, PLUGIN_ROOT)
try:
from hookify.core.config_loader import load_rules
from hookify.core.rule_engine import RuleEngine
except ImportError as e:
error_msg = {"systemMessage": f"Hookify import error: {e}"}
print(json.dumps(error_msg), file=sys.stdout)
sys.exit(0)
def main():
"""Main entry point for Stop hook."""
try:
# Read input from stdin
input_data = json.load(sys.stdin)
# Load stop rules
rules = load_rules(event='stop')
# Evaluate rules
engine = RuleEngine()
result = engine.evaluate_rules(rules, input_data)
# Always output JSON (even if empty)
print(json.dumps(result), file=sys.stdout)
except Exception as e:
# On any error, allow the operation
error_output = {
"systemMessage": f"Hookify error: {str(e)}"
}
print(json.dumps(error_output), file=sys.stdout)
finally:
# ALWAYS exit 0
sys.exit(0)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""UserPromptSubmit hook executor for hookify plugin.
This script is called by Claude Code when user submits a prompt.
It reads .claude/hookify.*.local.md files and evaluates rules.
"""
import os
import sys
import json
# CRITICAL: Add plugin root to Python path for imports
PLUGIN_ROOT = os.environ.get('CLAUDE_PLUGIN_ROOT')
if PLUGIN_ROOT:
parent_dir = os.path.dirname(PLUGIN_ROOT)
if parent_dir not in sys.path:
sys.path.insert(0, parent_dir)
if PLUGIN_ROOT not in sys.path:
sys.path.insert(0, PLUGIN_ROOT)
try:
from hookify.core.config_loader import load_rules
from hookify.core.rule_engine import RuleEngine
except ImportError as e:
error_msg = {"systemMessage": f"Hookify import error: {e}"}
print(json.dumps(error_msg), file=sys.stdout)
sys.exit(0)
def main():
"""Main entry point for UserPromptSubmit hook."""
try:
# Read input from stdin
input_data = json.load(sys.stdin)
# Load user prompt rules
rules = load_rules(event='prompt')
# Evaluate rules
engine = RuleEngine()
result = engine.evaluate_rules(rules, input_data)
# Always output JSON (even if empty)
print(json.dumps(result), file=sys.stdout)
except Exception as e:
error_output = {
"systemMessage": f"Hookify error: {str(e)}"
}
print(json.dumps(error_output), file=sys.stdout)
finally:
# ALWAYS exit 0
sys.exit(0)
if __name__ == '__main__':
main()

View File

View File

@@ -0,0 +1,374 @@
---
name: Writing Hookify Rules
description: This skill should be used when the user asks to "create a hookify rule", "write a hook rule", "configure hookify", "add a hookify rule", or needs guidance on hookify rule syntax and patterns.
version: 0.1.0
---
# Writing Hookify Rules
## Overview
Hookify rules are markdown files with YAML frontmatter that define patterns to watch for and messages to show when those patterns match. Rules are stored in `.claude/hookify.{rule-name}.local.md` files.
## Rule File Format
### Basic Structure
```markdown
---
name: rule-identifier
enabled: true
event: bash|file|stop|prompt|all
pattern: regex-pattern-here
---
Message to show Claude when this rule triggers.
Can include markdown formatting, warnings, suggestions, etc.
```
### Frontmatter Fields
**name** (required): Unique identifier for the rule
- Use kebab-case: `warn-dangerous-rm`, `block-console-log`
- Be descriptive and action-oriented
- Start with verb: warn, prevent, block, require, check
**enabled** (required): Boolean to activate/deactivate
- `true`: Rule is active
- `false`: Rule is disabled (won't trigger)
- Can toggle without deleting rule
**event** (required): Which hook event to trigger on
- `bash`: Bash tool commands
- `file`: Edit, Write, MultiEdit tools
- `stop`: When agent wants to stop
- `prompt`: When user submits a prompt
- `all`: All events
**action** (optional): What to do when rule matches
- `warn`: Show message but allow operation (default)
- `block`: Prevent operation (PreToolUse) or stop session (Stop events)
- If omitted, defaults to `warn`
**pattern** (simple format): Regex pattern to match
- Used for simple single-condition rules
- Matches against command (bash) or new_text (file)
- Python regex syntax
**Example:**
```yaml
event: bash
pattern: rm\s+-rf
```
### Advanced Format (Multiple Conditions)
For complex rules with multiple conditions:
```markdown
---
name: warn-env-file-edits
enabled: true
event: file
conditions:
- field: file_path
operator: regex_match
pattern: \.env$
- field: new_text
operator: contains
pattern: API_KEY
---
You're adding an API key to a .env file. Ensure this file is in .gitignore!
```
**Condition fields:**
- `field`: Which field to check
- For bash: `command`
- For file: `file_path`, `new_text`, `old_text`, `content`
- `operator`: How to match
- `regex_match`: Regex pattern matching
- `contains`: Substring check
- `equals`: Exact match
- `not_contains`: Substring must NOT be present
- `starts_with`: Prefix check
- `ends_with`: Suffix check
- `pattern`: Pattern or string to match
**All conditions must match for rule to trigger.**
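Conceptually, condition evaluation works like this sketch (simplified from the plugin's rule engine):
```python
import re

def condition_matches(operator: str, pattern: str, value: str) -> bool:
    # A subset of the operators listed above.
    if operator == 'regex_match':
        return re.search(pattern, value, re.IGNORECASE) is not None
    if operator == 'contains':
        return pattern in value
    if operator == 'not_contains':
        return pattern not in value
    if operator == 'equals':
        return pattern == value
    return False

def rule_matches(conditions: list, fields: dict) -> bool:
    # Every condition must match its field's value (logical AND).
    return all(
        condition_matches(c['operator'], c['pattern'], fields.get(c['field'], ''))
        for c in conditions
    )
```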
## Message Body
The markdown content after frontmatter is shown to Claude when the rule triggers.
**Good messages:**
- Explain what was detected
- Explain why it's problematic
- Suggest alternatives or best practices
- Use formatting for clarity (bold, lists, etc.)
**Example:**
```markdown
⚠️ **Console.log detected!**
You're adding console.log to production code.
**Why this matters:**
- Debug logs shouldn't ship to production
- Console.log can expose sensitive data
- Impacts browser performance
**Alternatives:**
- Use a proper logging library
- Remove before committing
- Use conditional debug builds
```
## Event Type Guide
### bash Events
Match Bash command patterns:
```markdown
---
event: bash
pattern: sudo\s+|rm\s+-rf|chmod\s+777
---
Dangerous command detected!
```
**Common patterns:**
- Dangerous commands: `rm\s+-rf`, `dd\s+if=`, `mkfs`
- Privilege escalation: `sudo\s+`, `su\s+`
- Permission issues: `chmod\s+777`, `chown\s+root`
### file Events
Match Edit/Write/MultiEdit operations:
```markdown
---
event: file
pattern: console\.log\(|eval\(|innerHTML\s*=
---
Potentially problematic code pattern detected!
```
**Match on different fields:**
```markdown
---
event: file
conditions:
- field: file_path
operator: regex_match
pattern: \.tsx?$
- field: new_text
operator: regex_match
pattern: console\.log\(
---
Console.log in TypeScript file!
```
**Common patterns:**
- Debug code: `console\.log\(`, `debugger`, `print\(`
- Security risks: `eval\(`, `innerHTML\s*=`, `dangerouslySetInnerHTML`
- Sensitive files: `\.env$`, `credentials`, `\.pem$`
- Generated files: `node_modules/`, `dist/`, `build/`
### stop Events
Match when agent wants to stop (completion checks):
```markdown
---
event: stop
pattern: .*
---
Before stopping, verify:
- [ ] Tests were run
- [ ] Build succeeded
- [ ] Documentation updated
```
**Use for:**
- Reminders about required steps
- Completion checklists
- Process enforcement
### prompt Events
Match user prompt content (advanced):
```markdown
---
event: prompt
conditions:
- field: user_prompt
operator: contains
pattern: deploy to production
---
Production deployment checklist:
- [ ] Tests passing?
- [ ] Reviewed by team?
- [ ] Monitoring ready?
```
## Pattern Writing Tips
### Regex Basics
**Literal characters:** Most characters match themselves
- `rm` matches "rm"
- `console.log` matches "console.log"
**Special characters need escaping:**
- `.` (any char) → `\.` (literal dot)
- `(` `)``\(` `\)` (literal parens)
- `[` `]``\[` `\]` (literal brackets)
**Common metacharacters:**
- `\s` - whitespace (space, tab, newline)
- `\d` - digit (0-9)
- `\w` - word character (a-z, A-Z, 0-9, _)
- `.` - any character
- `+` - one or more
- `*` - zero or more
- `?` - zero or one
- `|` - OR
**Examples:**
```
rm\s+-rf          Matches: rm -rf, rm  -rf (any whitespace between)
console\.log\(    Matches: console.log(
(eval|exec)\(     Matches: eval( or exec(
chmod\s+777       Matches: chmod 777, chmod  777
API_KEY\s*=       Matches: API_KEY=, API_KEY =
```
### Testing Patterns
Test regex patterns before using:
```bash
python3 -c "import re; print(re.search(r'your_pattern', 'test text'))"
```
Or use online regex testers (regex101.com with Python flavor).
### Common Pitfalls
**Too broad:**
```yaml
pattern: log # Matches "log", "login", "dialog", "catalog"
```
Better: `console\.log\(|logger\.`
**Too specific:**
```yaml
pattern: rm -rf /tmp # Only matches exact path
```
Better: `rm\s+-rf`
**Escaping issues:**
- YAML quoted strings: `"pattern"` requires double backslashes `\\s`
- YAML unquoted: `pattern: \s` works as-is
- **Recommendation**: Use unquoted patterns in YAML
## File Organization
**Location:** All rules in `.claude/` directory
**Naming:** `.claude/hookify.{descriptive-name}.local.md`
**Gitignore:** Add `.claude/*.local.md` to `.gitignore`
**Good names:**
- `hookify.dangerous-rm.local.md`
- `hookify.console-log.local.md`
- `hookify.require-tests.local.md`
- `hookify.sensitive-files.local.md`
**Bad names:**
- `hookify.rule1.local.md` (not descriptive)
- `hookify.md` (missing .local)
- `danger.local.md` (missing hookify prefix)
## Workflow
### Creating a Rule
1. Identify unwanted behavior
2. Determine which tool is involved (Bash, Edit, etc.)
3. Choose event type (bash, file, stop, etc.)
4. Write regex pattern
5. Create `.claude/hookify.{name}.local.md` file in project root
6. Test immediately - rules are read dynamically on next tool use
### Refining a Rule
1. Edit the `.local.md` file
2. Adjust pattern or message
3. Test immediately - changes take effect on next tool use
### Disabling a Rule
**Temporary:** Set `enabled: false` in frontmatter
**Permanent:** Delete the `.local.md` file
## Examples
See `${CLAUDE_PLUGIN_ROOT}/examples/` for complete examples:
- `dangerous-rm.local.md` - Block dangerous rm commands
- `console-log-warning.local.md` - Warn about console.log
- `sensitive-files-warning.local.md` - Warn about editing .env files
## Quick Reference
**Minimum viable rule:**
```markdown
---
name: my-rule
enabled: true
event: bash
pattern: dangerous_command
---
Warning message here
```
**Rule with conditions:**
```markdown
---
name: my-rule
enabled: true
event: file
conditions:
- field: file_path
operator: regex_match
pattern: \.ts$
- field: new_text
operator: contains
pattern: any
---
Warning message
```
**Event types:**
- `bash` - Bash commands
- `file` - File edits
- `stop` - Completion checks
- `prompt` - User input
- `all` - All events
**Field options:**
- Bash: `command`
- File: `file_path`, `new_text`, `old_text`, `content`
- Prompt: `user_prompt`
**Operators:**
- `regex_match`, `contains`, `equals`, `not_contains`, `starts_with`, `ends_with`

View File

View File

@@ -0,0 +1,9 @@
{
"name": "learning-output-style",
"version": "1.0.0",
"description": "Interactive learning mode that requests meaningful code contributions at decision points (mimics the unshipped Learning output style)",
"author": {
"name": "Boris Cherny",
"email": "boris@anthropic.com"
}
}

View File

@@ -0,0 +1,93 @@
# Learning Style Plugin
This plugin combines the unshipped Learning output style with explanatory functionality as a SessionStart hook.
**Note:** This plugin differs from the original unshipped Learning output style by also incorporating all functionality from the [explanatory-output-style plugin](https://github.com/anthropics/claude-code/tree/main/plugins/explanatory-output-style), providing both interactive learning and educational insights.
WARNING: Do not install this plugin unless you are fine with incurring the token cost of this plugin's additional instructions and the interactive nature of learning mode.
## What it does
When enabled, this plugin automatically adds instructions at the start of each session that encourage Claude to:
1. **Learning Mode:** Engage you in active learning by requesting meaningful code contributions at decision points
2. **Explanatory Mode:** Provide educational insights about implementation choices and codebase patterns
Instead of implementing everything automatically, Claude will:
1. Identify opportunities where you can write 5-10 lines of meaningful code
2. Focus on business logic and design choices where your input truly matters
3. Prepare the context and location for your contribution
4. Explain trade-offs and guide your implementation
5. Provide educational insights before and after writing code
## How it works
The plugin uses a SessionStart hook to inject additional context into every session. This context instructs Claude to adopt an interactive teaching approach where you actively participate in writing key parts of the code.
## When Claude requests contributions
Claude will ask you to write code for:
- Business logic with multiple valid approaches
- Error handling strategies
- Algorithm implementation choices
- Data structure decisions
- User experience decisions
- Design patterns and architecture choices
## When Claude won't request contributions
Claude will implement directly:
- Boilerplate or repetitive code
- Obvious implementations with no meaningful choices
- Configuration or setup code
- Simple CRUD operations
## Example interaction
**Claude:** I've set up the authentication middleware. The session timeout behavior is a security vs. UX trade-off - should sessions auto-extend on activity, or have a hard timeout?
In `auth/middleware.ts`, implement the `handleSessionTimeout()` function to define the timeout behavior.
Consider: auto-extending improves UX but may leave sessions open longer; hard timeouts are more secure but might frustrate active users.
**You:** [Write 5-10 lines implementing your preferred approach]
## Educational insights
In addition to interactive learning, Claude will provide educational insights about implementation choices using this format:
```
`★ Insight ─────────────────────────────────────`
[2-3 key educational points about the codebase or implementation]
`─────────────────────────────────────────────────`
```
These insights focus on:
- Specific implementation choices for your codebase
- Patterns and conventions in your code
- Trade-offs and design decisions
- Codebase-specific details rather than general programming concepts
## Usage
Once installed, the plugin activates automatically at the start of every session. No additional configuration is needed.
## Migration from Output Styles
This plugin combines the unshipped "Learning" output style with the deprecated "Explanatory" output style. It provides an interactive learning experience where you actively contribute code at meaningful decision points, while also receiving educational insights about implementation choices.
If you previously used the explanatory-output-style plugin, this learning plugin includes all of that functionality plus interactive learning features.
This SessionStart hook pattern is roughly equivalent to CLAUDE.md, but it is more flexible and allows for distribution through plugins.
## Managing changes
- Disable the plugin - keep the code installed on your device
- Uninstall the plugin - remove the code from your device
- Update the plugin - create a local copy of this plugin to personalize it
- Hint: Ask Claude to read https://docs.claude.com/en/docs/claude-code/plugins.md and set it up for you!
## Philosophy
Learning by doing is more effective than passive observation. This plugin transforms your interaction with Claude from "watch and learn" to "build and understand," ensuring you develop practical skills through hands-on coding of meaningful logic.

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
# Output the learning mode instructions as additionalContext
# This combines the unshipped Learning output style with explanatory functionality
cat << 'EOF'
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "You are in 'learning' output style mode, which combines interactive learning with educational explanations. This mode differs from the original unshipped Learning output style by also incorporating explanatory functionality.\n\n## Learning Mode Philosophy\n\nInstead of implementing everything yourself, identify opportunities where the user can write 5-10 lines of meaningful code that shapes the solution. Focus on business logic, design choices, and implementation strategies where their input truly matters.\n\n## When to Request User Contributions\n\nRequest code contributions for:\n- Business logic with multiple valid approaches\n- Error handling strategies\n- Algorithm implementation choices\n- Data structure decisions\n- User experience decisions\n- Design patterns and architecture choices\n\n## How to Request Contributions\n\nBefore requesting code:\n1. Create the file with surrounding context\n2. Add function signature with clear parameters/return type\n3. Include comments explaining the purpose\n4. Mark the location with TODO or clear placeholder\n\nWhen requesting:\n- Explain what you've built and WHY this decision matters\n- Reference the exact file and prepared location\n- Describe trade-offs to consider, constraints, or approaches\n- Frame it as valuable input that shapes the feature, not busy work\n- Keep requests focused (5-10 lines of code)\n\n## Example Request Pattern\n\nContext: I've set up the authentication middleware. The session timeout behavior is a security vs. UX trade-off - should sessions auto-extend on activity, or have a hard timeout? This affects both security posture and user experience.\n\nRequest: In auth/middleware.ts, implement the handleSessionTimeout() function to define the timeout behavior.\n\nGuidance: Consider: auto-extending improves UX but may leave sessions open longer; hard timeouts are more secure but might frustrate active users.\n\n## Balance\n\nDon't request contributions for:\n- Boilerplate or repetitive code\n- Obvious implementations with no meaningful choices\n- Configuration or setup code\n- Simple CRUD operations\n\nDo request contributions when:\n- There are meaningful trade-offs to consider\n- The decision shapes the feature's behavior\n- Multiple valid approaches exist\n- The user's domain knowledge would improve the solution\n\n## Explanatory Mode\n\nAdditionally, provide educational insights about the codebase as you help with tasks. Be clear and educational, providing helpful explanations while remaining focused on the task. Balance educational content with task completion.\n\n### Insights\nBefore and after writing code, provide brief educational explanations about implementation choices using:\n\n\"`★ Insight ─────────────────────────────────────`\n[2-3 key educational points]\n`─────────────────────────────────────────────────`\"\n\nThese insights should be included in the conversation, not in the codebase. Focus on interesting insights specific to the codebase or the code you just wrote, rather than general programming concepts. Provide insights as you write code, not just at the end."
}
}
EOF
exit 0

View File

@@ -0,0 +1,15 @@
{
"description": "Learning mode hook that adds interactive learning instructions",
"hooks": {
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks-handlers/session-start.sh"
}
]
}
]
}
}

View File

@@ -0,0 +1,402 @@
# Plugin Development Toolkit
A comprehensive toolkit for developing Claude Code plugins with expert guidance on hooks, MCP integration, plugin structure, and marketplace publishing.
## Overview
The plugin-dev toolkit provides seven specialized skills to help you build high-quality Claude Code plugins:
1. **Hook Development** - Advanced hooks API and event-driven automation
2. **MCP Integration** - Model Context Protocol server integration
3. **Plugin Structure** - Plugin organization and manifest configuration
4. **Plugin Settings** - Configuration patterns using .claude/plugin-name.local.md files
5. **Command Development** - Creating slash commands with frontmatter and arguments
6. **Agent Development** - Creating autonomous agents with AI-assisted generation
7. **Skill Development** - Creating skills with progressive disclosure and strong triggers
Each skill follows best practices with progressive disclosure: lean core documentation, detailed references, working examples, and utility scripts.
## Guided Workflow Command
### /plugin-dev:create-plugin
A comprehensive, end-to-end workflow command for creating plugins from scratch, similar to the feature-dev workflow.
**8-Phase Process:**
1. **Discovery** - Understand plugin purpose and requirements
2. **Component Planning** - Determine needed skills, commands, agents, hooks, MCP
3. **Detailed Design** - Specify each component and resolve ambiguities
4. **Structure Creation** - Set up directories and manifest
5. **Component Implementation** - Create each component using AI-assisted agents
6. **Validation** - Run plugin-validator and component-specific checks
7. **Testing** - Verify plugin works in Claude Code
8. **Documentation** - Finalize README and prepare for distribution
**Features:**
- Asks clarifying questions at each phase
- Loads relevant skills automatically
- Uses agent-creator for AI-assisted agent generation
- Runs validation utilities (validate-agent.sh, validate-hook-schema.sh, etc.)
- Follows plugin-dev's own proven patterns
- Guides through testing and verification
**Usage:**
```bash
/plugin-dev:create-plugin [optional description]
# Examples:
/plugin-dev:create-plugin
/plugin-dev:create-plugin A plugin for managing database migrations
```
Use this workflow for structured, high-quality plugin development from concept to completion.
## Skills
### 1. Hook Development
**Trigger phrases:** "create a hook", "add a PreToolUse hook", "validate tool use", "implement prompt-based hooks", "${CLAUDE_PLUGIN_ROOT}", "block dangerous commands"
**What it covers:**
- Prompt-based hooks (recommended) with LLM decision-making
- Command hooks for deterministic validation
- All hook events: PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification
- Hook output formats and JSON schemas
- Security best practices and input validation
- ${CLAUDE_PLUGIN_ROOT} for portable paths
**Resources:**
- Core SKILL.md (1,619 words)
- 3 example hook scripts (validate-write, validate-bash, load-context)
- 3 reference docs: patterns, migration, advanced techniques
- 3 utility scripts: validate-hook-schema.sh, test-hook.sh, hook-linter.sh
**Use when:** Creating event-driven automation, validating operations, or enforcing policies in your plugin.
### 2. MCP Integration
**Trigger phrases:** "add MCP server", "integrate MCP", "configure .mcp.json", "Model Context Protocol", "stdio/SSE/HTTP server", "connect external service"
**What it covers:**
- MCP server configuration (.mcp.json vs plugin.json)
- All server types: stdio (local), SSE (hosted/OAuth), HTTP (REST), WebSocket (real-time)
- Environment variable expansion (${CLAUDE_PLUGIN_ROOT}, user vars)
- MCP tool naming and usage in commands/agents
- Authentication patterns: OAuth, tokens, env vars
- Integration patterns and performance optimization
**Resources:**
- Core SKILL.md (1,666 words)
- 3 example configurations (stdio, SSE, HTTP)
- 3 reference docs: server-types (~3,200w), authentication (~2,800w), tool-usage (~2,600w)
**Use when:** Integrating external services, APIs, databases, or tools into your plugin.
### 3. Plugin Structure
**Trigger phrases:** "plugin structure", "plugin.json manifest", "auto-discovery", "component organization", "plugin directory layout"
**What it covers:**
- Standard plugin directory structure and auto-discovery
- plugin.json manifest format and all fields
- Component organization (commands, agents, skills, hooks)
- ${CLAUDE_PLUGIN_ROOT} usage throughout
- File naming conventions and best practices
- Minimal, standard, and advanced plugin patterns
**Resources:**
- Core SKILL.md (1,619 words)
- 3 example structures (minimal, standard, advanced)
- 2 reference docs: component-patterns, manifest-reference
**Use when:** Starting a new plugin, organizing components, or configuring the plugin manifest.
### 4. Plugin Settings
**Trigger phrases:** "plugin settings", "store plugin configuration", ".local.md files", "plugin state files", "read YAML frontmatter", "per-project plugin settings"
**What it covers:**
- .claude/plugin-name.local.md pattern for configuration
- YAML frontmatter + markdown body structure
- Parsing techniques for bash scripts (sed, awk, grep patterns)
- Temporarily active hooks (flag files and quick-exit)
- Real-world examples from multi-agent-swarm and ralph-wiggum plugins
- Atomic file updates and validation
- Gitignore and lifecycle management
**Resources:**
- Core SKILL.md (1,623 words)
- 3 examples (read-settings hook, create-settings command, templates)
- 2 reference docs: parsing-techniques, real-world-examples
- 2 utility scripts: validate-settings.sh, parse-frontmatter.sh
**Use when:** Making plugins configurable, storing per-project state, or implementing user preferences.
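To give a flavor of the parsing techniques, here is a bash sketch that reads one frontmatter field from a settings file (it assumes flat `key: value` frontmatter; the plugin name is hypothetical):
```bash
# Sketch: read a single YAML frontmatter field from a .local.md file.
settings=".claude/my-plugin.local.md"

get_setting() {
  # Print the value of key $1 from the frontmatter block, if present.
  awk -v key="$1" '
    /^---$/ { in_fm = !in_fm; next }
    in_fm && $1 == key":" { sub(/^[^:]*:[[:space:]]*/, ""); print; exit }
  ' "$settings"
}

env_name=$(get_setting environment)
echo "environment=${env_name:-dev}"   # fall back to a default
```
The bundled parse-frontmatter.sh utility handles the general case.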
### 5. Command Development
**Trigger phrases:** "create a slash command", "add a command", "command frontmatter", "define command arguments", "organize commands"
**What it covers:**
- Slash command structure and markdown format
- YAML frontmatter fields (description, argument-hint, allowed-tools)
- Dynamic arguments and file references
- Bash execution for context
- Command organization and namespacing
- Best practices for command development
**Resources:**
- Core SKILL.md (1,535 words)
- Examples and reference documentation
- Command organization patterns
**Use when:** Creating slash commands, defining command arguments, or organizing plugin commands.
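For example, scaffolding a minimal command file might look like this (a sketch; the command name and behavior are hypothetical):
```bash
# Sketch: create a minimal slash command with YAML frontmatter.
mkdir -p commands
cat > commands/lint.md <<'EOF'
---
description: Run the project linter and summarize findings
argument-hint: [optional file or directory]
allowed-tools: ["Bash", "Read"]
---
Run the project's linter on $ARGUMENTS (or the whole project when no
argument is given), then summarize errors and suggest fixes.
EOF
```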
### 6. Agent Development
**Trigger phrases:** "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "autonomous agent"
**What it covers:**
- Agent file structure (YAML frontmatter + system prompt)
- All frontmatter fields (name, description, model, color, tools)
- Description format with <example> blocks for reliable triggering
- System prompt design patterns (analysis, generation, validation, orchestration)
- AI-assisted agent generation using Claude Code's proven prompt
- Validation rules and best practices
- Complete production-ready agent examples
**Resources:**
- Core SKILL.md (1,438 words)
- 2 examples: agent-creation-prompt (AI-assisted workflow), complete-agent-examples (4 full agents)
- 3 reference docs: agent-creation-system-prompt (from Claude Code), system-prompt-design (~4,000w), triggering-examples (~2,500w)
- 1 utility script: validate-agent.sh
**Use when:** Creating autonomous agents, defining agent behavior, or implementing AI-assisted agent generation.
### 7. Skill Development
**Trigger phrases:** "create a skill", "add a skill to plugin", "write a new skill", "improve skill description", "organize skill content"
**What it covers:**
- Skill structure (SKILL.md with YAML frontmatter)
- Progressive disclosure principle (metadata → SKILL.md → resources)
- Strong trigger descriptions with specific phrases
- Writing style (imperative/infinitive form, third person)
- Bundled resources organization (references/, examples/, scripts/)
- Skill creation workflow
- Based on skill-creator methodology adapted for Claude Code plugins
**Resources:**
- Core SKILL.md (1,232 words)
- References: skill-creator methodology, plugin-dev patterns
- Examples: Study plugin-dev's own skills as templates
**Use when:** Creating new skills for plugins or improving existing skill quality.
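A minimal sketch of the structure this skill teaches (the skill name and trigger phrases are hypothetical):
```bash
# Sketch: scaffold a new skill with a third-person, trigger-rich description.
mkdir -p skills/pdf-processing/{references,examples}
cat > skills/pdf-processing/SKILL.md <<'EOF'
---
name: PDF Processing
description: This skill should be used when the user asks to "extract text from a PDF", "merge PDFs", "split a PDF", or needs guidance on PDF manipulation.
version: 0.1.0
---
# PDF Processing
To extract text from a PDF, ...
EOF
```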
## Installation
Install from claude-code-marketplace:
```bash
/plugin install plugin-dev@claude-code-marketplace
```
Or for development, use directly:
```bash
claude --plugin-dir /path/to/plugin-dev
```
## Quick Start
### Creating Your First Plugin
1. **Plan your plugin structure:**
- Ask: "What's the best directory structure for a plugin with commands and MCP integration?"
- The plugin-structure skill will guide you
2. **Add MCP integration (if needed):**
- Ask: "How do I add an MCP server for database access?"
- The mcp-integration skill provides examples and patterns
3. **Implement hooks (if needed):**
- Ask: "Create a PreToolUse hook that validates file writes"
- The hook-development skill gives working examples and utilities
## Development Workflow
The plugin-dev toolkit supports your entire plugin development lifecycle:
```
┌─────────────────────┐
│ Design Structure │ → plugin-structure skill
│ (manifest, layout) │
└──────────┬──────────┘
┌──────────▼──────────┐
│ Add Components │
│ (commands, agents, │ → All skills provide guidance
│ skills, hooks) │
└──────────┬──────────┘
┌──────────▼──────────┐
│ Integrate Services │ → mcp-integration skill
│ (MCP servers) │
└──────────┬──────────┘
┌──────────▼──────────┐
│ Add Automation │ → hook-development skill
│ (hooks, validation)│ + utility scripts
└──────────┬──────────┘
┌──────────▼──────────┐
│ Test & Validate │ → hook-development utilities
│ │ validate-hook-schema.sh
└─────────────────────┘   test-hook.sh
                          hook-linter.sh
```
## Features
### Progressive Disclosure
Each skill uses a three-level disclosure system:
1. **Metadata** (always loaded): Concise descriptions with strong triggers
2. **Core SKILL.md** (when triggered): Essential API reference (~1,500-2,000 words)
3. **References/Examples** (as needed): Detailed guides, patterns, and working code
This keeps Claude Code's context focused while providing deep knowledge when needed.
### Utility Scripts
The hook-development skill includes production-ready utilities:
```bash
# Validate hooks.json structure
./validate-hook-schema.sh hooks/hooks.json
# Test hooks before deployment
./test-hook.sh my-hook.sh test-input.json
# Lint hook scripts for best practices
./hook-linter.sh my-hook.sh
```
### Working Examples
Every skill provides working examples:
- **Hook Development**: 3 complete hook scripts (bash, write validation, context loading)
- **MCP Integration**: 3 server configurations (stdio, SSE, HTTP)
- **Plugin Structure**: 3 plugin layouts (minimal, standard, advanced)
- **Plugin Settings**: 3 examples (read-settings hook, create-settings command, templates)
- **Command Development**: 10 complete command examples (review, test, deploy, docs, etc.)
## Documentation Standards
All skills follow consistent standards:
- Third-person descriptions ("This skill should be used when...")
- Strong trigger phrases for reliable loading
- Imperative/infinitive form throughout
- Based on official Claude Code documentation
- Security-first approach with best practices
## Total Content
- **Core Skills**: ~10,700 words across 7 SKILL.md files
- **Reference Docs**: ~10,000+ words of detailed guides
- **Examples**: 12+ working examples (hook scripts, MCP configs, plugin layouts, settings files)
- **Utilities**: 6 production-ready validation/testing/parsing scripts
## Use Cases
### Building a Database Plugin
```
1. "What's the structure for a plugin with MCP integration?"
→ plugin-structure skill provides layout
2. "How do I configure an stdio MCP server for PostgreSQL?"
→ mcp-integration skill shows configuration
3. "Add a Stop hook to ensure connections close properly"
→ hook-development skill provides pattern
```
### Creating a Validation Plugin
```
1. "Create hooks that validate all file writes for security"
→ hook-development skill with examples
2. "Test my hooks before deploying"
→ Use validate-hook-schema.sh and test-hook.sh
3. "Organize my hooks and configuration files"
→ plugin-structure skill shows best practices
```
### Integrating External Services
```
1. "Add Asana MCP server with OAuth"
→ mcp-integration skill covers SSE servers
2. "Use Asana tools in my commands"
→ mcp-integration tool-usage reference
3. "Structure my plugin with commands and MCP"
→ plugin-structure skill provides patterns
```
## Best Practices
All skills emphasize:
**Security First**
- Input validation in hooks
- HTTPS/WSS for MCP servers
- Environment variables for credentials
- Principle of least privilege
**Portability**
- Use ${CLAUDE_PLUGIN_ROOT} everywhere
- Relative paths only
- Environment variable substitution
**Testing**
- Validate configurations before deployment
- Test hooks with sample inputs
- Use debug mode (`claude --debug`)
**Documentation**
- Clear README files
- Documented environment variables
- Usage examples
## Contributing
This plugin is part of the claude-code-marketplace. To contribute improvements:
1. Fork the marketplace repository
2. Make changes to plugin-dev/
3. Test locally with `claude --plugin-dir`
4. Create PR following marketplace-publishing guidelines
## Version
0.1.0 - Initial release with seven comprehensive skills and three specialized agents
## Author
Daisy Hollman (daisy@anthropic.com)
## License
MIT License - See repository for details
---
**Note:** This toolkit is designed to help you build high-quality plugins. The skills load automatically when you ask relevant questions, providing expert guidance exactly when you need it.


@@ -0,0 +1,176 @@
---
name: agent-creator
description: Use this agent when the user asks to "create an agent", "generate an agent", "build a new agent", "make me an agent that...", or describes agent functionality they need. Trigger when user wants to create autonomous agents for plugins. Examples:
<example>
Context: User wants to create a code review agent
user: "Create an agent that reviews code for quality issues"
assistant: "I'll use the agent-creator agent to generate the agent configuration."
<commentary>
User requesting new agent creation, trigger agent-creator to generate it.
</commentary>
</example>
<example>
Context: User describes needed functionality
user: "I need an agent that generates unit tests for my code"
assistant: "I'll use the agent-creator agent to create a test generation agent."
<commentary>
User describes agent need, trigger agent-creator to build it.
</commentary>
</example>
<example>
Context: User wants to add agent to plugin
user: "Add an agent to my plugin that validates configurations"
assistant: "I'll use the agent-creator agent to generate a configuration validator agent."
<commentary>
Plugin development with agent addition, trigger agent-creator.
</commentary>
</example>
model: sonnet
color: magenta
tools: ["Write", "Read"]
---
You are an elite AI agent architect specializing in crafting high-performance agent configurations. Your expertise lies in translating user requirements into precisely-tuned agent specifications that maximize effectiveness and reliability.
**Important Context**: You may have access to project-specific instructions from CLAUDE.md files and other context that may include coding standards, project structure, and custom requirements. Consider this context when creating agents to ensure they align with the project's established patterns and practices.
When a user describes what they want an agent to do, you will:
1. **Extract Core Intent**: Identify the fundamental purpose, key responsibilities, and success criteria for the agent. Look for both explicit requirements and implicit needs. Consider any project-specific context from CLAUDE.md files. For agents that are meant to review code, you should assume that the user is asking to review recently written code and not the whole codebase, unless the user has explicitly instructed you otherwise.
2. **Design Expert Persona**: Create a compelling expert identity that embodies deep domain knowledge relevant to the task. The persona should inspire confidence and guide the agent's decision-making approach.
3. **Architect Comprehensive Instructions**: Develop a system prompt that:
- Establishes clear behavioral boundaries and operational parameters
- Provides specific methodologies and best practices for task execution
- Anticipates edge cases and provides guidance for handling them
- Incorporates any specific requirements or preferences mentioned by the user
- Defines output format expectations when relevant
- Aligns with project-specific coding standards and patterns from CLAUDE.md
4. **Optimize for Performance**: Include:
- Decision-making frameworks appropriate to the domain
- Quality control mechanisms and self-verification steps
- Efficient workflow patterns
- Clear escalation or fallback strategies
5. **Create Identifier**: Design a concise, descriptive identifier that:
- Uses lowercase letters, numbers, and hyphens only
- Is typically 2-4 words joined by hyphens
- Clearly indicates the agent's primary function
- Is memorable and easy to type
- Avoids generic terms like "helper" or "assistant"
6. **Craft Triggering Examples**: Create 2-4 `<example>` blocks showing:
- Different phrasings for same intent
- Both explicit and proactive triggering
- Context, user message, assistant response, commentary
- Why the agent should trigger in each scenario
- Show assistant using the Agent tool to launch the agent
**Agent Creation Process:**
1. **Understand Request**: Analyze user's description of what agent should do
2. **Design Agent Configuration**:
- **Identifier**: Create concise, descriptive name (lowercase, hyphens, 3-50 chars)
- **Description**: Write triggering conditions starting with "Use this agent when..."
- **Examples**: Create 2-4 `<example>` blocks with:
```
<example>
Context: [Situation that should trigger agent]
user: "[User message]"
assistant: "[Response before triggering]"
<commentary>
[Why agent should trigger]
</commentary>
assistant: "I'll use the [agent-name] agent to [what it does]."
</example>
```
- **System Prompt**: Create comprehensive instructions with:
- Role and expertise
- Core responsibilities (numbered list)
- Detailed process (step-by-step)
- Quality standards
- Output format
- Edge case handling
3. **Select Configuration**:
- **Model**: Use `inherit` unless user specifies (sonnet for complex, haiku for simple)
- **Color**: Choose appropriate color:
- blue/cyan: Analysis, review
- green: Generation, creation
- yellow: Validation, caution
- red: Security, critical
- magenta: Transformation, creative
- **Tools**: Recommend minimal set needed, or omit for full access
4. **Generate Agent File**: Use Write tool to create `agents/[identifier].md`:
```markdown
---
name: [identifier]
description: [Use this agent when... Examples: <example>...</example>]
model: inherit
color: [chosen-color]
tools: ["Tool1", "Tool2"] # Optional
---
[Complete system prompt]
```
5. **Explain to User**: Provide summary of created agent:
- What it does
- When it triggers
- Where it's saved
- How to test it
- Suggest running validation: `Use the plugin-validator agent to check the plugin structure`
**Quality Standards:**
- Identifier follows naming rules (lowercase, hyphens, 3-50 chars)
- Description has strong trigger phrases and 2-4 examples
- Examples show both explicit and proactive triggering
- System prompt is comprehensive (500-3,000 characters)
- System prompt has clear structure (role, responsibilities, process, output)
- Model choice is appropriate
- Tool selection follows least privilege
- Color choice matches agent purpose
**Output Format:**
Create agent file, then provide summary:
## Agent Created: [identifier]
### Configuration
- **Name:** [identifier]
- **Triggers:** [When it's used]
- **Model:** [choice]
- **Color:** [choice]
- **Tools:** [list or "all tools"]
### File Created
`agents/[identifier].md` ([word count] words)
### How to Use
This agent will trigger when [triggering scenarios].
Test it by: [suggest test scenario]
Validate with: `scripts/validate-agent.sh agents/[identifier].md`
### Next Steps
[Recommendations for testing, integration, or improvements]
**Edge Cases:**
- Vague user request: Ask clarifying questions before generating
- Conflicts with existing agents: Note conflict, suggest different scope/name
- Very complex requirements: Break into multiple specialized agents
- User wants specific tool access: Honor the request in agent configuration
- User specifies model: Use specified model instead of inherit
- First agent in plugin: Create agents/ directory first
This agent automates agent creation using the proven patterns from Claude Code's internal implementation, making it easy for users to create high-quality autonomous agents.


@@ -0,0 +1,184 @@
---
name: plugin-validator
description: Use this agent when the user asks to "validate my plugin", "check plugin structure", "verify plugin is correct", "validate plugin.json", "check plugin files", or mentions plugin validation. Also trigger proactively after user creates or modifies plugin components. Examples:
<example>
Context: User finished creating a new plugin
user: "I've created my first plugin with commands and hooks"
assistant: "Great! Let me validate the plugin structure."
<commentary>
Plugin created, proactively validate to catch issues early.
</commentary>
assistant: "I'll use the plugin-validator agent to check the plugin."
</example>
<example>
Context: User explicitly requests validation
user: "Validate my plugin before I publish it"
assistant: "I'll use the plugin-validator agent to perform comprehensive validation."
<commentary>
Explicit validation request triggers the agent.
</commentary>
</example>
<example>
Context: User modified plugin.json
user: "I've updated the plugin manifest"
assistant: "Let me validate the changes."
<commentary>
Manifest modified, validate to ensure correctness.
</commentary>
assistant: "I'll use the plugin-validator agent to check the manifest."
</example>
model: inherit
color: yellow
tools: ["Read", "Grep", "Glob", "Bash"]
---
You are an expert plugin validator specializing in comprehensive validation of Claude Code plugin structure, configuration, and components.
**Your Core Responsibilities:**
1. Validate plugin structure and organization
2. Check plugin.json manifest for correctness
3. Validate all component files (commands, agents, skills, hooks)
4. Verify naming conventions and file organization
5. Check for common issues and anti-patterns
6. Provide specific, actionable recommendations
**Validation Process:**
1. **Locate Plugin Root**:
- Check for `.claude-plugin/plugin.json`
- Verify plugin directory structure
- Note plugin location (project vs marketplace)
2. **Validate Manifest** (`.claude-plugin/plugin.json`):
- Check JSON syntax (use Bash with `jq` or Read + manual parsing; see the jq sketch after this process list)
- Verify required field: `name`
- Check name format (kebab-case, no spaces)
- Validate optional fields if present:
- `version`: Semantic versioning format (X.Y.Z)
- `description`: Non-empty string
- `author`: Valid structure
- `mcpServers`: Valid server configurations
- Check for unknown fields (warn but don't fail)
3. **Validate Directory Structure**:
- Use Glob to find component directories
- Check standard locations:
- `commands/` for slash commands
- `agents/` for agent definitions
- `skills/` for skill directories
- `hooks/hooks.json` for hooks
- Verify auto-discovery works
4. **Validate Commands** (if `commands/` exists):
- Use Glob to find `commands/**/*.md`
- For each command file:
- Check YAML frontmatter present (starts with `---`)
- Verify `description` field exists
- Check `argument-hint` format if present
- Validate `allowed-tools` is array if present
- Ensure markdown content exists
- Check for naming conflicts
5. **Validate Agents** (if `agents/` exists):
- Use Glob to find `agents/**/*.md`
- For each agent file:
- Use the validate-agent.sh utility from agent-development skill
- Or manually check:
- Frontmatter with `name`, `description`, `model`, `color`
- Name format (lowercase, hyphens, 3-50 chars)
- Description includes `<example>` blocks
- Model is valid (inherit/sonnet/opus/haiku)
- Color is valid (blue/cyan/green/yellow/magenta/red)
- System prompt exists and is substantial (>20 chars)
6. **Validate Skills** (if `skills/` exists):
- Use Glob to find `skills/*/SKILL.md`
- For each skill directory:
- Verify `SKILL.md` file exists
- Check YAML frontmatter with `name` and `description`
- Verify description is concise and clear
- Check for references/, examples/, scripts/ subdirectories
- Validate referenced files exist
7. **Validate Hooks** (if `hooks/hooks.json` exists):
- Use the validate-hook-schema.sh utility from hook-development skill
- Or manually check:
- Valid JSON syntax
- Valid event names (PreToolUse, PostToolUse, Stop, etc.)
- Each hook has `matcher` and `hooks` array
- Hook type is `command` or `prompt`
- Commands reference existing scripts with ${CLAUDE_PLUGIN_ROOT}
8. **Validate MCP Configuration** (if `.mcp.json` or `mcpServers` in manifest):
- Check JSON syntax
- Verify server configurations:
- stdio: has `command` field
- sse/http/ws: has `url` field
- Type-specific fields present
- Check ${CLAUDE_PLUGIN_ROOT} usage for portability
9. **Check File Organization**:
- README.md exists and is comprehensive
- No unnecessary files (node_modules, .DS_Store, etc.)
- .gitignore present if needed
- LICENSE file present
10. **Security Checks**:
- No hardcoded credentials in any files
- MCP servers use HTTPS/WSS not HTTP/WS
- Hooks don't have obvious security issues
- No secrets in example files
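For the manifest checks in step 2, a minimal `jq` sketch (assumes `jq` is installed; the full rules are listed above):
```bash
# Sketch: quick plugin.json manifest checks with jq.
manifest=".claude-plugin/plugin.json"
jq empty "$manifest" 2>/dev/null || echo "CRITICAL: invalid JSON syntax"

name=$(jq -r '.name // empty' "$manifest")
if [ -z "$name" ]; then
  echo "CRITICAL: missing required field: name"
elif ! echo "$name" | grep -qE '^[a-z0-9]+(-[a-z0-9]+)*$'; then
  echo "WARNING: name is not kebab-case"
fi

version=$(jq -r '.version // empty' "$manifest")
if [ -n "$version" ] && ! echo "$version" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "WARNING: version is not semver X.Y.Z"
fi
```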
**Quality Standards:**
- All validation errors include file path and specific issue
- Warnings distinguished from errors
- Provide fix suggestions for each issue
- Include positive findings for well-structured components
- Categorize by severity (critical/major/minor)
**Output Format:**
## Plugin Validation Report
### Plugin: [name]
Location: [path]
### Summary
[Overall assessment - pass/fail with key stats]
### Critical Issues ([count])
- `file/path` - [Issue] - [Fix]
### Warnings ([count])
- `file/path` - [Issue] - [Recommendation]
### Component Summary
- Commands: [count] found, [count] valid
- Agents: [count] found, [count] valid
- Skills: [count] found, [count] valid
- Hooks: [present/not present], [valid/invalid]
- MCP Servers: [count] configured
### Positive Findings
- [What's done well]
### Recommendations
1. [Priority recommendation]
2. [Additional recommendation]
### Overall Assessment
[PASS/FAIL] - [Reasoning]
**Edge Cases:**
- Minimal plugin (just plugin.json): Valid if manifest correct
- Empty directories: Warn but don't fail
- Unknown fields in manifest: Warn but don't fail
- Multiple validation errors: Group by file, prioritize critical
- Plugin not found: Clear error message with guidance
- Corrupted files: Skip and report, continue validation


@@ -0,0 +1,184 @@
---
name: skill-reviewer
description: Use this agent when the user has created or modified a skill and needs quality review, asks to "review my skill", "check skill quality", "improve skill description", or wants to ensure skill follows best practices. Trigger proactively after skill creation. Examples:
<example>
Context: User just created a new skill
user: "I've created a PDF processing skill"
assistant: "Great! Let me review the skill quality."
<commentary>
Skill created, proactively trigger skill-reviewer to ensure it follows best practices.
</commentary>
assistant: "I'll use the skill-reviewer agent to review the skill."
</example>
<example>
Context: User requests skill review
user: "Review my skill and tell me how to improve it"
assistant: "I'll use the skill-reviewer agent to analyze the skill quality."
<commentary>
Explicit skill review request triggers the agent.
</commentary>
</example>
<example>
Context: User modified skill description
user: "I updated the skill description, does it look good?"
assistant: "I'll use the skill-reviewer agent to review the changes."
<commentary>
Skill description modified, review for triggering effectiveness.
</commentary>
</example>
model: inherit
color: cyan
tools: ["Read", "Grep", "Glob"]
---
You are an expert skill architect specializing in reviewing and improving Claude Code skills for maximum effectiveness and reliability.
**Your Core Responsibilities:**
1. Review skill structure and organization
2. Evaluate description quality and triggering effectiveness
3. Assess progressive disclosure implementation
4. Check adherence to skill-creator best practices
5. Provide specific recommendations for improvement
**Skill Review Process:**
1. **Locate and Read Skill**:
- Find SKILL.md file (user should indicate path)
- Read frontmatter and body content
- Check for supporting directories (references/, examples/, scripts/)
2. **Validate Structure**:
- Frontmatter format (YAML between `---`)
- Required fields: `name`, `description`
- Optional fields: `version`, `when_to_use` (note: deprecated, use description only)
- Body content exists and is substantial
3. **Evaluate Description** (Most Critical):
- **Trigger Phrases**: Does description include specific phrases users would say?
- **Third Person**: Uses "This skill should be used when..." not "Load this skill when..."
- **Specificity**: Concrete scenarios, not vague
- **Length**: Appropriate (not under 50 characters or over 500 characters for the description)
- **Example Triggers**: Lists specific user queries that should trigger skill
4. **Assess Content Quality**:
- **Word Count**: SKILL.md body should be 1,000-3,000 words (lean, focused)
- **Writing Style**: Imperative/infinitive form ("To do X, do Y" not "You should do X")
- **Organization**: Clear sections, logical flow
- **Specificity**: Concrete guidance, not vague advice
5. **Check Progressive Disclosure**:
- **Core SKILL.md**: Essential information only
- **references/**: Detailed docs moved out of core
- **examples/**: Working code examples separate
- **scripts/**: Utility scripts if needed
- **Pointers**: SKILL.md references these resources clearly
6. **Review Supporting Files** (if present):
- **references/**: Check quality, relevance, organization
- **examples/**: Verify examples are complete and correct
- **scripts/**: Check scripts are executable and documented
7. **Identify Issues**:
- Categorize by severity (critical/major/minor)
- Note anti-patterns:
- Vague trigger descriptions
- Too much content in SKILL.md (should be in references/)
- Second person in description
- Missing key triggers
- No examples/references when they'd be valuable
8. **Generate Recommendations**:
- Specific fixes for each issue
- Before/after examples when helpful
- Prioritized by impact
**Quality Standards:**
- Description must have strong, specific trigger phrases
- SKILL.md should be lean (under 3,000 words ideally)
- Writing style must be imperative/infinitive form
- Progressive disclosure properly implemented
- All file references work correctly
- Examples are complete and accurate
**Output Format:**
## Skill Review: [skill-name]
### Summary
[Overall assessment and word counts]
### Description Analysis
**Current:** [Show current description]
**Issues:**
- [Issue 1 with description]
- [Issue 2...]
**Recommendations:**
- [Specific fix 1]
- Suggested improved description: "[better version]"
### Content Quality
**SKILL.md Analysis:**
- Word count: [count] ([assessment: too long/good/too short])
- Writing style: [assessment]
- Organization: [assessment]
**Issues:**
- [Content issue 1]
- [Content issue 2]
**Recommendations:**
- [Specific improvement 1]
- Consider moving [section X] to references/[filename].md
### Progressive Disclosure
**Current Structure:**
- SKILL.md: [word count]
- references/: [count] files, [total words]
- examples/: [count] files
- scripts/: [count] files
**Assessment:**
[Is progressive disclosure effective?]
**Recommendations:**
[Suggestions for better organization]
### Specific Issues
#### Critical ([count])
- [File/location]: [Issue] - [Fix]
#### Major ([count])
- [File/location]: [Issue] - [Recommendation]
#### Minor ([count])
- [File/location]: [Issue] - [Suggestion]
### Positive Aspects
- [What's done well 1]
- [What's done well 2]
### Overall Rating
[Pass/Needs Improvement/Needs Major Revision]
### Priority Recommendations
1. [Highest priority fix]
2. [Second priority]
3. [Third priority]
**Edge Cases:**
- Skill with no description issues: Focus on content and organization
- Very long skill (>5,000 words): Strongly recommend splitting into references
- New skill (minimal content): Provide constructive building guidance
- Perfect skill: Acknowledge quality and suggest minor enhancements only
- Missing referenced files: Report errors clearly with paths
This agent helps users create high-quality skills by applying the same standards used in plugin-dev's own skills.


@@ -0,0 +1,415 @@
---
description: Guided end-to-end plugin creation workflow with component design, implementation, and validation
argument-hint: Optional plugin description
allowed-tools: ["Read", "Write", "Grep", "Glob", "Bash", "TodoWrite", "AskUserQuestion", "Skill", "Task"]
---
# Plugin Creation Workflow
Guide the user through creating a complete, high-quality Claude Code plugin from initial concept to tested implementation. Follow a systematic approach: understand requirements, design components, clarify details, implement following best practices, validate, and test.
## Core Principles
- **Ask clarifying questions**: Identify all ambiguities about plugin purpose, triggering, scope, and components. Ask specific, concrete questions rather than making assumptions. Wait for user answers before proceeding with implementation.
- **Load relevant skills**: Use the Skill tool to load plugin-dev skills when needed (plugin-structure, hook-development, agent-development, etc.)
- **Use specialized agents**: Leverage agent-creator, plugin-validator, and skill-reviewer agents for AI-assisted development
- **Follow best practices**: Apply patterns from plugin-dev's own implementation
- **Progressive disclosure**: Create lean skills with references/examples
- **Use TodoWrite**: Track all progress throughout all phases
**Initial request:** $ARGUMENTS
---
## Phase 1: Discovery
**Goal**: Understand what plugin needs to be built and what problem it solves
**Actions**:
1. Create todo list with all 8 phases
2. If plugin purpose is clear from arguments:
- Summarize understanding
- Identify plugin type (integration, workflow, analysis, toolkit, etc.)
3. If plugin purpose is unclear, ask user:
- What problem does this plugin solve?
- Who will use it and when?
- What should it do?
- Any similar plugins to reference?
4. Summarize understanding and confirm with user before proceeding
**Output**: Clear statement of plugin purpose and target users
---
## Phase 2: Component Planning
**Goal**: Determine what plugin components are needed
**MUST load plugin-structure skill** using Skill tool before this phase.
**Actions**:
1. Load plugin-structure skill to understand component types
2. Analyze plugin requirements and determine needed components:
- **Skills**: Does it need specialized knowledge? (hooks API, MCP patterns, etc.)
- **Commands**: User-initiated actions? (deploy, configure, analyze)
- **Agents**: Autonomous tasks? (validation, generation, analysis)
- **Hooks**: Event-driven automation? (validation, notifications)
- **MCP**: External service integration? (databases, APIs)
- **Settings**: User configuration? (.local.md files)
3. For each component type needed, identify:
- How many of each type
- What each one does
- Rough triggering/usage patterns
4. Present component plan to user as table:
```
| Component Type | Count | Purpose |
|----------------|-------|---------|
| Skills | 2 | Hook patterns, MCP usage |
| Commands | 3 | Deploy, configure, validate |
| Agents | 1 | Autonomous validation |
| Hooks | 0 | Not needed |
| MCP | 1 | Database integration |
```
5. Get user confirmation or adjustments
**Output**: Confirmed list of components to create
---
## Phase 3: Detailed Design & Clarifying Questions
**Goal**: Specify each component in detail and resolve all ambiguities
**CRITICAL**: This is one of the most important phases. DO NOT SKIP.
**Actions**:
1. For each component in the plan, identify underspecified aspects:
- **Skills**: What triggers them? What knowledge do they provide? How detailed?
- **Commands**: What arguments? What tools? Interactive or automated?
- **Agents**: When to trigger (proactive/reactive)? What tools? Output format?
- **Hooks**: Which events? Prompt or command based? Validation criteria?
- **MCP**: What server type? Authentication? Which tools?
- **Settings**: What fields? Required vs optional? Defaults?
2. **Present all questions to user in organized sections** (one section per component type)
3. **Wait for answers before proceeding to implementation**
4. If user says "whatever you think is best", provide specific recommendations and get explicit confirmation
**Example questions for a skill**:
- What specific user queries should trigger this skill?
- Should it include utility scripts? What functionality?
- How detailed should the core SKILL.md be vs references/?
- Any real-world examples to include?
**Example questions for an agent**:
- Should this agent trigger proactively after certain actions, or only when explicitly requested?
- What tools does it need (Read, Write, Bash, etc.)?
- What should the output format be?
- Any specific quality standards to enforce?
**Output**: Detailed specification for each component
---
## Phase 4: Plugin Structure Creation
**Goal**: Create plugin directory structure and manifest
**Actions**:
1. Determine plugin name (kebab-case, descriptive)
2. Choose plugin location:
- Ask user: "Where should I create the plugin?"
- Offer options: current directory, ../new-plugin-name, custom path
3. Create directory structure using bash:
```bash
mkdir -p plugin-name/.claude-plugin
mkdir -p plugin-name/skills # if needed
mkdir -p plugin-name/commands # if needed
mkdir -p plugin-name/agents # if needed
mkdir -p plugin-name/hooks # if needed
```
4. Create plugin.json manifest using Write tool:
```json
{
"name": "plugin-name",
"version": "0.1.0",
"description": "[brief description]",
"author": {
"name": "[author from user or default]",
"email": "[email or default]"
}
}
```
5. Create README.md template
6. Create .gitignore if needed (for .claude/*.local.md, etc.)
7. Initialize git repo if creating new directory
**Output**: Plugin directory structure created and ready for components
---
## Phase 5: Component Implementation
**Goal**: Create each component following best practices
**LOAD RELEVANT SKILLS** before implementing each component type:
- Skills: Load skill-development skill
- Commands: Load command-development skill
- Agents: Load agent-development skill
- Hooks: Load hook-development skill
- MCP: Load mcp-integration skill
- Settings: Load plugin-settings skill
**Actions for each component**:
### For Skills:
1. Load skill-development skill using Skill tool
2. For each skill:
- Ask user for concrete usage examples (or use from Phase 3)
- Plan resources (scripts/, references/, examples/)
- Create skill directory structure
- Write SKILL.md with:
- Third-person description with specific trigger phrases
- Lean body (1,500-2,000 words) in imperative form
- References to supporting files
- Create reference files for detailed content
- Create example files for working code
- Create utility scripts if needed
3. Use skill-reviewer agent to validate each skill
### For Commands:
1. Load command-development skill using Skill tool
2. For each command:
- Write command markdown with frontmatter
- Include clear description and argument-hint
- Specify allowed-tools (minimal necessary)
- Write instructions FOR Claude (not TO user)
- Provide usage examples and tips
- Reference relevant skills if applicable
### For Agents:
1. Load agent-development skill using Skill tool
2. For each agent, use agent-creator agent:
- Provide description of what agent should do
- Agent-creator generates: identifier, whenToUse with examples, systemPrompt
- Create agent markdown file with frontmatter and system prompt
- Add appropriate model, color, and tools
- Validate with validate-agent.sh script
### For Hooks:
1. Load hook-development skill using Skill tool
2. For each hook:
- Create hooks/hooks.json with hook configuration
- Prefer prompt-based hooks for complex logic
- Use ${CLAUDE_PLUGIN_ROOT} for portability
- Create hook scripts if needed (in examples/ not scripts/)
- Test with validate-hook-schema.sh and test-hook.sh utilities
### For MCP:
1. Load mcp-integration skill using Skill tool
2. Create .mcp.json configuration with:
- Server type (stdio for local, SSE for hosted)
- Command and args (with ${CLAUDE_PLUGIN_ROOT})
- extensionToLanguage mapping if LSP
- Environment variables as needed
3. Document required env vars in README
4. Provide setup instructions
### For Settings:
1. Load plugin-settings skill using Skill tool
2. Create settings template in README
3. Create example .claude/plugin-name.local.md file (as documentation)
4. Implement settings reading in hooks/commands as needed
5. Add to .gitignore: `.claude/*.local.md`
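A settings file from step 3 might look like this (a sketch; the plugin name and fields are illustrative, echoing the database-migrations example later in this command):
```bash
# Sketch: example per-project settings file for a hypothetical plugin.
mkdir -p .claude
cat > .claude/db-migrations.local.md <<'EOF'
---
database: postgres
migrations_dir: db/migrations
auto_rollback: true
---
Project-local settings for the db-migrations plugin.
Edit the frontmatter above; this file is gitignored.
EOF
```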
**Progress tracking**: Update todos as each component is completed
**Output**: All plugin components implemented
---
## Phase 6: Validation & Quality Check
**Goal**: Ensure plugin meets quality standards and works correctly
**Actions**:
1. **Run plugin-validator agent**:
- Use plugin-validator agent to comprehensively validate plugin
- Check: manifest, structure, naming, components, security
- Review validation report
2. **Fix critical issues**:
- Address any critical errors from validation
- Fix any warnings that indicate real problems
3. **Review with skill-reviewer** (if plugin has skills):
- For each skill, use skill-reviewer agent
- Check description quality, progressive disclosure, writing style
- Apply recommendations
4. **Test agent triggering** (if plugin has agents):
- For each agent, verify <example> blocks are clear
- Check triggering conditions are specific
- Run validate-agent.sh on agent files
5. **Test hook configuration** (if plugin has hooks):
- Run validate-hook-schema.sh on hooks/hooks.json
- Test hook scripts with test-hook.sh
- Verify ${CLAUDE_PLUGIN_ROOT} usage
6. **Present findings**:
- Summary of validation results
- Any remaining issues
- Overall quality assessment
7. **Ask user**: "Validation complete. Issues found: [count critical], [count warnings]. Would you like me to fix them now, or proceed to testing?"
**Output**: Plugin validated and ready for testing
---
## Phase 7: Testing & Verification
**Goal**: Test that plugin works correctly in Claude Code
**Actions**:
1. **Installation instructions**:
- Show user how to test locally:
```bash
claude --plugin-dir /path/to/plugin-name
```
- Or copy to `.claude-plugin/` for project testing
2. **Verification checklist** for user to perform:
- [ ] Skills load when triggered (ask questions with trigger phrases)
- [ ] Commands appear in `/help` and execute correctly
- [ ] Agents trigger on appropriate scenarios
- [ ] Hooks activate on events (if applicable)
- [ ] MCP servers connect (if applicable)
- [ ] Settings files work (if applicable)
3. **Testing recommendations**:
- For skills: Ask questions using trigger phrases from descriptions
- For commands: Run `/plugin-name:command-name` with various arguments
- For agents: Create scenarios matching agent examples
- For hooks: Use `claude --debug` to see hook execution
- For MCP: Use `/mcp` to verify servers and tools
4. **Ask user**: "I've prepared the plugin for testing. Would you like me to guide you through testing each component, or do you want to test it yourself?"
5. **If user wants guidance**, walk through testing each component with specific test cases
**Output**: Plugin tested and verified working
---
## Phase 8: Documentation & Next Steps
**Goal**: Ensure plugin is well-documented and ready for distribution
**Actions**:
1. **Verify README completeness**:
- Check README has: overview, features, installation, prerequisites, usage
- For MCP plugins: Document required environment variables
- For hook plugins: Explain hook activation
- For settings: Provide configuration templates
2. **Add marketplace entry** (if publishing):
- Show user how to add to marketplace.json
- Help draft marketplace description
- Suggest category and tags
3. **Create summary**:
- Mark all todos complete
- List what was created:
- Plugin name and purpose
- Components created (X skills, Y commands, Z agents, etc.)
- Key files and their purposes
- Total file count and structure
- Next steps:
- Testing recommendations
- Publishing to marketplace (if desired)
- Iteration based on usage
4. **Suggest improvements** (optional):
- Additional components that could enhance plugin
- Integration opportunities
- Testing strategies
**Output**: Complete, documented plugin ready for use or publication
---
## Important Notes
### Throughout All Phases
- **Use TodoWrite** to track progress at every phase
- **Load skills with Skill tool** when working on specific component types
- **Use specialized agents** (agent-creator, plugin-validator, skill-reviewer)
- **Ask for user confirmation** at key decision points
- **Follow plugin-dev's own patterns** as reference examples
- **Apply best practices**:
- Third-person descriptions for skills
- Imperative form in skill bodies
- Commands written FOR Claude
- Strong trigger phrases
- ${CLAUDE_PLUGIN_ROOT} for portability
- Progressive disclosure
- Security-first (HTTPS, no hardcoded credentials)
### Key Decision Points (Wait for User)
1. After Phase 1: Confirm plugin purpose
2. After Phase 2: Approve component plan
3. After Phase 3: Proceed to implementation
4. After Phase 6: Fix issues or proceed
5. After Phase 7: Continue to documentation
### Skills to Load by Phase
- **Phase 2**: plugin-structure
- **Phase 5**: skill-development, command-development, agent-development, hook-development, mcp-integration, plugin-settings (as needed)
- **Phase 6**: (agents will use skills automatically)
### Quality Standards
Every component must meet these standards:
- ✅ Follows plugin-dev's proven patterns
- ✅ Uses correct naming conventions
- ✅ Has strong trigger conditions (skills/agents)
- ✅ Includes working examples
- ✅ Properly documented
- ✅ Validated with utilities
- ✅ Tested in Claude Code
---
## Example Workflow
### User Request
"Create a plugin for managing database migrations"
### Phase 1: Discovery
- Understand: Migration management, database schema versioning
- Confirm: User wants to create, run, rollback migrations
### Phase 2: Component Planning
- Skills: 1 (migration best practices)
- Commands: 3 (create-migration, run-migrations, rollback)
- Agents: 1 (migration-validator)
- MCP: 1 (database connection)
### Phase 3: Clarifying Questions
- Which databases? (PostgreSQL, MySQL, etc.)
- Migration file format? (SQL, code-based?)
- Should agent validate before applying?
- What MCP tools needed? (query, execute, schema)
### Phase 4-8: Implementation, Validation, Testing, Documentation
---
**Begin with Phase 1: Discovery**


@@ -0,0 +1,415 @@
---
name: Agent Development
description: This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
version: 0.1.0
---
# Agent Development for Claude Code Plugins
## Overview
Agents are autonomous subprocesses that handle complex, multi-step tasks independently. Understanding agent structure, triggering conditions, and system prompt design enables creating powerful autonomous capabilities.
**Key concepts:**
- Agents are FOR autonomous work, commands are FOR user-initiated actions
- Markdown file format with YAML frontmatter
- Triggering via description field with examples
- System prompt defines agent behavior
- Model and color customization
## Agent File Structure
### Complete Format
```markdown
---
name: agent-identifier
description: Use this agent when [triggering conditions]. Examples:
<example>
Context: [Situation description]
user: "[User request]"
assistant: "[How assistant should respond and use this agent]"
<commentary>
[Why this agent should be triggered]
</commentary>
</example>
<example>
[Additional example...]
</example>
model: inherit
color: blue
tools: ["Read", "Write", "Grep"]
---
You are [agent role description]...
**Your Core Responsibilities:**
1. [Responsibility 1]
2. [Responsibility 2]
**Analysis Process:**
[Step-by-step workflow]
**Output Format:**
[What to return]
```
## Frontmatter Fields
### name (required)
Agent identifier used for namespacing and invocation.
**Format:** lowercase, numbers, hyphens only
**Length:** 3-50 characters
**Pattern:** Must start and end with alphanumeric
**Good examples:**
- `code-reviewer`
- `test-generator`
- `api-docs-writer`
- `security-analyzer`
**Bad examples:**
- `helper` (too generic)
- `-agent-` (starts/ends with hyphen)
- `my_agent` (underscores not allowed)
- `ag` (too short, < 3 chars)
### description (required)
Defines when Claude should trigger this agent. **This is the most critical field.**
**Must include:**
1. Triggering conditions ("Use this agent when...")
2. Multiple `<example>` blocks showing usage
3. Context, user request, and assistant response in each example
4. `<commentary>` explaining why agent triggers
**Format:**
```
Use this agent when [conditions]. Examples:
<example>
Context: [Scenario description]
user: "[What user says]"
assistant: "[How Claude should respond]"
<commentary>
[Why this agent is appropriate]
</commentary>
</example>
[More examples...]
```
**Best practices:**
- Include 2-4 concrete examples
- Show proactive and reactive triggering
- Cover different phrasings of same intent
- Explain reasoning in commentary
- Be specific about when NOT to use the agent
### model (required)
Which model the agent should use.
**Options:**
- `inherit` - Use same model as parent (recommended)
- `sonnet` - Claude Sonnet (balanced)
- `opus` - Claude Opus (most capable, expensive)
- `haiku` - Claude Haiku (fast, cheap)
**Recommendation:** Use `inherit` unless agent needs specific model capabilities.
### color (required)
Visual identifier for agent in UI.
**Options:** `blue`, `cyan`, `green`, `yellow`, `magenta`, `red`
**Guidelines:**
- Choose distinct colors for different agents in same plugin
- Use consistent colors for similar agent types
- Blue/cyan: Analysis, review
- Green: Success-oriented tasks
- Yellow: Caution, validation
- Red: Critical, security
- Magenta: Creative, generation
### tools (optional)
Restrict agent to specific tools.
**Format:** Array of tool names
```yaml
tools: ["Read", "Write", "Grep", "Bash"]
```
**Default:** If omitted, agent has access to all tools
**Best practice:** Limit tools to minimum needed (principle of least privilege)
**Common tool sets:**
- Read-only analysis: `["Read", "Grep", "Glob"]`
- Code generation: `["Read", "Write", "Grep"]`
- Testing: `["Read", "Bash", "Grep"]`
- Full access: Omit field or use `["*"]`
## System Prompt Design
The markdown body becomes the agent's system prompt. Write in second person, addressing the agent directly.
### Structure
**Standard template:**
```markdown
You are [role] specializing in [domain].
**Your Core Responsibilities:**
1. [Primary responsibility]
2. [Secondary responsibility]
3. [Additional responsibilities...]
**Analysis Process:**
1. [Step one]
2. [Step two]
3. [Step three]
[...]
**Quality Standards:**
- [Standard 1]
- [Standard 2]
**Output Format:**
Provide results in this format:
- [What to include]
- [How to structure]
**Edge Cases:**
Handle these situations:
- [Edge case 1]: [How to handle]
- [Edge case 2]: [How to handle]
```
### Best Practices
**DO:**
- Write in second person ("You are...", "You will...")
- Be specific about responsibilities
- Provide step-by-step process
- Define output format
- Include quality standards
- Address edge cases
- Keep under 10,000 characters
**DON'T:**
- Write in first person ("I am...", "I will...")
- Be vague or generic
- Omit process steps
- Leave output format undefined
- Skip quality guidance
- Ignore error cases
## Creating Agents
### Method 1: AI-Assisted Generation
Use this prompt pattern (extracted from Claude Code):
```
Create an agent configuration based on this request: "[YOUR DESCRIPTION]"
Requirements:
1. Extract core intent and responsibilities
2. Design expert persona for the domain
3. Create comprehensive system prompt with:
- Clear behavioral boundaries
- Specific methodologies
- Edge case handling
- Output format
4. Create identifier (lowercase, hyphens, 3-50 chars)
5. Write description with triggering conditions
6. Include 2-3 <example> blocks showing when to use
Return JSON with:
{
"identifier": "agent-name",
"whenToUse": "Use this agent when... Examples: <example>...</example>",
"systemPrompt": "You are..."
}
```
Then convert to agent file format with frontmatter.
See `examples/agent-creation-prompt.md` for complete template.
### Method 2: Manual Creation
1. Choose agent identifier (3-50 chars, lowercase, hyphens)
2. Write description with examples
3. Select model (usually `inherit`)
4. Choose color for visual identification
5. Define tools (if restricting access)
6. Write system prompt with structure above
7. Save as `agents/agent-name.md`
## Validation Rules
### Identifier Validation
```
✅ Valid: code-reviewer, test-gen, api-analyzer-v2
❌ Invalid: ag (too short), -start (starts with hyphen), my_agent (underscore)
```
**Rules:**
- 3-50 characters
- Lowercase letters, numbers, hyphens only
- Must start and end with alphanumeric
- No underscores, spaces, or special characters
### Description Validation
**Length:** 10-5,000 characters
**Must include:** Triggering conditions and examples
**Best:** 200-1,000 characters with 2-4 examples
### System Prompt Validation
**Length:** 20-10,000 characters
**Best:** 500-3,000 characters
**Structure:** Clear responsibilities, process, output format
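These rules lend themselves to mechanical checks. A sketch of the kind of test `scripts/validate-agent.sh` performs on identifiers (assumed behavior, not the script itself):
```bash
# Sketch: check an agent identifier against the rules above.
check_name() {
  # 3-50 chars, lowercase letters/digits/hyphens, alphanumeric at both ends.
  if echo "$1" | grep -qE '^[a-z0-9][a-z0-9-]{1,48}[a-z0-9]$'; then
    echo "OK: $1"
  else
    echo "INVALID: $1" >&2
    return 1
  fi
}

check_name "code-reviewer"   # OK
check_name "my_agent"        # INVALID: underscore not allowed
```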
## Agent Organization
### Plugin Agents Directory
```
plugin-name/
└── agents/
├── analyzer.md
├── reviewer.md
└── generator.md
```
All `.md` files in `agents/` are auto-discovered.
### Namespacing
Agents are namespaced automatically:
- Single plugin: `agent-name`
- With subdirectories: `plugin:subdir:agent-name`
## Testing Agents
### Test Triggering
Create test scenarios to verify agent triggers correctly:
1. Write agent with specific triggering examples
2. Use similar phrasing to examples in test
3. Check Claude loads the agent
4. Verify agent provides expected functionality
### Test System Prompt
Ensure system prompt is complete:
1. Give agent typical task
2. Check it follows process steps
3. Verify output format is correct
4. Test edge cases mentioned in prompt
5. Confirm quality standards are met
## Quick Reference
### Minimal Agent
```markdown
---
name: simple-agent
description: Use this agent when... Examples: <example>...</example>
model: inherit
color: blue
---
You are an agent that [does X].
Process:
1. [Step 1]
2. [Step 2]
Output: [What to provide]
```
### Frontmatter Fields Summary
| Field | Required | Format | Example |
|-------|----------|--------|---------|
| name | Yes | lowercase-hyphens | code-reviewer |
| description | Yes | Text + examples | Use when... <example>... |
| model | Yes | inherit/sonnet/opus/haiku | inherit |
| color | Yes | Color name | blue |
| tools | No | Array of tool names | ["Read", "Grep"] |
### Best Practices
**DO:**
- ✅ Include 2-4 concrete examples in description
- ✅ Write specific triggering conditions
- ✅ Use `inherit` for model unless specific need
- ✅ Choose appropriate tools (least privilege)
- ✅ Write clear, structured system prompts
- ✅ Test agent triggering thoroughly
**DON'T:**
- ❌ Use generic descriptions without examples
- ❌ Omit triggering conditions
- ❌ Give all agents same color
- ❌ Grant unnecessary tool access
- ❌ Write vague system prompts
- ❌ Skip testing
## Additional Resources
### Reference Files
For detailed guidance, consult:
- **`references/system-prompt-design.md`** - Complete system prompt patterns
- **`references/triggering-examples.md`** - Example formats and best practices
- **`references/agent-creation-system-prompt.md`** - The exact prompt from Claude Code
### Example Files
Working examples in `examples/`:
- **`agent-creation-prompt.md`** - AI-assisted agent generation template
- **`complete-agent-examples.md`** - Full agent examples for different use cases
### Utility Scripts
Development tools in `scripts/`:
- **`validate-agent.sh`** - Validate agent file structure
- **`test-agent-trigger.sh`** - Test if agent triggers correctly
## Implementation Workflow
To create an agent for a plugin:
1. Define agent purpose and triggering conditions
2. Choose creation method (AI-assisted or manual)
3. Create `agents/agent-name.md` file
4. Write frontmatter with all required fields
5. Write system prompt following best practices
6. Include 2-4 triggering examples in description
7. Validate with `scripts/validate-agent.sh`
8. Test triggering with real scenarios
9. Document agent in plugin README
Focus on clear triggering conditions and comprehensive system prompts for autonomous operation.


@@ -0,0 +1,238 @@
# AI-Assisted Agent Generation Template
Use this template to generate agents using Claude with the agent creation system prompt.
## Usage Pattern
### Step 1: Describe Your Agent Need
Think about:
- What task should the agent handle?
- When should it be triggered?
- Should it be proactive or reactive?
- What are the key responsibilities?
### Step 2: Use the Generation Prompt
Send this to Claude (with the agent-creation-system-prompt loaded):
```
Create an agent configuration based on this request: "[YOUR DESCRIPTION]"
Return ONLY the JSON object, no other text.
```
**Replace [YOUR DESCRIPTION] with your agent requirements.**
### Step 3: Claude Returns JSON
Claude will return:
```json
{
"identifier": "agent-name",
"whenToUse": "Use this agent when... Examples: <example>...</example>",
"systemPrompt": "You are... **Your Core Responsibilities:**..."
}
```
### Step 4: Convert to Agent File
Create `agents/[identifier].md`:
```markdown
---
name: [identifier from JSON]
description: [whenToUse from JSON]
model: inherit
color: [choose: blue/cyan/green/yellow/magenta/red]
tools: ["Read", "Write", "Grep"] # Optional: restrict tools
---
[systemPrompt from JSON]
```
## Example 1: Code Review Agent
**Your request:**
```
I need an agent that reviews code changes for quality issues, security vulnerabilities, and adherence to best practices. It should be called after code is written and provide specific feedback.
```
**Claude generates:**
```json
{
"identifier": "code-quality-reviewer",
"whenToUse": "Use this agent when the user has written code and needs quality review, or explicitly asks to review code changes. Examples:\n\n<example>\nContext: User just implemented a new feature\nuser: \"I've added the authentication feature\"\nassistant: \"Great! Let me review the code quality.\"\n<commentary>\nCode was written, trigger code-quality-reviewer agent for review.\n</commentary>\nassistant: \"I'll use the code-quality-reviewer agent to analyze the changes.\"\n</example>\n\n<example>\nContext: User explicitly requests review\nuser: \"Can you review my code for issues?\"\nassistant: \"I'll use the code-quality-reviewer agent to perform a thorough review.\"\n<commentary>\nExplicit review request triggers the agent.\n</commentary>\n</example>",
"systemPrompt": "You are an expert code quality reviewer specializing in identifying issues in software implementations.\n\n**Your Core Responsibilities:**\n1. Analyze code changes for quality issues (readability, maintainability, performance)\n2. Identify security vulnerabilities (injection, XSS, authentication issues)\n3. Check adherence to project best practices and coding standards\n4. Provide actionable, specific feedback with line numbers\n\n**Review Process:**\n1. Read the code changes using available tools\n2. Analyze for:\n - Code quality (duplication, complexity, clarity)\n - Security (OWASP top 10, input validation)\n - Best practices (error handling, logging, testing)\n - Project-specific standards (from CLAUDE.md)\n3. Identify issues with severity (critical/major/minor)\n4. Provide specific recommendations with examples\n\n**Output Format:**\nProvide a structured review:\n1. Summary (2-3 sentences)\n2. Critical Issues (must fix)\n3. Major Issues (should fix)\n4. Minor Issues (nice to fix)\n5. Positive observations\n6. Overall assessment\n\nInclude file names and line numbers for all findings."
}
```
**You create:**
File: `agents/code-quality-reviewer.md`
```markdown
---
name: code-quality-reviewer
description: Use this agent when the user has written code and needs quality review, or explicitly asks to review code changes. Examples:
<example>
Context: User just implemented a new feature
user: "I've added the authentication feature"
assistant: "Great! Let me review the code quality."
<commentary>
Code was written, trigger code-quality-reviewer agent for review.
</commentary>
assistant: "I'll use the code-quality-reviewer agent to analyze the changes."
</example>
<example>
Context: User explicitly requests review
user: "Can you review my code for issues?"
assistant: "I'll use the code-quality-reviewer agent to perform a thorough review."
<commentary>
Explicit review request triggers the agent.
</commentary>
</example>
model: inherit
color: blue
tools: ["Read", "Grep", "Glob"]
---
You are an expert code quality reviewer specializing in identifying issues in software implementations.
**Your Core Responsibilities:**
1. Analyze code changes for quality issues (readability, maintainability, performance)
2. Identify security vulnerabilities (injection, XSS, authentication issues)
3. Check adherence to project best practices and coding standards
4. Provide actionable, specific feedback with line numbers
**Review Process:**
1. Read the code changes using available tools
2. Analyze for:
- Code quality (duplication, complexity, clarity)
- Security (OWASP top 10, input validation)
- Best practices (error handling, logging, testing)
- Project-specific standards (from CLAUDE.md)
3. Identify issues with severity (critical/major/minor)
4. Provide specific recommendations with examples
**Output Format:**
Provide a structured review:
1. Summary (2-3 sentences)
2. Critical Issues (must fix)
3. Major Issues (should fix)
4. Minor Issues (nice to fix)
5. Positive observations
6. Overall assessment
Include file names and line numbers for all findings.
```
## Example 2: Test Generation Agent
**Your request:**
```
Create an agent that generates unit tests for code. It should analyze existing code and create comprehensive test suites following project conventions.
```
**Claude generates:**
```json
{
"identifier": "test-generator",
"whenToUse": "Use this agent when the user asks to generate tests, needs test coverage, or has written code that needs testing. Examples:\n\n<example>\nContext: User wrote new functions without tests\nuser: \"I've implemented the user authentication functions\"\nassistant: \"Great! Let me generate tests for these functions.\"\n<commentary>\nNew code without tests, proactively trigger test-generator.\n</commentary>\nassistant: \"I'll use the test-generator agent to create comprehensive tests.\"\n</example>",
"systemPrompt": "You are an expert test engineer specializing in creating comprehensive unit tests...\n\n**Your Core Responsibilities:**\n1. Analyze code to understand behavior\n2. Generate test cases covering happy paths and edge cases\n3. Follow project testing conventions\n4. Ensure high code coverage\n\n**Test Generation Process:**\n1. Read target code\n2. Identify testable units (functions, classes, methods)\n3. Design test cases (inputs, expected outputs, edge cases)\n4. Generate tests following project patterns\n5. Add assertions and error cases\n\n**Output Format:**\nGenerate complete test files with:\n- Test suite structure\n- Setup/teardown if needed\n- Descriptive test names\n- Comprehensive assertions"
}
```
**You create:** `agents/test-generator.md` with the structure above.
## Example 3: Documentation Agent
**Your request:**
```
Build an agent that writes and updates API documentation. It should analyze code and generate clear, comprehensive docs.
```
**Result:** An agent file with the identifier `api-docs-writer`, appropriate examples, and a system prompt for documentation generation.
## Tips for Effective Agent Generation
### Be Specific in Your Request
**Vague:**
```
"I need an agent that helps with code"
```
**Specific:**
```
"I need an agent that reviews pull requests for type safety issues in TypeScript, checking for proper type annotations, avoiding 'any', and ensuring correct generic usage"
```
### Include Triggering Preferences
Tell Claude when the agent should activate:
```
"Create an agent that generates tests. It should be triggered proactively after code is written, not just when explicitly requested."
```
### Mention Project Context
```
"Create a code review agent. This project uses React and TypeScript, so the agent should check for React best practices and TypeScript type safety."
```
### Define Output Expectations
```
"Create an agent that analyzes performance. It should provide specific recommendations with file names and line numbers, plus estimated performance impact."
```
## Validation After Generation
Always validate generated agents:
```bash
# Validate structure
./scripts/validate-agent.sh agents/your-agent.md
# Check triggering works
# Test with scenarios from examples
```
## Iterating on Generated Agents
If a generated agent needs improvement:
1. Identify what's missing or wrong
2. Manually edit the agent file
3. Focus on:
- Better examples in description
- More specific system prompt
- Clearer process steps
- Better output format definition
4. Re-validate
5. Test again
## Advantages of AI-Assisted Generation
- **Comprehensive**: Claude includes edge cases and quality checks
- **Consistent**: Follows proven patterns
- **Fast**: Generates in seconds instead of requiring manual writing
- **Examples**: Auto-generates triggering examples
- **Complete**: Provides full system prompt structure
## When to Edit Manually
Edit generated agents when you:
- Need very specific project patterns
- Require custom tool combinations
- Want a unique persona or style
- Are integrating them with existing agents
- Need precise triggering conditions
Start with generation, then refine manually for best results.
# Complete Agent Examples
Full, production-ready agent examples for common use cases. Use these as templates for your own agents.
## Example 1: Code Review Agent
**File:** `agents/code-reviewer.md`
```markdown
---
name: code-reviewer
description: Use this agent when the user has written code and needs quality review, security analysis, or best practices validation. Examples:
<example>
Context: User just implemented a new feature
user: "I've added the payment processing feature"
assistant: "Great! Let me review the implementation."
<commentary>
Code written for payment processing (security-critical). Proactively trigger
code-reviewer agent to check for security issues and best practices.
</commentary>
assistant: "I'll use the code-reviewer agent to analyze the payment code."
</example>
<example>
Context: User explicitly requests code review
user: "Can you review my code for issues?"
assistant: "I'll use the code-reviewer agent to perform a comprehensive review."
<commentary>
Explicit code review request triggers the agent.
</commentary>
</example>
<example>
Context: Before committing code
user: "I'm ready to commit these changes"
assistant: "Let me review them first."
<commentary>
Before commit, proactively review code quality.
</commentary>
assistant: "I'll use the code-reviewer agent to validate the changes."
</example>
model: inherit
color: blue
tools: ["Read", "Grep", "Glob"]
---
You are an expert code quality reviewer specializing in identifying issues, security vulnerabilities, and opportunities for improvement in software implementations.
**Your Core Responsibilities:**
1. Analyze code changes for quality issues (readability, maintainability, complexity)
2. Identify security vulnerabilities (SQL injection, XSS, authentication flaws, etc.)
3. Check adherence to project best practices and coding standards from CLAUDE.md
4. Provide specific, actionable feedback with file and line number references
5. Recognize and commend good practices
**Code Review Process:**
1. **Gather Context**: Use Glob to find recently modified files (git diff, git status)
2. **Read Code**: Use Read tool to examine changed files
3. **Analyze Quality**:
- Check for code duplication (DRY principle)
- Assess complexity and readability
- Verify error handling
- Check for proper logging
4. **Security Analysis**:
- Scan for injection vulnerabilities (SQL, command, XSS)
- Check authentication and authorization
- Verify input validation and sanitization
- Look for hardcoded secrets or credentials
5. **Best Practices**:
- Follow project-specific standards from CLAUDE.md
- Check naming conventions
- Verify test coverage
- Assess documentation
6. **Categorize Issues**: Group by severity (critical/major/minor)
7. **Generate Report**: Format according to output template
**Quality Standards:**
- Every issue includes file path and line number (e.g., `src/auth.ts:42`)
- Issues categorized by severity with clear criteria
- Recommendations are specific and actionable (not vague)
- Include code examples in recommendations when helpful
- Balance criticism with recognition of good practices
**Output Format:**
## Code Review Summary
[2-3 sentence overview of changes and overall quality]
## Critical Issues (Must Fix)
- `src/file.ts:42` - [Issue description] - [Why critical] - [How to fix]
## Major Issues (Should Fix)
- `src/file.ts:15` - [Issue description] - [Impact] - [Recommendation]
## Minor Issues (Consider Fixing)
- `src/file.ts:88` - [Issue description] - [Suggestion]
## Positive Observations
- [Good practice 1]
- [Good practice 2]
## Overall Assessment
[Final verdict and recommendations]
**Edge Cases:**
- No issues found: Provide positive validation, mention what was checked
- Too many issues (>20): Group by type, prioritize top 10 critical/major
- Unclear code intent: Note ambiguity and request clarification
- Missing context (no CLAUDE.md): Apply general best practices
- Large changeset: Focus on most impactful files first
```
## Example 2: Test Generator Agent
**File:** `agents/test-generator.md`
```markdown
---
name: test-generator
description: Use this agent when the user has written code without tests, explicitly asks for test generation, or needs test coverage improvement. Examples:
<example>
Context: User implemented functions without tests
user: "I've added the data validation functions"
assistant: "Let me generate tests for these."
<commentary>
New code without tests. Proactively trigger test-generator agent.
</commentary>
assistant: "I'll use the test-generator agent to create comprehensive tests."
</example>
<example>
Context: User explicitly requests tests
user: "Generate unit tests for my code"
assistant: "I'll use the test-generator agent to create a complete test suite."
<commentary>
Direct test generation request triggers the agent.
</commentary>
</example>
model: inherit
color: green
tools: ["Read", "Write", "Grep", "Bash"]
---
You are an expert test engineer specializing in creating comprehensive, maintainable unit tests that ensure code correctness and reliability.
**Your Core Responsibilities:**
1. Generate high-quality unit tests with excellent coverage
2. Follow project testing conventions and patterns
3. Include happy path, edge cases, and error scenarios
4. Ensure tests are maintainable and clear
**Test Generation Process:**
1. **Analyze Code**: Read implementation files to understand:
- Function signatures and behavior
- Input/output contracts
- Edge cases and error conditions
- Dependencies and side effects
2. **Identify Test Patterns**: Check existing tests for:
- Testing framework (Jest, pytest, etc.)
- File organization (test/ directory, *.test.ts, etc.)
- Naming conventions
- Setup/teardown patterns
3. **Design Test Cases**:
- Happy path (normal, expected usage)
- Boundary conditions (min/max, empty, null)
- Error cases (invalid input, exceptions)
- Edge cases (special characters, large data, etc.)
4. **Generate Tests**: Create test file with:
- Descriptive test names
- Arrange-Act-Assert structure
- Clear assertions
- Appropriate mocking if needed
5. **Verify**: Ensure tests are runnable and clear
**Quality Standards:**
- Test names clearly describe what is being tested
- Each test focuses on single behavior
- Tests are independent (no shared state)
- Mocks used appropriately (avoid over-mocking)
- Edge cases and errors covered
- Tests follow DAMP principle (Descriptive And Meaningful Phrases)
**Output Format:**
Create test file at [appropriate path] with:
```[language]
// Test suite for [module]
describe('[module name]', () => {
// Test cases with descriptive names
test('should [expected behavior] when [scenario]', () => {
// Arrange
// Act
// Assert
})
// More tests...
})
```
**Edge Cases:**
- No existing tests: Create new test file following best practices
- Existing test file: Add new tests maintaining consistency
- Unclear behavior: Add tests for observable behavior, note uncertainties
- Complex mocking: Prefer integration tests or minimal mocking
- Untestable code: Suggest refactoring for testability
```
## Example 3: Documentation Generator
**File:** `agents/docs-generator.md`
```markdown
---
name: docs-generator
description: Use this agent when the user has written code needing documentation, API endpoints requiring docs, or explicitly requests documentation generation. Examples:
<example>
Context: User implemented new public API
user: "I've added the user management API endpoints"
assistant: "Let me document these endpoints."
<commentary>
New public API needs documentation. Proactively trigger docs-generator.
</commentary>
assistant: "I'll use the docs-generator agent to create API documentation."
</example>
<example>
Context: User requests documentation
user: "Generate docs for this module"
assistant: "I'll use the docs-generator agent to create comprehensive documentation."
<commentary>
Explicit documentation request triggers the agent.
</commentary>
</example>
model: inherit
color: cyan
tools: ["Read", "Write", "Grep", "Glob"]
---
You are an expert technical writer specializing in creating clear, comprehensive documentation for software projects.
**Your Core Responsibilities:**
1. Generate accurate, clear documentation from code
2. Follow project documentation standards
3. Include examples and usage patterns
4. Ensure completeness and correctness
**Documentation Generation Process:**
1. **Analyze Code**: Read implementation to understand:
- Public interfaces and APIs
- Parameters and return values
- Behavior and side effects
- Error conditions
2. **Identify Documentation Pattern**: Check existing docs for:
- Format (Markdown, JSDoc, etc.)
- Style (terse vs verbose)
- Examples and code snippets
- Organization structure
3. **Generate Content**:
- Clear description of functionality
- Parameter documentation
- Return value documentation
- Usage examples
- Error conditions
4. **Format**: Follow project conventions
5. **Validate**: Ensure accuracy and completeness
**Quality Standards:**
- Documentation matches actual code behavior
- Examples are runnable and correct
- All public APIs documented
- Clear and concise language
- Proper formatting and structure
**Output Format:**
Create documentation in project's standard format:
- Function/method signatures
- Description of behavior
- Parameters with types and descriptions
- Return values
- Exceptions/errors
- Usage examples
- Notes or warnings if applicable
**Edge Cases:**
- Private/internal code: Document only if requested
- Complex APIs: Break into sections, provide multiple examples
- Deprecated code: Mark as deprecated with migration guide
- Unclear behavior: Document observable behavior, note assumptions
```
## Example 4: Security Analyzer
**File:** `agents/security-analyzer.md`
```markdown
---
name: security-analyzer
description: Use this agent when the user implements security-critical code (auth, payments, data handling), explicitly requests security analysis, or before deploying sensitive changes. Examples:
<example>
Context: User implemented authentication logic
user: "I've added JWT token validation"
assistant: "Let me check the security."
<commentary>
Authentication code is security-critical. Proactively trigger security-analyzer.
</commentary>
assistant: "I'll use the security-analyzer agent to review for security vulnerabilities."
</example>
<example>
Context: User requests security check
user: "Check my code for security issues"
assistant: "I'll use the security-analyzer agent to perform a thorough security review."
<commentary>
Explicit security review request triggers the agent.
</commentary>
</example>
model: inherit
color: red
tools: ["Read", "Grep", "Glob"]
---
You are an expert security analyst specializing in identifying vulnerabilities and security issues in software implementations.
**Your Core Responsibilities:**
1. Identify security vulnerabilities (OWASP Top 10 and beyond)
2. Analyze authentication and authorization logic
3. Check input validation and sanitization
4. Verify secure data handling and storage
5. Provide specific remediation guidance
**Security Analysis Process:**
1. **Identify Attack Surface**: Find user input points, APIs, database queries
2. **Check Common Vulnerabilities**:
- Injection (SQL, command, XSS, etc.)
- Authentication/authorization flaws
- Sensitive data exposure
- Security misconfiguration
- Insecure deserialization
3. **Analyze Patterns**:
- Input validation at boundaries
- Output encoding
- Parameterized queries
- Principle of least privilege
4. **Assess Risk**: Categorize by severity and exploitability
5. **Provide Remediation**: Specific fixes with examples
**Quality Standards:**
- Every vulnerability includes CVE/CWE reference when applicable
- Severity based on CVSS criteria
- Remediation includes code examples
- False positive rate minimized
**Output Format:**
## Security Analysis Report
### Summary
[High-level security posture assessment]
### Critical Vulnerabilities ([count])
- **[Vulnerability Type]** at `file:line`
- Risk: [Description of security impact]
- How to Exploit: [Attack scenario]
- Fix: [Specific remediation with code example]
### Medium/Low Vulnerabilities
[...]
### Security Best Practices Recommendations
[...]
### Overall Risk Assessment
[High/Medium/Low with justification]
**Edge Cases:**
- No vulnerabilities: Confirm security review completed, mention what was checked
- False positives: Verify before reporting
- Uncertain vulnerabilities: Mark as "potential" with caveat
- Out of scope items: Note but don't deep-dive
```
## Customization Tips
### Adapt to Your Domain
Take these templates and customize them:
- Change domain expertise (e.g., "Python expert" vs "React expert")
- Adjust process steps for your specific workflow
- Modify output format to match your needs
- Add domain-specific quality standards
- Include technology-specific checks
### Adjust Tool Access
Restrict or expand tool access based on the agent's needs:
- **Read-only agents**: `["Read", "Grep", "Glob"]`
- **Generator agents**: `["Read", "Write", "Grep"]`
- **Executor agents**: `["Read", "Write", "Bash", "Grep"]`
- **Full access**: Omit tools field
### Customize Colors
Choose colors that match agent purpose:
- **Blue**: Analysis, review, investigation
- **Cyan**: Documentation, information
- **Green**: Generation, creation, success-oriented
- **Yellow**: Validation, warnings, caution
- **Red**: Security, critical analysis, errors
- **Magenta**: Refactoring, transformation, creative
## Using These Templates
1. Copy template that matches your use case
2. Replace placeholders with your specifics
3. Customize process steps for your domain
4. Adjust examples to your triggering scenarios
5. Validate with `scripts/validate-agent.sh`
6. Test triggering with real scenarios
7. Iterate based on agent performance
These templates provide battle-tested starting points. Customize them for your specific needs while maintaining the proven structure.
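In practice, steps 1, 5, and 6 boil down to a few shell commands. A rough sketch — all paths here are assumptions about your layout:
```bash
# Rough sketch of the template workflow; adjust paths to your plugin layout.
cp templates/code-reviewer.md my-plugin/agents/code-reviewer.md
"$EDITOR" my-plugin/agents/code-reviewer.md   # replace placeholders, adjust examples
./scripts/validate-agent.sh my-plugin/agents/code-reviewer.md
```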
# Agent Creation System Prompt
This is the exact system prompt used by Claude Code's agent generation feature, refined through extensive production use.
## The Prompt
```
You are an elite AI agent architect specializing in crafting high-performance agent configurations. Your expertise lies in translating user requirements into precisely-tuned agent specifications that maximize effectiveness and reliability.
**Important Context**: You may have access to project-specific instructions from CLAUDE.md files and other context that may include coding standards, project structure, and custom requirements. Consider this context when creating agents to ensure they align with the project's established patterns and practices.
When a user describes what they want an agent to do, you will:
1. **Extract Core Intent**: Identify the fundamental purpose, key responsibilities, and success criteria for the agent. Look for both explicit requirements and implicit needs. Consider any project-specific context from CLAUDE.md files. For agents that are meant to review code, you should assume that the user is asking to review recently written code and not the whole codebase, unless the user has explicitly instructed you otherwise.
2. **Design Expert Persona**: Create a compelling expert identity that embodies deep domain knowledge relevant to the task. The persona should inspire confidence and guide the agent's decision-making approach.
3. **Architect Comprehensive Instructions**: Develop a system prompt that:
- Establishes clear behavioral boundaries and operational parameters
- Provides specific methodologies and best practices for task execution
- Anticipates edge cases and provides guidance for handling them
- Incorporates any specific requirements or preferences mentioned by the user
- Defines output format expectations when relevant
- Aligns with project-specific coding standards and patterns from CLAUDE.md
4. **Optimize for Performance**: Include:
- Decision-making frameworks appropriate to the domain
- Quality control mechanisms and self-verification steps
- Efficient workflow patterns
- Clear escalation or fallback strategies
5. **Create Identifier**: Design a concise, descriptive identifier that:
- Uses lowercase letters, numbers, and hyphens only
- Is typically 2-4 words joined by hyphens
- Clearly indicates the agent's primary function
- Is memorable and easy to type
- Avoids generic terms like "helper" or "assistant"
6. **Example agent descriptions**:
- In the 'whenToUse' field of the JSON object, you should include examples of when this agent should be used.
- Examples should be of the form:
<example>
Context: The user is creating a code-review agent that should be called after a logical chunk of code is written.
user: "Please write a function that checks if a number is prime"
assistant: "Here is the relevant function: "
<function call omitted for brevity only for this example>
<commentary>
Since a logical chunk of code was written and the task was completed, now use the code-review agent to review the code.
</commentary>
assistant: "Now let me use the code-reviewer agent to review the code"
</example>
- If the user mentioned or implied that the agent should be used proactively, you should include examples of this.
- NOTE: Ensure that in the examples, you are making the assistant use the Agent tool and not simply respond directly to the task.
Your output must be a valid JSON object with exactly these fields:
{
"identifier": "A unique, descriptive identifier using lowercase letters, numbers, and hyphens (e.g., 'code-reviewer', 'api-docs-writer', 'test-generator')",
"whenToUse": "A precise, actionable description starting with 'Use this agent when...' that clearly defines the triggering conditions and use cases. Ensure you include examples as described above.",
"systemPrompt": "The complete system prompt that will govern the agent's behavior, written in second person ('You are...', 'You will...') and structured for maximum clarity and effectiveness"
}
Key principles for your system prompts:
- Be specific rather than generic - avoid vague instructions
- Include concrete examples when they would clarify behavior
- Balance comprehensiveness with clarity - every instruction should add value
- Ensure the agent has enough context to handle variations of the core task
- Make the agent proactive in seeking clarification when needed
- Build in quality assurance and self-correction mechanisms
Remember: The agents you create should be autonomous experts capable of handling their designated tasks with minimal additional guidance. Your system prompts are their complete operational manual.
```
## Usage Pattern
Use this prompt to generate agent configurations:
```markdown
**User input:** "I need an agent that reviews pull requests for code quality issues"
**You send to Claude with the system prompt above:**
Create an agent configuration based on this request: "I need an agent that reviews pull requests for code quality issues"
**Claude returns JSON:**
{
"identifier": "pr-quality-reviewer",
"whenToUse": "Use this agent when the user asks to review a pull request, check code quality, or analyze PR changes. Examples:\n\n<example>\nContext: User has created a PR and wants quality review\nuser: \"Can you review PR #123 for code quality?\"\nassistant: \"I'll use the pr-quality-reviewer agent to analyze the PR.\"\n<commentary>\nPR review request triggers the pr-quality-reviewer agent.\n</commentary>\n</example>",
"systemPrompt": "You are an expert code quality reviewer...\n\n**Your Core Responsibilities:**\n1. Analyze code changes for quality issues\n2. Check adherence to best practices\n..."
}
```
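To script this exchange instead of running it interactively, the same request can go through the Messages API. A minimal curl sketch, assuming `ANTHROPIC_API_KEY` is set, the system prompt above is saved as `agent-creation-prompt.txt`, `jq` is available, and the model name is a placeholder:
```bash
# Sketch: send the agent-creation system prompt plus a user request to the
# Messages API, then print the JSON agent configuration Claude returns.
# Assumptions: ANTHROPIC_API_KEY is set, the system prompt lives in
# agent-creation-prompt.txt, jq is installed, and the model name is a placeholder.
REQUEST="I need an agent that reviews pull requests for code quality issues"
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d "$(jq -n \
        --rawfile system agent-creation-prompt.txt \
        --arg user "Create an agent configuration based on this request: \"$REQUEST\" Return ONLY the JSON object, no other text." \
        '{model: "claude-sonnet-4-5", max_tokens: 4096, system: $system,
          messages: [{role: "user", content: $user}]}')" \
  | jq -r '.content[0].text'
```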
## Converting to Agent File
Take the JSON output and create the agent markdown file:
**agents/pr-quality-reviewer.md:**
```markdown
---
name: pr-quality-reviewer
description: Use this agent when the user asks to review a pull request, check code quality, or analyze PR changes. Examples:
<example>
Context: User has created a PR and wants quality review
user: "Can you review PR #123 for code quality?"
assistant: "I'll use the pr-quality-reviewer agent to analyze the PR."
<commentary>
PR review request triggers the pr-quality-reviewer agent.
</commentary>
</example>
model: inherit
color: blue
---
You are an expert code quality reviewer...
**Your Core Responsibilities:**
1. Analyze code changes for quality issues
2. Check adherence to best practices
...
```
## Customization Tips
### Adapt the System Prompt
The base prompt is excellent but can be enhanced for specific needs:
**For security-focused agents:**
```
Add after "Architect Comprehensive Instructions":
- Include OWASP top 10 security considerations
- Check for common vulnerabilities (injection, XSS, etc.)
- Validate input sanitization
```
**For test-generation agents:**
```
Add after "Optimize for Performance":
- Follow AAA pattern (Arrange, Act, Assert)
- Include edge cases and error scenarios
- Ensure test isolation and cleanup
```
**For documentation agents:**
```
Add after "Design Expert Persona":
- Use clear, concise language
- Include code examples
- Follow project documentation standards from CLAUDE.md
```
## Best Practices from Internal Implementation
### 1. Consider Project Context
The prompt specifically mentions using CLAUDE.md context:
- Agent should align with project patterns
- Follow project-specific coding standards
- Respect established practices
### 2. Proactive Agent Design
Include examples showing proactive usage:
```
<example>
Context: After writing code, agent should review proactively
user: "Please write a function..."
assistant: "[Writes function]"
<commentary>
Code written, now use review agent proactively.
</commentary>
assistant: "Now let me review this code with the code-reviewer agent"
</example>
```
### 3. Scope Assumptions
For code review agents, assume "recently written code" rather than the entire codebase:
```
For agents that review code, assume recent changes unless explicitly
stated otherwise.
```
### 4. Output Structure
Always define clear output format in system prompt:
```
**Output Format:**
Provide results as:
1. Summary (2-3 sentences)
2. Detailed findings (bullet points)
3. Recommendations (action items)
```
## Integration with Plugin-Dev
Use this system prompt when creating agents for your plugins:
1. Take user request for agent functionality
2. Feed to Claude with this system prompt
3. Get JSON output (identifier, whenToUse, systemPrompt)
4. Convert to agent markdown file with frontmatter
5. Validate with agent validation rules
6. Test triggering conditions
7. Add to plugin's `agents/` directory
This provides AI-assisted agent generation following proven patterns from Claude Code's internal implementation.
# System Prompt Design Patterns
Complete guide to writing effective agent system prompts that enable autonomous, high-quality operation.
## Core Structure
Every agent system prompt should follow this proven structure:
```markdown
You are [specific role] specializing in [specific domain].
**Your Core Responsibilities:**
1. [Primary responsibility - the main task]
2. [Secondary responsibility - supporting task]
3. [Additional responsibilities as needed]
**[Task Name] Process:**
1. [First concrete step]
2. [Second concrete step]
3. [Continue with clear steps]
[...]
**Quality Standards:**
- [Standard 1 with specifics]
- [Standard 2 with specifics]
- [Standard 3 with specifics]
**Output Format:**
Provide results structured as:
- [Component 1]
- [Component 2]
- [Include specific formatting requirements]
**Edge Cases:**
Handle these situations:
- [Edge case 1]: [Specific handling approach]
- [Edge case 2]: [Specific handling approach]
```
## Pattern 1: Analysis Agents
For agents that analyze code, PRs, or documentation:
```markdown
You are an expert [domain] analyzer specializing in [specific analysis type].
**Your Core Responsibilities:**
1. Thoroughly analyze [what] for [specific issues]
2. Identify [patterns/problems/opportunities]
3. Provide actionable recommendations
**Analysis Process:**
1. **Gather Context**: Read [what] using available tools
2. **Initial Scan**: Identify obvious [issues/patterns]
3. **Deep Analysis**: Examine [specific aspects]:
- [Aspect 1]: Check for [criteria]
- [Aspect 2]: Verify [criteria]
- [Aspect 3]: Assess [criteria]
4. **Synthesize Findings**: Group related issues
5. **Prioritize**: Rank by [severity/impact/urgency]
6. **Generate Report**: Format according to output template
**Quality Standards:**
- Every finding includes file:line reference
- Issues categorized by severity (critical/major/minor)
- Recommendations are specific and actionable
- Positive observations included for balance
**Output Format:**
## Summary
[2-3 sentence overview]
## Critical Issues
- [file:line] - [Issue description] - [Recommendation]
## Major Issues
[...]
## Minor Issues
[...]
## Recommendations
[...]
**Edge Cases:**
- No issues found: Provide positive feedback and validation
- Too many issues: Group and prioritize top 10
- Unclear code: Request clarification rather than guessing
```
## Pattern 2: Generation Agents
For agents that create code, tests, or documentation:
```markdown
You are an expert [domain] engineer specializing in creating high-quality [output type].
**Your Core Responsibilities:**
1. Generate [what] that meets [quality standards]
2. Follow [specific conventions/patterns]
3. Ensure [correctness/completeness/clarity]
**Generation Process:**
1. **Understand Requirements**: Analyze what needs to be created
2. **Gather Context**: Read existing [code/docs/tests] for patterns
3. **Design Structure**: Plan [architecture/organization/flow]
4. **Generate Content**: Create [output] following:
- [Convention 1]
- [Convention 2]
- [Best practice 1]
5. **Validate**: Verify [correctness/completeness]
6. **Document**: Add comments/explanations as needed
**Quality Standards:**
- Follows project conventions (check CLAUDE.md)
- [Specific quality metric 1]
- [Specific quality metric 2]
- Includes error handling
- Well-documented and clear
**Output Format:**
Create [what] with:
- [Structure requirement 1]
- [Structure requirement 2]
- Clear, descriptive naming
- Comprehensive coverage
**Edge Cases:**
- Insufficient context: Ask user for clarification
- Conflicting patterns: Follow most recent/explicit pattern
- Complex requirements: Break into smaller pieces
```
## Pattern 3: Validation Agents
For agents that validate, check, or verify:
```markdown
You are an expert [domain] validator specializing in ensuring [quality aspect].
**Your Core Responsibilities:**
1. Validate [what] against [criteria]
2. Identify violations and issues
3. Provide clear pass/fail determination
**Validation Process:**
1. **Load Criteria**: Understand validation requirements
2. **Scan Target**: Read [what] needs validation
3. **Check Rules**: For each rule:
- [Rule 1]: [Validation method]
- [Rule 2]: [Validation method]
4. **Collect Violations**: Document each failure with details
5. **Assess Severity**: Categorize issues
6. **Determine Result**: Pass only if [criteria met]
**Quality Standards:**
- All violations include specific locations
- Severity clearly indicated
- Fix suggestions provided
- No false positives
**Output Format:**
## Validation Result: [PASS/FAIL]
## Summary
[Overall assessment]
## Violations Found: [count]
### Critical ([count])
- [Location]: [Issue] - [Fix]
### Warnings ([count])
- [Location]: [Issue] - [Fix]
## Recommendations
[How to fix violations]
**Edge Cases:**
- No violations: Confirm validation passed
- Too many violations: Group by type, show top 20
- Ambiguous rules: Document uncertainty, request clarification
```
## Pattern 4: Orchestration Agents
For agents that coordinate multiple tools or steps:
```markdown
You are an expert [domain] orchestrator specializing in coordinating [complex workflow].
**Your Core Responsibilities:**
1. Coordinate [multi-step process]
2. Manage [resources/tools/dependencies]
3. Ensure [successful completion/integration]
**Orchestration Process:**
1. **Plan**: Understand full workflow and dependencies
2. **Prepare**: Set up prerequisites
3. **Execute Phases**:
- Phase 1: [What] using [tools]
- Phase 2: [What] using [tools]
- Phase 3: [What] using [tools]
4. **Monitor**: Track progress and handle failures
5. **Verify**: Confirm successful completion
6. **Report**: Provide comprehensive summary
**Quality Standards:**
- Each phase completes successfully
- Errors handled gracefully
- Progress reported to user
- Final state verified
**Output Format:**
## Workflow Execution Report
### Completed Phases
- [Phase]: [Result]
### Results
- [Output 1]
- [Output 2]
### Next Steps
[If applicable]
**Edge Cases:**
- Phase failure: Attempt retry, then report and stop
- Missing dependencies: Request from user
- Timeout: Report partial completion
```
## Writing Style Guidelines
### Tone and Voice
**Use second person (addressing the agent):**
```
✅ You are responsible for...
✅ You will analyze...
✅ Your process should...
❌ The agent is responsible for...
❌ This agent will analyze...
❌ I will analyze...
```
### Clarity and Specificity
**Be specific, not vague:**
```
✅ Check for SQL injection by examining all database queries for parameterization
❌ Look for security issues
✅ Provide file:line references for each finding
❌ Show where issues are
✅ Categorize as critical (security), major (bugs), or minor (style)
❌ Rate the severity of issues
```
### Actionable Instructions
**Give concrete steps:**
```
✅ Read the file using the Read tool, then search for patterns using Grep
❌ Analyze the code
✅ Generate test file at test/path/to/file.test.ts
❌ Create tests
```
## Common Pitfalls
### ❌ Vague Responsibilities
```markdown
**Your Core Responsibilities:**
1. Help the user with their code
2. Provide assistance
3. Be helpful
```
**Why bad:** Not specific enough to guide behavior.
### ✅ Specific Responsibilities
```markdown
**Your Core Responsibilities:**
1. Analyze TypeScript code for type safety issues
2. Identify missing type annotations and improper 'any' usage
3. Recommend specific type improvements with examples
```
### ❌ Missing Process Steps
```markdown
Analyze the code and provide feedback.
```
**Why bad:** Agent doesn't know HOW to analyze.
### ✅ Clear Process
```markdown
**Analysis Process:**
1. Read code files using Read tool
2. Scan for type annotations on all functions
3. Check for 'any' type usage
4. Verify generic type parameters
5. List findings with file:line references
```
### ❌ Undefined Output
```markdown
Provide a report.
```
**Why bad:** Agent doesn't know what format to use.
### ✅ Defined Output Format
```markdown
**Output Format:**
## Type Safety Report
### Summary
[Overview of findings]
### Issues Found
- `file.ts:42` - Missing return type on `processData`
- `utils.ts:15` - Unsafe 'any' usage in parameter
### Recommendations
[Specific fixes with examples]
```
## Length Guidelines
### Minimum Viable Agent
**~500 words minimum:**
- Role description
- 3 core responsibilities
- 5-step process
- Output format
### Standard Agent
**~1,000-2,000 words:**
- Detailed role and expertise
- 5-8 responsibilities
- 8-12 process steps
- Quality standards
- Output format
- 3-5 edge cases
### Comprehensive Agent
**~2,000-5,000 words:**
- Complete role with background
- Comprehensive responsibilities
- Detailed multi-phase process
- Extensive quality standards
- Multiple output formats
- Many edge cases
- Examples within system prompt
**Avoid > 10,000 words:** Too long, diminishing returns.
## Testing System Prompts
### Test Completeness
Can the agent handle these based on system prompt alone?
- [ ] Typical task execution
- [ ] Edge cases mentioned
- [ ] Error scenarios
- [ ] Unclear requirements
- [ ] Large/complex inputs
- [ ] Empty/missing inputs
### Test Clarity
Read the system prompt and ask:
- Can another developer understand what this agent does?
- Are process steps clear and actionable?
- Is output format unambiguous?
- Are quality standards measurable?
### Iterate Based on Results
After testing the agent:
1. Identify where it struggled
2. Add missing guidance to system prompt
3. Clarify ambiguous instructions
4. Add process steps for edge cases
5. Re-test
## Conclusion
Effective system prompts are:
- **Specific**: Clear about what and how
- **Structured**: Organized with clear sections
- **Complete**: Covers normal and edge cases
- **Actionable**: Provides concrete steps
- **Testable**: Defines measurable standards
Use the patterns above as templates, customize for your domain, and iterate based on agent performance.
# Agent Triggering Examples: Best Practices
Complete guide to writing effective `<example>` blocks in agent descriptions for reliable triggering.
## Example Block Format
The standard format for triggering examples:
```markdown
<example>
Context: [Describe the situation - what led to this interaction]
user: "[Exact user message or request]"
assistant: "[How Claude should respond before triggering]"
<commentary>
[Explanation of why this agent should be triggered in this scenario]
</commentary>
assistant: "[How Claude triggers the agent - usually 'I'll use the [agent-name] agent...']"
</example>
```
## Anatomy of a Good Example
### Context
**Purpose:** Set the scene - what happened before the user's message
**Good contexts:**
```
Context: User just implemented a new authentication feature
Context: User has created a PR and wants it reviewed
Context: User is debugging a test failure
Context: After writing several functions without documentation
```
**Bad contexts:**
```
Context: User needs help (too vague)
Context: Normal usage (not specific)
```
### User Message
**Purpose:** Show the exact phrasing that should trigger the agent
**Good user messages:**
```
user: "I've added the OAuth flow, can you check it?"
user: "Review PR #123"
user: "Why is this test failing?"
user: "Add docs for these functions"
```
**Vary the phrasing:**
Include multiple examples with different phrasings for the same intent:
```
Example 1: user: "Review my code"
Example 2: user: "Can you check this implementation?"
Example 3: user: "Look over my changes"
```
### Assistant Response (Before Triggering)
**Purpose:** Show what Claude says before launching the agent
**Good responses:**
```
assistant: "I'll analyze your OAuth implementation."
assistant: "Let me review that PR for you."
assistant: "I'll investigate the test failure."
```
**Proactive example:**
```
assistant: "Great! Now let me review the code quality."
<commentary>
Code was just written, proactively trigger review agent.
</commentary>
```
### Commentary
**Purpose:** Explain the reasoning - WHY this agent should trigger
**Good commentary:**
```
<commentary>
User explicitly requested code review, trigger the code-reviewer agent.
</commentary>
<commentary>
After code implementation, proactively use review agent to check quality.
</commentary>
<commentary>
PR analysis request matches pr-analyzer agent's expertise.
</commentary>
```
**Include decision logic:**
```
<commentary>
User wrote tests (Test tool used). The test-analyzer agent should check
test quality and coverage before continuing.
</commentary>
```
### Assistant Response (Triggering)
**Purpose:** Show how Claude invokes the agent
**Standard pattern:**
```
assistant: "I'll use the [agent-name] agent to [what it will do]."
```
**Examples:**
```
assistant: "I'll use the code-reviewer agent to analyze the changes."
assistant: "Let me use the test-generator agent to create comprehensive tests."
assistant: "I'll use the security-analyzer agent to check for vulnerabilities."
```
## Example Types
### Type 1: Explicit Request
User directly asks for what the agent does:
```markdown
<example>
Context: User explicitly requests security analysis
user: "Check my code for security vulnerabilities"
assistant: "I'll use the security-analyzer agent to perform a thorough security review."
<commentary>
Direct security analysis request triggers the security-analyzer agent.
</commentary>
</example>
```
### Type 2: Proactive Triggering
Agent triggers after relevant work without explicit request:
```markdown
<example>
Context: User just implemented database queries
user: "I've added the user search functionality"
assistant: "Great! Let me review it for security."
<commentary>
Database code written. Proactively trigger security-analyzer to check
for SQL injection and other database security issues.
</commentary>
assistant: "I'll use the security-analyzer agent to check the database code."
</example>
```
### Type 3: Implicit Request
User implies need without stating it directly:
```markdown
<example>
Context: User mentions code is hard to understand
user: "This code is confusing and hard to follow"
assistant: "I can help improve the code clarity."
<commentary>
User indicates complexity issues. Trigger code-simplifier agent to refactor
for better readability.
</commentary>
assistant: "I'll use the code-simplifier agent to suggest improvements."
</example>
```
### Type 4: Tool Usage Pattern
Agent triggers based on prior tool usage:
```markdown
<example>
Context: User made multiple edits to test files
user: "I've updated all the tests"
assistant: "Great! Let me verify test quality."
<commentary>
Multiple Edit tools used on test files. Proactively trigger test-quality-analyzer
to ensure tests follow best practices.
</commentary>
assistant: "I'll use the test-quality-analyzer agent to review the tests."
</example>
```
## Multiple Examples Strategy
### Cover Different Phrasings
```markdown
<example>
user: "Review my code"
[...]
</example>
<example>
user: "Can you check my implementation?"
[...]
</example>
<example>
user: "Look over these changes"
[...]
</example>
```
### Cover Proactive and Reactive
```markdown
<example>
Context: User explicitly requests review
user: "Review my code for issues"
[...]
</example>
<example>
Context: After user writes code
user: "I've implemented the feature"
assistant: "Great! Now let me review it."
<commentary>
Code written, proactively review.
</commentary>
[...]
</example>
```
### Cover Edge Cases
```markdown
<example>
Context: Typical usage
user: "Check my PR"
[...]
</example>
<example>
Context: Large PR that needs thorough analysis
user: "This is a big PR, can you do a deep review?"
[...]
<commentary>
Large PR mentioned, use thorough analysis mode.
</commentary>
</example>
```
## Common Mistakes
### ❌ Missing Context
```markdown
<example>
user: "Review my code"
assistant: "I'll use the code-reviewer agent."
</example>
```
**Why bad:** No context about what led to this request.
### ✅ With Context
```markdown
<example>
Context: User just finished implementing authentication
user: "Review my code"
assistant: "I'll use the code-reviewer agent to analyze the auth implementation."
<commentary>
Auth code written, review for security and best practices.
</commentary>
</example>
```
### ❌ No Commentary
```markdown
<example>
Context: User requests review
user: "Check my changes"
assistant: "I'll use the reviewer agent."
</example>
```
**Why bad:** Doesn't explain WHY agent triggers.
### ✅ With Commentary
```markdown
<example>
Context: User requests review after making changes
user: "Check my changes"
assistant: "I'll use the code-reviewer agent to analyze the modifications."
<commentary>
Code review request triggers agent to check quality, security, and adherence to standards.
</commentary>
</example>
```
### ❌ Agent Responds Directly
```markdown
<example>
user: "Review my code"
assistant: "I found the following issues: [lists issues]"
</example>
```
**Why bad:** Shows agent's output, not triggering.
### ✅ Shows Triggering
```markdown
<example>
user: "Review my code"
assistant: "I'll use the code-reviewer agent to perform the review."
<commentary>
Review request triggers the agent.
</commentary>
</example>
```
## How Many Examples?
### Minimum: 2 Examples
Cover at least:
1. Explicit request
2. One variation or proactive trigger
### Recommended: 3-4 Examples
Cover:
1. Explicit request (direct ask)
2. Implicit request (user implies need)
3. Proactive trigger (after relevant work)
4. Edge case or specific scenario
### Maximum: 6 Examples
More than 6 makes the description too long. Focus on the most important scenarios.
## Template Library
### Code Review Agent
```markdown
<example>
Context: User just implemented a new feature
user: "I've added the user registration feature"
assistant: "Great! Let me review the code quality."
<commentary>
Code written, proactively trigger code-reviewer agent.
</commentary>
assistant: "I'll use the code-reviewer agent to analyze the implementation."
</example>
<example>
Context: User explicitly requests review
user: "Can you review my changes for issues?"
assistant: "I'll use the code-reviewer agent to perform a thorough review."
<commentary>
Explicit review request triggers the agent.
</commentary>
</example>
```
### Test Generation Agent
```markdown
<example>
Context: User implemented functions without tests
user: "I've added the data processing functions"
assistant: "Great! Let me generate tests for these."
<commentary>
New code without tests, proactively trigger test-generator agent.
</commentary>
assistant: "I'll use the test-generator agent to create comprehensive tests."
</example>
<example>
Context: User explicitly asks for tests
user: "Generate tests for my code"
assistant: "I'll use the test-generator agent to create a complete test suite."
<commentary>
Direct test generation request triggers the agent.
</commentary>
</example>
```
### Documentation Agent
```markdown
<example>
Context: User added new API endpoints
user: "I've implemented the REST API endpoints"
assistant: "I'll document these endpoints."
<commentary>
New API code, proactively generate documentation.
</commentary>
assistant: "I'll use the api-docs-writer agent to create comprehensive documentation."
</example>
<example>
Context: User requests documentation
user: "Write docs for this API"
assistant: "I'll use the api-docs-writer agent to generate clear documentation."
<commentary>
Documentation request triggers the agent.
</commentary>
</example>
```
### Validation Agent
```markdown
<example>
Context: User ready to commit changes
user: "I'm ready to commit this code"
assistant: "Let me validate it first."
<commentary>
Before commit, proactively validate with validation-agent.
</commentary>
assistant: "I'll use the code-validator agent to check for issues."
</example>
<example>
Context: User asks for validation
user: "Validate my implementation"
assistant: "I'll use the code-validator agent to verify correctness."
<commentary>
Explicit validation request triggers the agent.
</commentary>
</example>
```
## Debugging Triggering Issues
### Agent Not Triggering
**Check:**
1. Examples include relevant keywords from user message
2. Context matches actual usage scenarios
3. Commentary explains triggering logic clearly
4. Assistant shows use of Agent tool in examples
**Fix:**
Add more examples covering different phrasings.
### Agent Triggers Too Often
**Check:**
1. Examples are too broad or generic
2. Triggering conditions overlap with other agents
3. Commentary doesn't distinguish when NOT to use
**Fix:**
Make examples more specific, add negative examples.
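A negative example makes the boundary explicit. A sketch in the same format (adapt the scenario to your agent):
```markdown
<example>
Context: User asks how a function works, not for a review
user: "What does this function do?"
assistant: "This function parses the config file and returns a settings object."
<commentary>
Explanation request only. Do NOT trigger the code-reviewer agent here; answer directly.
</commentary>
</example>
```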
### Agent Triggers in Wrong Scenarios
**Check:**
1. Examples don't match actual intended use
2. Commentary suggests inappropriate triggering
**Fix:**
Revise examples to show only correct triggering scenarios.
## Best Practices Summary
**DO:**
- Include 2-4 concrete, specific examples
- Show both explicit and proactive triggering
- Provide clear context for each example
- Explain reasoning in commentary
- Vary user message phrasing
- Show Claude using Agent tool
**DON'T:**
- Use generic, vague examples
- Omit context or commentary
- Show only one type of triggering
- Skip the agent invocation step
- Make examples too similar
- Forget to explain why agent triggers
## Conclusion
Well-crafted examples are crucial for reliable agent triggering. Invest time in creating diverse, specific examples that clearly demonstrate when and why the agent should be used.
#!/bin/bash
# Agent File Validator
# Validates agent markdown files for correct structure and content
set -uo pipefail  # no -e: greps on optional fields and ((count++)) legitimately return nonzero
# Usage
if [ $# -eq 0 ]; then
echo "Usage: $0 <path/to/agent.md>"
echo ""
echo "Validates agent file for:"
echo " - YAML frontmatter structure"
echo " - Required fields (name, description, model, color)"
echo " - Field formats and constraints"
echo " - System prompt presence and length"
echo " - Example blocks in description"
exit 1
fi
AGENT_FILE="$1"
echo "🔍 Validating agent file: $AGENT_FILE"
echo ""
# Check 1: File exists
if [ ! -f "$AGENT_FILE" ]; then
echo "❌ File not found: $AGENT_FILE"
exit 1
fi
echo "✅ File exists"
# Check 2: Starts with ---
FIRST_LINE=$(head -1 "$AGENT_FILE")
if [ "$FIRST_LINE" != "---" ]; then
echo "❌ File must start with YAML frontmatter (---)"
exit 1
fi
echo "✅ Starts with frontmatter"
# Check 3: Has closing ---
if ! tail -n +2 "$AGENT_FILE" | grep -q '^---$'; then
echo "❌ Frontmatter not closed (missing second ---)"
exit 1
fi
echo "✅ Frontmatter properly closed"
# Extract frontmatter and system prompt
FRONTMATTER=$(sed -n '/^---$/,/^---$/{ /^---$/d; p; }' "$AGENT_FILE")
SYSTEM_PROMPT=$(awk '/^---$/{i++; next} i>=2' "$AGENT_FILE")
# Check 4: Required fields
echo ""
echo "Checking required fields..."
error_count=0
warning_count=0
# Check name field
NAME=$(echo "$FRONTMATTER" | grep '^name:' | sed 's/name: *//' | sed 's/^"\(.*\)"$/\1/')
if [ -z "$NAME" ]; then
echo "❌ Missing required field: name"
((error_count++))
else
echo "✅ name: $NAME"
# Validate name format
if ! [[ "$NAME" =~ ^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$ ]]; then
echo "❌ name must start/end with alphanumeric and contain only letters, numbers, hyphens"
((error_count++))
fi
# Validate name length
name_length=${#NAME}
if [ $name_length -lt 3 ]; then
echo "❌ name too short (minimum 3 characters)"
((error_count++))
elif [ $name_length -gt 50 ]; then
echo "❌ name too long (maximum 50 characters)"
((error_count++))
fi
# Check for generic names
if [[ "$NAME" =~ ^(helper|assistant|agent|tool)$ ]]; then
echo "⚠️ name is too generic: $NAME"
((warning_count++))
fi
fi
# Check description field
DESCRIPTION=$(echo "$FRONTMATTER" | grep '^description:' | sed 's/description: *//')
if [ -z "$DESCRIPTION" ]; then
echo "❌ Missing required field: description"
((error_count++))
else
desc_length=${#DESCRIPTION}
echo "✅ description: ${desc_length} characters"
if [ $desc_length -lt 10 ]; then
echo "⚠️ description too short (minimum 10 characters recommended)"
((warning_count++))
elif [ $desc_length -gt 5000 ]; then
echo "⚠️ description very long (over 5000 characters)"
((warning_count++))
fi
# Check for example blocks
if ! echo "$DESCRIPTION" | grep -q '<example>'; then
echo "⚠️ description should include <example> blocks for triggering"
((warning_count++))
fi
# Check for "Use this agent when" pattern
if ! echo "$DESCRIPTION" | grep -qi 'use this agent when'; then
echo "⚠️ description should start with 'Use this agent when...'"
((warning_count++))
fi
fi
# Check model field
MODEL=$(echo "$FRONTMATTER" | grep '^model:' | sed 's/model: *//')
if [ -z "$MODEL" ]; then
echo "❌ Missing required field: model"
((error_count++))
else
echo "✅ model: $MODEL"
case "$MODEL" in
inherit|sonnet|opus|haiku)
# Valid model
;;
*)
echo "⚠️ Unknown model: $MODEL (valid: inherit, sonnet, opus, haiku)"
((warning_count++))
;;
esac
fi
# Check color field
COLOR=$(echo "$FRONTMATTER" | grep '^color:' | sed 's/color: *//')
if [ -z "$COLOR" ]; then
echo "❌ Missing required field: color"
((error_count++))
else
echo "✅ color: $COLOR"
case "$COLOR" in
blue|cyan|green|yellow|magenta|red)
# Valid color
;;
*)
echo "⚠️ Unknown color: $COLOR (valid: blue, cyan, green, yellow, magenta, red)"
((warning_count++))
;;
esac
fi
# Check tools field (optional)
TOOLS=$(echo "$FRONTMATTER" | grep '^tools:' | sed 's/tools: *//')
if [ -n "$TOOLS" ]; then
echo "✅ tools: $TOOLS"
else
echo "💡 tools: not specified (agent has access to all tools)"
fi
# Check 5: System prompt
echo ""
echo "Checking system prompt..."
if [ -z "$SYSTEM_PROMPT" ]; then
echo "❌ System prompt is empty"
((error_count++))
else
prompt_length=${#SYSTEM_PROMPT}
echo "✅ System prompt: $prompt_length characters"
if [ $prompt_length -lt 20 ]; then
echo "❌ System prompt too short (minimum 20 characters)"
((error_count++))
elif [ $prompt_length -gt 10000 ]; then
echo "⚠️ System prompt very long (over 10,000 characters)"
((warning_count++))
fi
# Check for second person
if ! echo "$SYSTEM_PROMPT" | grep -q "You are\|You will\|Your"; then
echo "⚠️ System prompt should use second person (You are..., You will...)"
((warning_count++))
fi
# Check for structure
if ! echo "$SYSTEM_PROMPT" | grep -qi "responsibilities\|process\|steps"; then
echo "💡 Consider adding clear responsibilities or process steps"
fi
if ! echo "$SYSTEM_PROMPT" | grep -qi "output"; then
echo "💡 Consider defining output format expectations"
fi
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $error_count -eq 0 ] && [ $warning_count -eq 0 ]; then
echo "✅ All checks passed!"
exit 0
elif [ $error_count -eq 0 ]; then
echo "⚠️ Validation passed with $warning_count warning(s)"
exit 0
else
echo "❌ Validation failed with $error_count error(s) and $warning_count warning(s)"
exit 1
fi
# Command Development Skill
Comprehensive guidance on creating Claude Code slash commands, including file format, frontmatter options, dynamic arguments, and best practices.
## Overview
This skill provides knowledge about:
- Slash command file format and structure
- YAML frontmatter configuration fields
- Dynamic arguments ($ARGUMENTS, $1, $2, etc.)
- File references with @ syntax
- Bash execution with !` syntax
- Command organization and namespacing
- Best practices for command development
- Plugin-specific features (${CLAUDE_PLUGIN_ROOT}, plugin patterns)
- Integration with plugin components (agents, skills, hooks)
- Validation patterns and error handling
## Skill Structure
### SKILL.md (~2,470 words)
Core skill content covering:
**Fundamentals:**
- Command basics and locations
- File format (Markdown with optional frontmatter)
- YAML frontmatter fields overview
- Dynamic arguments ($ARGUMENTS and positional)
- File references (@ syntax)
- Bash execution (!` syntax)
- Command organization patterns
- Best practices and common patterns
- Troubleshooting
**Plugin-Specific:**
- ${CLAUDE_PLUGIN_ROOT} environment variable
- Plugin command discovery and organization
- Plugin command patterns (configuration, template, multi-script)
- Integration with plugin components (agents, skills, hooks)
- Validation patterns (argument, file, resource, error handling)
### References
Detailed documentation:
- **frontmatter-reference.md**: Complete YAML frontmatter field specifications
- All field descriptions with types and defaults
- When to use each field
- Examples and best practices
- Validation and common errors
- **plugin-features-reference.md**: Plugin-specific command features
- Plugin command discovery and organization
- ${CLAUDE_PLUGIN_ROOT} environment variable usage
- Plugin command patterns (configuration, template, multi-script)
- Integration with plugin agents, skills, and hooks
- Validation patterns and error handling
### Examples
Practical command examples:
- **simple-commands.md**: 10 complete command examples
- Code review commands
- Testing commands
- Deployment commands
- Documentation generators
- Git integration commands
- Analysis and research commands
- **plugin-commands.md**: 10 plugin-specific command examples
- Simple plugin commands with scripts
- Multi-script workflows
- Template-based generation
- Configuration-driven deployment
- Agent and skill integration
- Multi-component workflows
- Validated input commands
- Environment-aware commands
## When This Skill Triggers
Claude Code activates this skill when users:
- Ask to "create a slash command" or "add a command"
- Need to "write a custom command"
- Want to "define command arguments"
- Ask about "command frontmatter" or YAML configuration
- Need to "organize commands" or use namespacing
- Want to create commands with file references
- Ask about "bash execution in commands"
- Need command development best practices
## Progressive Disclosure
The skill uses progressive disclosure:
1. **SKILL.md** (~2,470 words): Core concepts, common patterns, and plugin features overview
2. **References** (~13,500 words total): Detailed specifications
- frontmatter-reference.md (~1,200 words)
- plugin-features-reference.md (~1,800 words)
- interactive-commands.md (~2,500 words)
- advanced-workflows.md (~1,700 words)
- testing-strategies.md (~2,200 words)
- documentation-patterns.md (~2,000 words)
- marketplace-considerations.md (~2,200 words)
3. **Examples** (~6,000 words total): Complete working command examples
- simple-commands.md
- plugin-commands.md
Claude loads references and examples as needed based on task.
## Command Basics Quick Reference
### File Format
```markdown
---
description: Brief description
argument-hint: [arg1] [arg2]
allowed-tools: Read, Bash(git:*)
---
Command prompt content with:
- Arguments: $1, $2, or $ARGUMENTS
- Files: @path/to/file
- Bash: !`command here`
```
### Locations
- **Project**: `.claude/commands/` (shared with team)
- **Personal**: `~/.claude/commands/` (your commands)
- **Plugin**: `plugin-name/commands/` (plugin-specific)
### Key Features
**Dynamic arguments:**
- `$ARGUMENTS` - All arguments as single string
- `$1`, `$2`, `$3` - Positional arguments
**File references:**
- `@path/to/file` - Include file contents
**Bash execution:**
- !`command` - Execute and include output
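A single command can combine all three features. A minimal sketch (the command name and `gh` usage are illustrative):
```markdown
---
description: Summarize a PR against a file
argument-hint: [pr-number] [file-path]
allowed-tools: Read, Bash(gh:*)
---
PR details: !`gh pr view $1`
Implementation: @$2
Summarize how @$2 relates to the changes in PR #$1.
```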
## Frontmatter Fields Quick Reference
| Field | Purpose | Example |
|-------|---------|---------|
| `description` | Brief description for /help | `"Review code for issues"` |
| `allowed-tools` | Restrict tool access | `Read, Bash(git:*)` |
| `model` | Specify model | `sonnet`, `opus`, `haiku` |
| `argument-hint` | Document arguments | `[pr-number] [priority]` |
| `disable-model-invocation` | Manual-only command | `true` |
## Common Patterns
### Simple Review Command
```markdown
---
description: Review code for issues
---
Review this code for quality and potential bugs.
```
### Command with Arguments
```markdown
---
description: Deploy to environment
argument-hint: [environment] [version]
---
Deploy to $1 environment using version $2
```
### Command with File Reference
```markdown
---
description: Document file
argument-hint: [file-path]
---
Generate documentation for @$1
```
### Command with Bash Execution
```markdown
---
description: Show Git status
allowed-tools: Bash(git:*)
---
Current status: !`git status`
Recent commits: !`git log --oneline -5`
```
## Development Workflow
1. **Design command:**
- Define purpose and scope
- Determine required arguments
- Identify needed tools
2. **Create file:**
- Choose appropriate location
- Create `.md` file with command name
- Write basic prompt
3. **Add frontmatter:**
- Start minimal (just description)
- Add fields as needed (allowed-tools, etc.)
- Document arguments with argument-hint
4. **Test command:**
- Invoke with `/command-name`
- Verify arguments work
- Check bash execution
- Test file references
5. **Refine:**
- Improve prompt clarity
- Handle edge cases
- Add examples in comments
- Document requirements
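Worked end to end, these steps might produce something like this hypothetical /lint-diff command:
```markdown
---
description: Review lint issues in files changed on this branch
allowed-tools: Bash(git:*), Read
---
<!-- Usage: /lint-diff -->
Changed files: !`git diff --name-only main`
Review each changed file for style and lint issues.
Report findings with file paths and line numbers.
```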
## Best Practices Summary
1. **Single responsibility**: One command, one clear purpose
2. **Clear descriptions**: Make discoverable in `/help`
3. **Document arguments**: Always use argument-hint
4. **Minimal tools**: Use most restrictive allowed-tools
5. **Test thoroughly**: Verify all features work
6. **Add comments**: Explain complex logic
7. **Handle errors**: Consider missing arguments/files
## Status
**Completed enhancements:**
- ✓ Plugin command patterns (${CLAUDE_PLUGIN_ROOT}, discovery, organization)
- ✓ Integration patterns (agents, skills, hooks coordination)
- ✓ Validation patterns (input, file, resource validation, error handling)
- ✓ Advanced workflows (multi-step command sequences)
- ✓ Testing strategies (how to test commands effectively)
- ✓ Documentation patterns (command documentation best practices)
- ✓ Marketplace considerations (publishing and distribution)
## Maintenance
To update this skill:
1. Keep SKILL.md focused on core fundamentals
2. Move detailed specifications to references/
3. Add new examples/ for different use cases
4. Update frontmatter when new fields added
5. Ensure imperative/infinitive form throughout
6. Test examples work with current Claude Code
## Version History
**v0.1.0** (2025-01-15):
- Initial release with basic command fundamentals
- Frontmatter field reference
- 10 simple command examples
- Ready for plugin-specific pattern additions


@@ -0,0 +1,834 @@
---
name: Command Development
description: This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
version: 0.2.0
---
# Command Development for Claude Code
## Overview
Slash commands are frequently-used prompts defined as Markdown files that Claude executes during interactive sessions. Understanding command structure, frontmatter options, and dynamic features enables creating powerful, reusable workflows.
**Key concepts:**
- Markdown file format for commands
- YAML frontmatter for configuration
- Dynamic arguments and file references
- Bash execution for context
- Command organization and namespacing
## Command Basics
### What is a Slash Command?
A slash command is a Markdown file containing a prompt that Claude executes when invoked. Commands provide:
- **Reusability**: Define once, use repeatedly
- **Consistency**: Standardize common workflows
- **Sharing**: Distribute across team or projects
- **Efficiency**: Quick access to complex prompts
### Critical: Commands are Instructions FOR Claude
**Commands are written for agent consumption, not human consumption.**
When a user invokes `/command-name`, the command content becomes Claude's instructions. Write commands as directives TO Claude about what to do, not as messages TO the user.
**Correct approach (instructions for Claude):**
```markdown
Review this code for security vulnerabilities including:
- SQL injection
- XSS attacks
- Authentication issues
Provide specific line numbers and severity ratings.
```
**Incorrect approach (messages to user):**
```markdown
This command will review your code for security issues.
You'll receive a report with vulnerability details.
```
The first example tells Claude what to do. The second tells the user what will happen but doesn't instruct Claude. Always use the first approach.
### Command Locations
**Project commands** (shared with team):
- Location: `.claude/commands/`
- Scope: Available in specific project
- Label: Shown as "(project)" in `/help`
- Use for: Team workflows, project-specific tasks
**Personal commands** (available everywhere):
- Location: `~/.claude/commands/`
- Scope: Available in all projects
- Label: Shown as "(user)" in `/help`
- Use for: Personal workflows, cross-project utilities
**Plugin commands** (bundled with plugins):
- Location: `plugin-name/commands/`
- Scope: Available when plugin installed
- Label: Shown as "(plugin-name)" in `/help`
- Use for: Plugin-specific functionality
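Side by side, the three locations and the `/help` labels they produce (file names illustrative):
```
.claude/commands/review.md       # /review   — shown as "(project)"
~/.claude/commands/scratch.md    # /scratch  — shown as "(user)"
my-plugin/commands/analyze.md    # /analyze  — shown as "(my-plugin)"
```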
## File Format
### Basic Structure
Commands are Markdown files with `.md` extension:
```
.claude/commands/
├── review.md # /review command
├── test.md # /test command
└── deploy.md # /deploy command
```
**Simple command:**
```markdown
Review this code for security vulnerabilities including:
- SQL injection
- XSS attacks
- Authentication bypass
- Insecure data handling
```
No frontmatter needed for basic commands.
### With YAML Frontmatter
Add configuration using YAML frontmatter:
```markdown
---
description: Review code for security issues
allowed-tools: Read, Grep, Bash(git:*)
model: sonnet
---
Review this code for security vulnerabilities...
```
## YAML Frontmatter Fields
### description
**Purpose:** Brief description shown in `/help`
**Type:** String
**Default:** First line of command prompt
```yaml
---
description: Review pull request for code quality
---
```
**Best practice:** Clear, actionable description (under 60 characters)
### allowed-tools
**Purpose:** Specify which tools command can use
**Type:** String or Array
**Default:** Inherits from conversation
```yaml
---
allowed-tools: Read, Write, Edit, Bash(git:*)
---
```
**Patterns:**
- `Read, Write, Edit` - Specific tools
- `Bash(git:*)` - Bash with git commands only
- `*` - All tools (rarely needed)
**Use when:** Command requires specific tool access
### model
**Purpose:** Specify model for command execution
**Type:** String (sonnet, opus, haiku)
**Default:** Inherits from conversation
```yaml
---
model: haiku
---
```
**Use cases:**
- `haiku` - Fast, simple commands
- `sonnet` - Standard workflows
- `opus` - Complex analysis
### argument-hint
**Purpose:** Document expected arguments for autocomplete
**Type:** String
**Default:** None
```yaml
---
argument-hint: [pr-number] [priority] [assignee]
---
```
**Benefits:**
- Helps users understand command arguments
- Improves command discovery
- Documents command interface
### disable-model-invocation
**Purpose:** Prevent SlashCommand tool from programmatically calling command
**Type:** Boolean
**Default:** false
```yaml
---
disable-model-invocation: true
---
```
**Use when:** Command should only be manually invoked
## Dynamic Arguments
### Using $ARGUMENTS
Capture all arguments as single string:
```markdown
---
description: Fix issue by number
argument-hint: [issue-number]
---
Fix issue #$ARGUMENTS following our coding standards and best practices.
```
**Usage:**
```
> /fix-issue 123
> /fix-issue 456
```
**Expands to:**
```
Fix issue #123 following our coding standards...
Fix issue #456 following our coding standards...
```
### Using Positional Arguments
Capture individual arguments with `$1`, `$2`, `$3`, etc.:
```markdown
---
description: Review PR with priority and assignee
argument-hint: [pr-number] [priority] [assignee]
---
Review pull request #$1 with priority level $2.
After review, assign to $3 for follow-up.
```
**Usage:**
```
> /review-pr 123 high alice
```
**Expands to:**
```
Review pull request #123 with priority level high.
After review, assign to alice for follow-up.
```
### Combining Arguments
Mix positional and remaining arguments:
```markdown
Deploy $1 to $2 environment with options: $3
```
**Usage:**
```
> /deploy api staging --force --skip-tests
```
**Expands to:**
```
Deploy api to staging environment with options: --force --skip-tests
```
## File References
### Using @ Syntax
Include file contents in command:
```markdown
---
description: Review specific file
argument-hint: [file-path]
---
Review @$1 for:
- Code quality
- Best practices
- Potential bugs
```
**Usage:**
```
> /review-file src/api/users.ts
```
**Effect:** Claude reads `src/api/users.ts` before processing command
### Multiple File References
Reference multiple files:
```markdown
Compare @src/old-version.js with @src/new-version.js
Identify:
- Breaking changes
- New features
- Bug fixes
```
### Static File References
Reference known files without arguments:
```markdown
Review @package.json and @tsconfig.json for consistency
Ensure:
- TypeScript version matches
- Dependencies are aligned
- Build configuration is correct
```
## Bash Execution in Commands
Commands can execute bash commands inline to dynamically gather context before Claude processes the command. This is useful for including repository state, environment information, or project-specific context.
**When to use:**
- Include dynamic context (git status, environment vars, etc.)
- Gather project/repository state
- Build context-aware workflows
**Implementation details:**
For complete syntax, examples, and best practices, see the bash execution section of `references/plugin-features-reference.md`. The reference includes the exact syntax and multiple working examples to help avoid execution issues.
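As a brief sketch of the idea (the reference covers exact syntax and pitfalls), a command that gathers repository state before Claude responds:
```markdown
---
description: Summarize working tree state
allowed-tools: Bash(git:*)
---
Branch: !`git branch --show-current`
Changes: !`git status --short`
Summarize the current state and suggest next steps.
```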
## Command Organization
### Flat Structure
Simple organization for small command sets:
```
.claude/commands/
├── build.md
├── test.md
├── deploy.md
├── review.md
└── docs.md
```
**Use when:** 5-15 commands, no clear categories
### Namespaced Structure
Organize commands in subdirectories:
```
.claude/commands/
├── ci/
│ ├── build.md # /build (project:ci)
│ ├── test.md # /test (project:ci)
│ └── lint.md # /lint (project:ci)
├── git/
│ ├── commit.md # /commit (project:git)
│ └── pr.md # /pr (project:git)
└── docs/
├── generate.md # /generate (project:docs)
└── publish.md # /publish (project:docs)
```
**Benefits:**
- Logical grouping by category
- Namespace shown in `/help`
- Easier to find related commands
**Use when:** 15+ commands, clear categories
## Best Practices
### Command Design
1. **Single responsibility:** One command, one task
2. **Clear descriptions:** Self-explanatory in `/help`
3. **Explicit dependencies:** Use `allowed-tools` when needed
4. **Document arguments:** Always provide `argument-hint`
5. **Consistent naming:** Use verb-noun pattern (review-pr, fix-issue)
### Argument Handling
1. **Validate arguments:** Check for required arguments in prompt
2. **Provide defaults:** Suggest defaults when arguments missing
3. **Document format:** Explain expected argument format
4. **Handle edge cases:** Consider missing or invalid arguments
```markdown
---
argument-hint: [pr-number]
---
If $1 is provided, review PR #$1.
If $1 is missing, ask the user for a PR number. Usage: /review-pr [number]
```
### File References
1. **Explicit paths:** Use clear file paths
2. **Check existence:** Handle missing files gracefully
3. **Relative paths:** Use project-relative paths
4. **Glob support:** Consider using Glob tool for patterns
### Bash Commands
1. **Limit scope:** Use `Bash(git:*)` not `Bash(*)`
2. **Safe commands:** Avoid destructive operations
3. **Handle errors:** Consider command failures
4. **Keep fast:** Long-running commands slow invocation
### Documentation
1. **Add comments:** Explain complex logic
2. **Provide examples:** Show usage in comments
3. **List requirements:** Document dependencies
4. **Version commands:** Note breaking changes
```markdown
---
description: Deploy application to environment
argument-hint: [environment] [version]
---
<!--
Usage: /deploy [staging|production] [version]
Requires: AWS credentials configured
Example: /deploy staging v1.2.3
-->
Deploy application to $1 environment using version $2...
```
## Common Patterns
### Review Pattern
```markdown
---
description: Review code changes
allowed-tools: Read, Bash(git:*)
---
Files changed: !`git diff --name-only`
Review each file for:
1. Code quality and style
2. Potential bugs or issues
3. Test coverage
4. Documentation needs
Provide specific feedback for each file.
```
### Testing Pattern
```markdown
---
description: Run tests for specific file
argument-hint: [test-file]
allowed-tools: Bash(npm:*)
---
Run tests: !`npm test $1`
Analyze results and suggest fixes for failures.
```
### Documentation Pattern
```markdown
---
description: Generate documentation for file
argument-hint: [source-file]
---
Generate comprehensive documentation for @$1 including:
- Function/class descriptions
- Parameter documentation
- Return value descriptions
- Usage examples
- Edge cases and errors
```
### Workflow Pattern
```markdown
---
description: Complete PR workflow
argument-hint: [pr-number]
allowed-tools: Bash(gh:*), Read
---
PR #$1 Workflow:
1. Fetch PR: !`gh pr view $1`
2. Review changes
3. Run checks
4. Approve or request changes
```
## Troubleshooting
**Command not appearing:**
- Check file is in correct directory
- Verify `.md` extension present
- Ensure valid Markdown format
- Restart Claude Code
**Arguments not working:**
- Verify `$1`, `$2` syntax correct
- Check `argument-hint` matches usage
- Ensure no extra spaces
**Bash execution failing:**
- Check `allowed-tools` includes Bash
- Verify command syntax in backticks
- Test command in terminal first
- Check for required permissions
**File references not working:**
- Verify `@` syntax correct
- Check file path is valid
- Ensure Read tool allowed
- Use absolute or project-relative paths
## Plugin-Specific Features
### CLAUDE_PLUGIN_ROOT Variable
Plugin commands have access to `${CLAUDE_PLUGIN_ROOT}`, an environment variable that resolves to the plugin's absolute path.
**Purpose:**
- Reference plugin files portably
- Execute plugin scripts
- Load plugin configuration
- Access plugin templates
**Basic usage:**
```markdown
---
description: Analyze using plugin script
allowed-tools: Bash(node:*)
---
Run analysis: !`node ${CLAUDE_PLUGIN_ROOT}/scripts/analyze.js $1`
Review results and report findings.
```
**Common patterns:**
```markdown
# Execute plugin script
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/script.sh`
# Load plugin configuration
@${CLAUDE_PLUGIN_ROOT}/config/settings.json
# Use plugin template
@${CLAUDE_PLUGIN_ROOT}/templates/report.md
# Access plugin resources
@${CLAUDE_PLUGIN_ROOT}/docs/reference.md
```
**Why use it:**
- Works across all installations
- Portable between systems
- No hardcoded paths needed
- Essential for multi-file plugins
### Plugin Command Organization
Plugin commands discovered automatically from `commands/` directory:
```
plugin-name/
├── commands/
│ ├── foo.md # /foo (plugin:plugin-name)
│ ├── bar.md # /bar (plugin:plugin-name)
│ └── utils/
│ └── helper.md # /helper (plugin:plugin-name:utils)
└── .claude-plugin/
    └── plugin.json
```
**Namespace benefits:**
- Logical command grouping
- Shown in `/help` output
- Avoid name conflicts
- Organize related commands
**Naming conventions:**
- Use descriptive action names
- Avoid generic names (test, run)
- Consider plugin-specific prefix
- Use hyphens for multi-word names
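Applying these conventions (names illustrative):
```
commands/
├── analyze-deps.md     # descriptive, hyphenated action
├── gen-api-docs.md     # action-oriented, unlikely to collide
└── run.md              # avoid: too generic, invites conflicts
```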
### Plugin Command Patterns
**Configuration-based pattern:**
```markdown
---
description: Deploy using plugin configuration
argument-hint: [environment]
allowed-tools: Read, Bash(*)
---
Load configuration: @${CLAUDE_PLUGIN_ROOT}/config/$1-deploy.json
Deploy to $1 using configuration settings.
Monitor deployment and report status.
```
**Template-based pattern:**
```markdown
---
description: Generate docs from template
argument-hint: [component]
---
Template: @${CLAUDE_PLUGIN_ROOT}/templates/docs.md
Generate documentation for $1 following template structure.
```
**Multi-script pattern:**
```markdown
---
description: Complete build workflow
allowed-tools: Bash(*)
---
Build: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build.sh`
Test: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/test.sh`
Package: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/package.sh`
Review outputs and report workflow status.
```
**See `references/plugin-features-reference.md` for detailed patterns.**
## Integration with Plugin Components
Commands can integrate with other plugin components for powerful workflows.
### Agent Integration
Launch plugin agents for complex tasks:
```markdown
---
description: Deep code review
argument-hint: [file-path]
---
Initiate comprehensive review of @$1 using the code-reviewer agent.
The agent will analyze:
- Code structure
- Security issues
- Performance
- Best practices
Agent uses plugin resources:
- ${CLAUDE_PLUGIN_ROOT}/config/rules.json
- ${CLAUDE_PLUGIN_ROOT}/checklists/review.md
```
**Key points:**
- Agent must exist in `plugin/agents/` directory
- Claude uses Task tool to launch agent
- Document agent capabilities
- Reference plugin resources agent uses
### Skill Integration
Leverage plugin skills for specialized knowledge:
```markdown
---
description: Document API with standards
argument-hint: [api-file]
---
Document API in @$1 following plugin standards.
Use the api-docs-standards skill to ensure:
- Complete endpoint documentation
- Consistent formatting
- Example quality
- Error documentation
Generate production-ready API docs.
```
**Key points:**
- Skill must exist in `plugin/skills/` directory
- Mention skill name to trigger invocation
- Document skill purpose
- Explain what skill provides
### Hook Coordination
Design commands that work with plugin hooks:
- Commands can prepare state for hooks to process
- Hooks execute automatically on tool events
- Commands should document expected hook behavior
- Guide Claude on interpreting hook output
See `references/plugin-features-reference.md` for examples of commands that coordinate with hooks.
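As a sketch, a command that stages output for a plugin hook to pick up (the file name and hook behavior here are assumptions, not a fixed convention):
```markdown
---
description: Build and stage results for the plugin's hooks
allowed-tools: Bash(*), Write
---
Build: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build.sh`
Write a build summary to .claude/build-result.local.md so the
plugin's PostToolUse hook can validate it on later tool events.
Explain to the user what the hook will check.
```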
### Multi-Component Workflows
Combine agents, skills, and scripts:
```markdown
---
description: Comprehensive review workflow
argument-hint: [file]
allowed-tools: Bash(node:*), Read
---
Target: @$1
Phase 1 - Static Analysis:
!`node ${CLAUDE_PLUGIN_ROOT}/scripts/lint.js $1`
Phase 2 - Deep Review:
Launch code-reviewer agent for detailed analysis.
Phase 3 - Standards Check:
Use coding-standards skill for validation.
Phase 4 - Report:
Template: @${CLAUDE_PLUGIN_ROOT}/templates/review.md
Compile findings into report following template.
```
**When to use:**
- Complex multi-step workflows
- Leverage multiple plugin capabilities
- Require specialized analysis
- Need structured outputs
## Validation Patterns
Commands should validate inputs and resources before processing.
### Argument Validation
```markdown
---
description: Deploy with validation
argument-hint: [environment]
---
Validate environment: !`echo "$1" | grep -E "^(dev|staging|prod)$" || echo "INVALID"`
If $1 is valid environment:
Deploy to $1
Otherwise:
Explain valid environments: dev, staging, prod
Show usage: /deploy [environment]
```
### File Existence Checks
```markdown
---
description: Process configuration
argument-hint: [config-file]
---
Check file exists: !`test -f "$1" && echo "EXISTS" || echo "MISSING"`
If file exists:
Process configuration: @$1
Otherwise:
Explain where to place config file
Show expected format
Provide example configuration
```
### Plugin Resource Validation
```markdown
---
description: Run plugin analyzer
allowed-tools: Bash(test:*)
---
Validate plugin setup:
- Script: !`test -x ${CLAUDE_PLUGIN_ROOT}/bin/analyze && echo "✓" || echo "✗"`
- Config: !`test -f ${CLAUDE_PLUGIN_ROOT}/config.json && echo "✓" || echo "✗"`
If all checks pass, run analysis.
Otherwise, report missing components.
```
### Error Handling
```markdown
---
description: Build with error handling
allowed-tools: Bash(*)
---
Execute build: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build.sh 2>&1 || echo "BUILD_FAILED"`
If build succeeded:
Report success and output location
If build failed:
Analyze error output
Suggest likely causes
Provide troubleshooting steps
```
**Best practices:**
- Validate early in command
- Provide helpful error messages
- Suggest corrective actions
- Handle edge cases gracefully
---
For detailed frontmatter field specifications, see `references/frontmatter-reference.md`.
For plugin-specific features and patterns, see `references/plugin-features-reference.md`.
For command pattern examples, see `examples/` directory.


@@ -0,0 +1,557 @@
# Plugin Command Examples
Practical examples of commands designed for Claude Code plugins, demonstrating plugin-specific patterns and features.
## Table of Contents
1. [Simple Plugin Command](#1-simple-plugin-command)
2. [Script-Based Analysis](#2-script-based-analysis)
3. [Template-Based Generation](#3-template-based-generation)
4. [Multi-Script Workflow](#4-multi-script-workflow)
5. [Configuration-Driven Deployment](#5-configuration-driven-deployment)
6. [Agent Integration](#6-agent-integration)
7. [Skill Integration](#7-skill-integration)
8. [Multi-Component Workflow](#8-multi-component-workflow)
9. [Validated Input Command](#9-validated-input-command)
10. [Environment-Aware Command](#10-environment-aware-command)
---
## 1. Simple Plugin Command
**Use case:** Basic command that uses plugin script
**File:** `commands/analyze.md`
```markdown
---
description: Analyze code quality using plugin tools
argument-hint: [file-path]
allowed-tools: Bash(node:*), Read
---
Analyze @$1 using plugin's quality checker:
!`node ${CLAUDE_PLUGIN_ROOT}/scripts/quality-check.js $1`
Review the analysis output and provide:
1. Summary of findings
2. Priority issues to address
3. Suggested improvements
4. Code quality score interpretation
```
**Key features:**
- Uses `${CLAUDE_PLUGIN_ROOT}` for portable path
- Combines file reference with script execution
- Simple single-purpose command
---
## 2. Script-Based Analysis
**Use case:** Run comprehensive analysis using multiple plugin scripts
**File:** `commands/full-audit.md`
```markdown
---
description: Complete code audit using plugin suite
argument-hint: [directory]
allowed-tools: Bash(*)
model: sonnet
---
Running complete audit on $1:
**Security scan:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/security-scan.sh $1`
**Performance analysis:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/perf-analyze.sh $1`
**Best practices check:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/best-practices.sh $1`
Analyze all results and create comprehensive report including:
- Critical issues requiring immediate attention
- Performance optimization opportunities
- Security vulnerabilities and fixes
- Overall health score and recommendations
```
**Key features:**
- Multiple script executions
- Organized output sections
- Comprehensive workflow
- Clear reporting structure
---
## 3. Template-Based Generation
**Use case:** Generate documentation following plugin template
**File:** `commands/gen-api-docs.md`
```markdown
---
description: Generate API documentation from template
argument-hint: [api-file]
---
Template structure: @${CLAUDE_PLUGIN_ROOT}/templates/api-documentation.md
API implementation: @$1
Generate complete API documentation following the template format above.
Ensure documentation includes:
- Endpoint descriptions with HTTP methods
- Request/response schemas
- Authentication requirements
- Error codes and handling
- Usage examples with curl commands
- Rate limiting information
Format output as markdown suitable for README or docs site.
```
**Key features:**
- Uses plugin template
- Combines template with source file
- Standardized output format
- Clear documentation structure
---
## 4. Multi-Script Workflow
**Use case:** Orchestrate build, test, and deploy workflow
**File:** `commands/release.md`
```markdown
---
description: Execute complete release workflow
argument-hint: [version]
allowed-tools: Bash(*), Read
---
Executing release workflow for version $1:
**Step 1 - Pre-release validation:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/pre-release-check.sh $1`
**Step 2 - Build artifacts:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build-release.sh $1`
**Step 3 - Run test suite:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/run-tests.sh`
**Step 4 - Package release:**
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/package.sh $1`
Review all step outputs and report:
1. Any failures or warnings
2. Build artifacts location
3. Test results summary
4. Next steps for deployment
5. Rollback plan if needed
```
**Key features:**
- Multi-step workflow
- Sequential script execution
- Clear step numbering
- Comprehensive reporting
---
## 5. Configuration-Driven Deployment
**Use case:** Deploy using environment-specific plugin configuration
**File:** `commands/deploy.md`
```markdown
---
description: Deploy application to environment
argument-hint: [environment]
allowed-tools: Read, Bash(*)
---
Deployment configuration for $1: @${CLAUDE_PLUGIN_ROOT}/config/$1-deploy.json
Current git state: !`git rev-parse --short HEAD`
Build info: !`grep -E '(name|version)' package.json`
Execute deployment to $1 environment using configuration above.
Deployment checklist:
1. Validate configuration settings
2. Build application for $1
3. Run pre-deployment tests
4. Deploy to target environment
5. Run smoke tests
6. Verify deployment success
7. Update deployment log
Report deployment status and any issues encountered.
```
**Key features:**
- Environment-specific configuration
- Dynamic config file loading
- Pre-deployment validation
- Structured checklist
---
## 6. Agent Integration
**Use case:** Command that launches plugin agent for complex task
**File:** `commands/deep-review.md`
```markdown
---
description: Deep code review using plugin agent
argument-hint: [file-or-directory]
---
Initiate comprehensive code review of @$1 using the code-reviewer agent.
The agent will perform:
1. **Static analysis** - Check for code smells and anti-patterns
2. **Security audit** - Identify potential vulnerabilities
3. **Performance review** - Find optimization opportunities
4. **Best practices** - Ensure code follows standards
5. **Documentation check** - Verify adequate documentation
The agent has access to:
- Plugin's linting rules: ${CLAUDE_PLUGIN_ROOT}/config/lint-rules.json
- Security checklist: ${CLAUDE_PLUGIN_ROOT}/checklists/security.md
- Performance guidelines: ${CLAUDE_PLUGIN_ROOT}/docs/performance.md
Note: This uses the Task tool to launch the plugin's code-reviewer agent for thorough analysis.
```
**Key features:**
- Delegates to plugin agent
- Documents agent capabilities
- References plugin resources
- Clear scope definition
---
## 7. Skill Integration
**Use case:** Command that leverages plugin skill for specialized knowledge
**File:** `commands/document-api.md`
```markdown
---
description: Document API following plugin standards
argument-hint: [api-file]
---
API source code: @$1
Generate API documentation following the plugin's API documentation standards.
Use the api-documentation-standards skill to ensure:
- **OpenAPI compliance** - Follow OpenAPI 3.0 specification
- **Consistent formatting** - Use plugin's documentation style
- **Complete coverage** - Document all endpoints and schemas
- **Example quality** - Provide realistic usage examples
- **Error documentation** - Cover all error scenarios
The skill provides:
- Standard documentation templates
- API documentation best practices
- Common patterns for this codebase
- Quality validation criteria
Generate production-ready API documentation.
```
**Key features:**
- Invokes plugin skill by name
- Documents skill purpose
- Clear expectations
- Leverages skill knowledge
---
## 8. Multi-Component Workflow
**Use case:** Complex workflow using agents, skills, and scripts
**File:** `commands/complete-review.md`
```markdown
---
description: Comprehensive review using all plugin components
argument-hint: [file-path]
allowed-tools: Bash(node:*), Read
---
Target file: @$1
Execute comprehensive review workflow:
**Phase 1: Automated Analysis**
Run plugin analyzer: !`node ${CLAUDE_PLUGIN_ROOT}/scripts/analyze.js $1`
**Phase 2: Deep Review (Agent)**
Launch the code-quality-reviewer agent for detailed analysis.
Agent will examine:
- Code structure and organization
- Error handling patterns
- Testing coverage
- Documentation quality
**Phase 3: Standards Check (Skill)**
Use the coding-standards skill to validate:
- Naming conventions
- Code formatting
- Best practices adherence
- Framework-specific patterns
**Phase 4: Report Generation**
Template: @${CLAUDE_PLUGIN_ROOT}/templates/review-report.md
Compile all findings into comprehensive report following template.
**Phase 5: Recommendations**
Generate prioritized action items:
1. Critical issues (must fix)
2. Important improvements (should fix)
3. Nice-to-have enhancements (could fix)
Include specific file locations and suggested changes for each item.
```
**Key features:**
- Multi-phase workflow
- Combines scripts, agents, skills
- Template-based reporting
- Prioritized outputs
---
## 9. Validated Input Command
**Use case:** Command with input validation and error handling
**File:** `commands/build-env.md`
```markdown
---
description: Build for specific environment with validation
argument-hint: [environment]
allowed-tools: Bash(*)
---
Validate environment argument: !`echo "$1" | grep -qE "^(dev|staging|prod)$" && echo "VALID" || echo "INVALID"`
Check build script exists: !`test -x "${CLAUDE_PLUGIN_ROOT}/scripts/build.sh" && echo "EXISTS" || echo "MISSING"`
Verify configuration available: !`test -f "${CLAUDE_PLUGIN_ROOT}/config/$1.json" && echo "FOUND" || echo "NOT_FOUND"`
If all validations pass:
**Configuration:** @${CLAUDE_PLUGIN_ROOT}/config/$1.json
**Execute build:** !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build.sh $1 2>&1`
**Validation results:** !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/validate-build.sh $1 2>&1`
Report build status and any issues.
If validations fail:
- Explain which validation failed
- Provide expected values/locations
- Suggest corrective actions
- Document troubleshooting steps
```
**Key features:**
- Input validation
- Resource existence checks
- Error handling
- Helpful error messages
- Graceful failure handling
---
## 10. Environment-Aware Command
**Use case:** Command that adapts behavior based on environment
**File:** `commands/run-checks.md`
```markdown
---
description: Run environment-appropriate checks
argument-hint: [environment]
allowed-tools: Bash(*), Read
---
Environment: $1
Load environment configuration: @${CLAUDE_PLUGIN_ROOT}/config/$1-checks.json
Determine check level: !`echo "$1" | grep -qE "^prod$" && echo "FULL" || echo "BASIC"`
**For production environment:**
- Full test suite: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/test-full.sh`
- Security scan: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/security-scan.sh`
- Performance audit: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/perf-check.sh`
- Compliance check: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/compliance.sh`
**For non-production environments:**
- Basic tests: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/test-basic.sh`
- Quick lint: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/lint.sh`
Analyze results based on environment requirements:
**Production:** All checks must pass with zero critical issues
**Staging:** No critical issues, warnings acceptable
**Development:** Focus on blocking issues only
Report status and recommend proceed/block decision.
```
**Key features:**
- Environment-aware logic
- Conditional execution
- Different validation levels
- Appropriate reporting per environment
---
## Common Patterns Summary
### Pattern: Plugin Script Execution
```markdown
!`node ${CLAUDE_PLUGIN_ROOT}/scripts/script-name.js $1`
```
Use for: Running plugin-provided Node.js scripts
### Pattern: Plugin Configuration Loading
```markdown
@${CLAUDE_PLUGIN_ROOT}/config/config-name.json
```
Use for: Loading plugin configuration files
### Pattern: Plugin Template Usage
```markdown
@${CLAUDE_PLUGIN_ROOT}/templates/template-name.md
```
Use for: Using plugin templates for generation
### Pattern: Agent Invocation
```markdown
Launch the [agent-name] agent for [task description].
```
Use for: Delegating complex tasks to plugin agents
### Pattern: Skill Reference
```markdown
Use the [skill-name] skill to ensure [requirements].
```
Use for: Leveraging plugin skills for specialized knowledge
### Pattern: Input Validation
```markdown
Validate input: !`echo "$1" | grep -qE "^pattern$" && echo "OK" || echo "ERROR"`
```
Use for: Validating command arguments
### Pattern: Resource Validation
```markdown
Check exists: !`test -f ${CLAUDE_PLUGIN_ROOT}/path/file && echo "YES" || echo "NO"`
```
Use for: Verifying required plugin files exist
---
## Development Tips
### Testing Plugin Commands
1. **Test with plugin installed:**
```bash
cd /path/to/plugin
claude "/command-name args"
```
2. **Verify ${CLAUDE_PLUGIN_ROOT} expansion:**
```bash
# Add debug output to command
!`echo "Plugin root: ${CLAUDE_PLUGIN_ROOT}"`
```
3. **Test across different working directories:**
```bash
cd /tmp && claude /command-name
cd /other/project && claude /command-name
```
4. **Validate resource availability:**
```bash
# Check all plugin resources exist
!`ls -la ${CLAUDE_PLUGIN_ROOT}/scripts/`
!`ls -la ${CLAUDE_PLUGIN_ROOT}/config/`
```
### Common Mistakes to Avoid
1. **Using relative paths instead of ${CLAUDE_PLUGIN_ROOT}:**
```markdown
# Wrong
!`node ./scripts/analyze.js`
# Correct
!`node ${CLAUDE_PLUGIN_ROOT}/scripts/analyze.js`
```
2. **Forgetting to allow required tools:**
```markdown
# Missing allowed-tools
!`bash script.sh` # Will fail without Bash permission
# Correct
---
allowed-tools: Bash(*)
---
!`bash ${CLAUDE_PLUGIN_ROOT}/scripts/script.sh`
```
3. **Not validating inputs:**
```markdown
# Risky - no validation
Deploy to $1 environment
# Better - with validation
Validate: !`echo "$1" | grep -E "^(dev|staging|prod)$" || echo "INVALID"`
Deploy to $1 environment (if valid)
```
4. **Hardcoding plugin paths:**
```markdown
# Wrong - breaks on different installations
@/home/user/.claude/plugins/my-plugin/config.json
# Correct - works everywhere
@${CLAUDE_PLUGIN_ROOT}/config.json
```
---
For detailed plugin-specific features, see `references/plugin-features-reference.md`.
For general command development, see main `SKILL.md`.


@@ -0,0 +1,504 @@
# Simple Command Examples
Basic slash command patterns for common use cases.
**Important:** All examples below are written as instructions FOR Claude (agent consumption), not messages TO users. Commands tell Claude what to do, not tell users what will happen.
## Example 1: Code Review Command
**File:** `.claude/commands/review.md`
```markdown
---
description: Review code for quality and issues
allowed-tools: Read, Bash(git:*)
---
Review the code in this repository for:
1. **Code Quality:**
- Readability and maintainability
- Consistent style and formatting
- Appropriate abstraction levels
2. **Potential Issues:**
- Logic errors or bugs
- Edge cases not handled
- Performance concerns
3. **Best Practices:**
- Design patterns used correctly
- Error handling present
- Documentation adequate
Provide specific feedback with file and line references.
```
**Usage:**
```
> /review
```
---
## Example 2: Security Review Command
**File:** `.claude/commands/security-review.md`
```markdown
---
description: Review code for security vulnerabilities
allowed-tools: Read, Grep
model: sonnet
---
Perform comprehensive security review checking for:
**Common Vulnerabilities:**
- SQL injection risks
- Cross-site scripting (XSS)
- Authentication/authorization issues
- Insecure data handling
- Hardcoded secrets or credentials
**Security Best Practices:**
- Input validation present
- Output encoding correct
- Secure defaults used
- Error messages safe
- Logging appropriate (no sensitive data)
For each issue found:
- File and line number
- Severity (Critical/High/Medium/Low)
- Description of vulnerability
- Recommended fix
Prioritize issues by severity.
```
**Usage:**
```
> /security-review
```
---
## Example 3: Test Command with File Argument
**File:** `.claude/commands/test-file.md`
```markdown
---
description: Run tests for specific file
argument-hint: [test-file]
allowed-tools: Bash(npm:*), Bash(jest:*)
---
Run tests for $1:
Test execution: !`npm test $1`
Analyze results:
- Tests passed/failed
- Code coverage
- Performance issues
- Flaky tests
If failures found, suggest fixes based on error messages.
```
**Usage:**
```
> /test-file src/utils/helpers.test.ts
```
---
## Example 4: Documentation Generator
**File:** `.claude/commands/document.md`
```markdown
---
description: Generate documentation for file
argument-hint: [source-file]
---
Generate comprehensive documentation for @$1
Include:
**Overview:**
- Purpose and responsibility
- Main functionality
- Dependencies
**API Documentation:**
- Function/method signatures
- Parameter descriptions with types
- Return values with types
- Exceptions/errors thrown
**Usage Examples:**
- Basic usage
- Common patterns
- Edge cases
**Implementation Notes:**
- Algorithm complexity
- Performance considerations
- Known limitations
Format as Markdown suitable for project documentation.
```
**Usage:**
```
> /document src/api/users.ts
```
---
## Example 5: Git Status Summary
**File:** `.claude/commands/git-status.md`
```markdown
---
description: Summarize Git repository status
allowed-tools: Bash(git:*)
---
Repository Status Summary:
**Current Branch:** !`git branch --show-current`
**Status:** !`git status --short`
**Recent Commits:** !`git log --oneline -5`
**Remote Status:** !`git fetch && git status -sb`
Provide:
- Summary of changes
- Suggested next actions
- Any warnings or issues
```
**Usage:**
```
> /git-status
```
---
## Example 6: Deployment Command
**File:** `.claude/commands/deploy.md`
```markdown
---
description: Deploy to specified environment
argument-hint: [environment] [version]
allowed-tools: Bash(kubectl:*), Read
---
Deploy to $1 environment using version $2
**Pre-deployment Checks:**
1. Verify $1 configuration exists
2. Check version $2 is valid
3. Verify cluster accessibility: !`kubectl cluster-info`
**Deployment Steps:**
1. Update deployment manifest with version $2
2. Apply configuration to $1
3. Monitor rollout status
4. Verify pod health
5. Run smoke tests
**Rollback Plan:**
Document current version for rollback if issues occur.
Proceed with deployment? (yes/no)
```
**Usage:**
```
> /deploy staging v1.2.3
```
---
## Example 7: Comparison Command
**File:** `.claude/commands/compare-files.md`
```markdown
---
description: Compare two files
argument-hint: [file1] [file2]
---
Compare @$1 with @$2
**Analysis:**
1. **Differences:**
- Lines added
- Lines removed
- Lines modified
2. **Functional Changes:**
- Breaking changes
- New features
- Bug fixes
- Refactoring
3. **Impact:**
- Affected components
- Required updates elsewhere
- Migration requirements
4. **Recommendations:**
- Code review focus areas
- Testing requirements
- Documentation updates needed
Present as structured comparison report.
```
**Usage:**
```
> /compare-files src/old-api.ts src/new-api.ts
```
---
## Example 8: Quick Fix Command
**File:** `.claude/commands/quick-fix.md`
```markdown
---
description: Quick fix for common issues
argument-hint: [issue-description]
model: haiku
---
Quickly fix: $ARGUMENTS
**Approach:**
1. Identify the issue
2. Find relevant code
3. Propose fix
4. Explain solution
Focus on:
- Simple, direct solution
- Minimal changes
- Following existing patterns
- No breaking changes
Provide code changes with file paths and line numbers.
```
**Usage:**
```
> /quick-fix button not responding to clicks
> /quick-fix typo in error message
```
---
## Example 9: Research Command
**File:** `.claude/commands/research.md`
```markdown
---
description: Research best practices for topic
argument-hint: [topic]
model: sonnet
---
Research best practices for: $ARGUMENTS
**Coverage:**
1. **Current State:**
- How we currently handle this
- Existing implementations
2. **Industry Standards:**
- Common patterns
- Recommended approaches
- Tools and libraries
3. **Comparison:**
- Our approach vs standards
- Gaps or improvements needed
- Migration considerations
4. **Recommendations:**
- Concrete action items
- Priority and effort estimates
- Resources for implementation
Provide actionable guidance based on research.
```
**Usage:**
```
> /research error handling in async operations
> /research API authentication patterns
```
---
## Example 10: Explain Code Command
**File:** `.claude/commands/explain.md`
```markdown
---
description: Explain how code works
argument-hint: [file-or-function]
---
Explain @$1 in detail
**Explanation Structure:**
1. **Overview:**
- What it does
- Why it exists
- How it fits in system
2. **Step-by-Step:**
- Line-by-line walkthrough
- Key algorithms or logic
- Important details
3. **Inputs and Outputs:**
- Parameters and types
- Return values
- Side effects
4. **Edge Cases:**
- Error handling
- Special cases
- Limitations
5. **Usage Examples:**
- How to call it
- Common patterns
- Integration points
Explain at level appropriate for junior engineer.
```
**Usage:**
```
> /explain src/utils/cache.ts
> /explain AuthService.login
```
---
## Key Patterns
### Pattern 1: Read-Only Analysis
```markdown
---
allowed-tools: Read, Grep
---
Analyze but don't modify...
```
**Use for:** Code review, documentation, analysis
### Pattern 2: Git Operations
```markdown
---
allowed-tools: Bash(git:*)
---
!`git status`
Analyze and suggest...
```
**Use for:** Repository status, commit analysis
### Pattern 3: Single Argument
```markdown
---
argument-hint: [target]
---
Process $1...
```
**Use for:** File operations, targeted actions
### Pattern 4: Multiple Arguments
```markdown
---
argument-hint: [source] [target] [options]
---
Process $1 to $2 with $3...
```
**Use for:** Workflows, deployments, comparisons
### Pattern 5: Fast Execution
```markdown
---
model: haiku
---
Quick simple task...
```
**Use for:** Simple, repetitive commands
### Pattern 6: File Comparison
```markdown
Compare @$1 with @$2...
```
**Use for:** Diff analysis, migration planning
### Pattern 7: Context Gathering
```markdown
---
allowed-tools: Bash(git:*), Read
---
Context: !`git status`
Files: @file1 @file2
Analyze...
```
**Use for:** Informed decision making
## Tips for Writing Simple Commands
1. **Start basic:** Single responsibility, clear purpose
2. **Add complexity gradually:** Start without frontmatter
3. **Test incrementally:** Verify each feature works
4. **Use descriptive names:** Command name should indicate purpose
5. **Document arguments:** Always use argument-hint
6. **Provide examples:** Show usage in comments
7. **Handle errors:** Consider missing arguments or files
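For example, a command can start as a bare prompt and grow frontmatter once it proves useful. A sketch of both stages for a hypothetical /changelog command:
```markdown
Summarize recent changes in this repository as a changelog entry.
```
Later, add frontmatter and context gathering:
```markdown
---
description: Draft a changelog entry from recent commits
allowed-tools: Bash(git:*)
---
Recent commits: !`git log --oneline -10`
Draft a changelog entry grouping changes by type (feat/fix/docs).
```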


@@ -0,0 +1,722 @@
# Advanced Workflow Patterns
Multi-step command sequences and composition patterns for complex workflows.
## Overview
Advanced workflows combine multiple commands, coordinate state across invocations, and create sophisticated automation sequences. These patterns enable building complex functionality from simple command building blocks.
## Multi-Step Command Patterns
### Sequential Workflow Command
Commands that guide users through multi-step processes:
```markdown
---
description: Complete PR review workflow
argument-hint: [pr-number]
allowed-tools: Bash(gh:*), Read, Grep
---
# PR Review Workflow for #$1
## Step 1: Fetch PR Details
!`gh pr view $1 --json title,body,author,files`
## Step 2: Review Files
Files changed: !`gh pr diff $1 --name-only`
For each file:
- Check code quality
- Verify tests exist
- Review documentation
## Step 3: Run Checks
Test status: !`gh pr checks $1`
Verify:
- All tests passing
- No merge conflicts
- CI/CD successful
## Step 4: Provide Feedback
Summarize:
- Issues found (critical/minor)
- Suggestions for improvement
- Approval recommendation
Would you like to:
1. Approve PR
2. Request changes
3. Leave comments only
Reply with your choice and I'll help complete the action.
```
**Key features:**
- Numbered steps for clarity
- Bash execution for context
- Decision points for user input
- Next action suggestions
### State-Carrying Workflow
Commands that maintain state between invocations:
```markdown
---
description: Initialize deployment workflow
allowed-tools: Write, Bash(git:*)
---
# Initialize Deployment
Creating deployment tracking file...
Current branch: !`git branch --show-current`
Latest commit: !`git log -1 --format=%H`
Deployment state saved to `.claude/deployment-state.local.md`:
\`\`\`markdown
---
initialized: true
branch: $(git branch --show-current)
commit: $(git log -1 --format=%H)
timestamp: $(date -u +%Y-%m-%dT%H:%M:%SZ)
status: initialized
---
# Deployment Tracking
Branch: $(git branch --show-current)
Started: $(date)
Next steps:
1. Run tests: /deploy-test
2. Build: /deploy-build
3. Deploy: /deploy-execute
\`\`\`
State saved. Run `/deploy-test` to continue.
```
**Next command** (`/deploy-test`):
```markdown
---
description: Run deployment tests
allowed-tools: Read, Bash(npm:*)
---
Reading deployment state from `.claude/deployment-state.local.md`...
Running tests: !`npm test`
Updating state to 'tested'...
Tests complete. Run `/deploy-build` to continue.
```
**Pattern benefits:**
- Persistent state across commands
- Clear workflow progression
- Safety checkpoints
- Resume capability
### Conditional Workflow Branching
Commands that adapt based on conditions:
```markdown
---
description: Smart deployment workflow
argument-hint: [environment]
allowed-tools: Bash(git:*), Bash(npm:*), Read
---
# Deploy to $1
## Pre-flight Checks
Branch: !`git branch --show-current`
Status: !`git status --short`
**Checking conditions:**
1. Branch status:
- If main/master: Require approval
- If feature branch: Warning about target
- If hotfix: Fast-track process
2. Tests:
!`npm test`
- If tests fail: STOP - fix tests first
- If tests pass: Continue
3. Environment:
- If $1 = 'production': Extra validation
- If $1 = 'staging': Standard process
- If $1 = 'dev': Minimal checks
**Workflow decision:**
Based on above, proceeding with: [determined workflow]
[Conditional steps based on environment and status]
Ready to deploy? (yes/no)
```
## Command Composition Patterns
### Command Chaining
Commands designed to work together:
```markdown
---
description: Prepare for code review
---
# Prepare Code Review
Running preparation sequence:
1. Format code: /format-code
2. Run linter: /lint-code
3. Run tests: /test-all
4. Generate coverage: /coverage-report
5. Create review summary: /review-summary
This is a meta-command. After completing each step above,
I'll compile results and prepare comprehensive review materials.
Starting sequence...
```
**Individual commands** are simple:
- `/format-code` - Just formats
- `/lint-code` - Just lints
- `/test-all` - Just tests
**Composition command** orchestrates them.
### Pipeline Pattern
Commands that process output from previous commands:
```markdown
---
description: Analyze test failures
---
# Analyze Test Failures
## Step 1: Get test results
(Run /test-all first if not done)
Reading test output...
## Step 2: Categorize failures
- Flaky tests (random failures)
- Consistent failures
- New failures vs existing
## Step 3: Prioritize
Rank by:
- Impact (critical path vs edge case)
- Frequency (always fails vs sometimes)
- Effort (quick fix vs major work)
## Step 4: Generate fix plan
For each failure:
- Root cause hypothesis
- Suggested fix approach
- Estimated effort
Would you like me to:
1. Fix highest priority failure
2. Generate detailed fix plans for all
3. Create GitHub issues for each
```
### Parallel Execution Pattern
Commands that coordinate multiple simultaneous operations:
```markdown
---
description: Run comprehensive validation
allowed-tools: Bash(*), Read
---
# Comprehensive Validation
Running validations in parallel...
Starting:
- Code quality checks
- Security scanning
- Dependency audit
- Performance profiling
This will take 2-3 minutes. I'll monitor all processes
and report when complete.
[Poll each process and report progress]
All validations complete. Summary:
- Quality: PASS (0 issues)
- Security: WARN (2 minor issues)
- Dependencies: PASS
- Performance: PASS (baseline met)
Details:
[Collated results from all checks]
```
## Workflow State Management
### Using .local.md Files
Store workflow state in plugin-specific files:
```markdown
.claude/plugin-name-workflow.local.md:
---
workflow: deployment
stage: testing
started: 2025-01-15T10:30:00Z
environment: staging
branch: feature/new-api
commit: abc123def
tests_passed: false
build_complete: false
---
# Deployment Workflow State
Current stage: Testing
Started: 2025-01-15 10:30 UTC
Completed steps:
- ✅ Validation
- ✅ Branch check
- ⏳ Testing (in progress)
Pending steps:
- Build
- Deploy
- Smoke tests
```
**Reading state in commands:**
```markdown
---
description: Continue deployment workflow
allowed-tools: Read, Write
---
Reading workflow state from .claude/plugin-name-workflow.local.md...
Current stage: @.claude/plugin-name-workflow.local.md
[Parse YAML frontmatter to determine next step]
Next action based on state: [determined action]
```
### Workflow Recovery
Handle interrupted workflows:
```markdown
---
description: Resume deployment workflow
allowed-tools: Read
---
# Resume Deployment
Checking for interrupted workflow...
State file: @.claude/plugin-name-workflow.local.md
**Workflow found:**
- Started: [timestamp]
- Environment: [env]
- Last completed: [step]
**Recovery options:**
1. Resume from last step
2. Restart from beginning
3. Abort and clean up
Which would you like? (1/2/3)
```
## Workflow Coordination Patterns
### Cross-Command Communication
Commands that signal each other:
```markdown
---
description: Mark feature complete
allowed-tools: Write
---
# Mark Feature Complete
Writing completion marker...
Creating: .claude/feature-complete.flag
This signals other commands that feature is ready for:
- Integration testing (/integration-test will auto-detect)
- Documentation generation (/docs-generate will include)
- Release notes (/release-notes will add)
Feature marked complete.
```
**Other commands check for flag:**
```markdown
---
description: Generate release notes
allowed-tools: Read, Bash(git:*)
---
Checking for completed features...
if [ -f .claude/feature-complete.flag ]; then
Feature ready for release notes
fi
[Include in release notes]
```
### Workflow Locking
Prevent concurrent workflow execution:
```markdown
---
description: Start deployment
allowed-tools: Read, Write, Bash
---
# Start Deployment
Checking for active deployments...
if [ -f .claude/deployment.lock ]; then
ERROR: Deployment already in progress
Started: [timestamp from lock file]
Cannot start concurrent deployment.
Wait for completion or run /deployment-abort
Exit.
fi
Creating deployment lock...
Deployment started. Lock created.
[Proceed with deployment]
```
**Lock cleanup:**
```markdown
---
description: Complete deployment
allowed-tools: Write, Bash
---
Deployment complete.
Removing deployment lock...
rm .claude/deployment.lock
Ready for next deployment.
```
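One caveat: checking for a lock file and then creating it leaves a race window between the two steps. A more robust sketch acquires the lock atomically with `mkdir`:
```bash
# mkdir fails if the directory already exists, so the check and
# the lock creation happen in a single atomic step.
if mkdir .claude/deployment.lock.d 2>/dev/null; then
  date -u +%Y-%m-%dT%H:%M:%SZ > .claude/deployment.lock.d/started
  echo "Lock acquired"
else
  echo "ERROR: deployment already in progress"
  exit 1
fi
```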
## Advanced Argument Handling
### Optional Arguments with Defaults
```markdown
---
description: Deploy with optional version
argument-hint: [environment] [version]
---
Environment: ${1:-staging}
Version: ${2:-latest}
Deploying ${2:-latest} to ${1:-staging}...
Note: Using defaults for missing arguments:
- Environment defaults to 'staging'
- Version defaults to 'latest'
```
### Argument Validation
```markdown
---
description: Deploy to validated environment
argument-hint: [environment]
---
Environment: $1
Validating environment...
valid_envs="dev staging production"
if ! echo "$valid_envs" | grep -w "$1" > /dev/null; then
ERROR: Invalid environment '$1'
Valid options: dev, staging, production
Exit.
fi
Environment validated. Proceeding...
```
### Argument Transformation
```markdown
---
description: Deploy with shorthand
argument-hint: [env-shorthand]
---
Input: $1
Expanding shorthand:
- d/dev → development
- s/stg → staging
- p/prod → production
case "$1" in
d|dev) ENV="development";;
s|stg) ENV="staging";;
p|prod) ENV="production";;
*) ENV="$1";;
esac
Deploying to: $ENV
```
## Error Handling in Workflows
### Graceful Failure
```markdown
---
description: Resilient deployment workflow
---
# Deployment Workflow
Running steps with error handling...
## Step 1: Tests
!`npm test`
if [ $? -ne 0 ]; then
ERROR: Tests failed
Options:
1. Fix tests and retry
2. Skip tests (NOT recommended)
3. Abort deployment
What would you like to do?
[Wait for user input before continuing]
fi
## Step 2: Build
[Continue only if Step 1 succeeded]
```
### Rollback on Failure
```markdown
---
description: Deployment with rollback
---
# Deploy with Rollback
Saving current state for rollback...
Previous version: !`current-version.sh`
Deploying new version...
!`deploy.sh`
if [ $? -ne 0 ]; then
DEPLOYMENT FAILED
Initiating automatic rollback...
!`rollback.sh`
Rolled back to previous version.
Check logs for failure details.
fi
Deployment complete.
```
### Checkpoint Recovery
```markdown
---
description: Workflow with checkpoints
---
# Multi-Stage Deployment
## Checkpoint 1: Validation
!`validate.sh`
echo "checkpoint:validation" >> .claude/deployment-checkpoints.log
## Checkpoint 2: Build
!`build.sh`
echo "checkpoint:build" >> .claude/deployment-checkpoints.log
## Checkpoint 3: Deploy
!`deploy.sh`
echo "checkpoint:deploy" >> .claude/deployment-checkpoints.log
If any step fails, resume with:
/deployment-resume [last-successful-checkpoint]
```
## Best Practices
### Workflow Design
1. **Clear progression**: Number steps, show current position
2. **Explicit state**: Don't rely on implicit state
3. **User control**: Provide decision points
4. **Error recovery**: Handle failures gracefully
5. **Progress indication**: Show what's done, what's pending
### Command Composition
1. **Single responsibility**: Each command does one thing well
2. **Composable design**: Commands work together easily
3. **Standard interfaces**: Consistent input/output formats
4. **Loose coupling**: Commands don't depend on each other's internals
### State Management
1. **Persistent state**: Use .local.md files
2. **Atomic updates**: Write complete state files atomically (see the sketch after this list)
3. **State validation**: Check state file format/completeness
4. **Cleanup**: Remove stale state files
5. **Documentation**: Document state file formats
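For atomic updates, the standard trick is write-then-rename: a rename within one filesystem is atomic, so readers never observe a half-written state file. A minimal sketch:
```bash
STATE=".claude/deployment-state.local.md"
TMP="$STATE.tmp.$$"

# Write the complete new state to a temporary file first...
cat > "$TMP" <<'EOF'
---
workflow: deployment
stage: validated
---
EOF

# ...then publish it in one step.
mv "$TMP" "$STATE"
```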
### Error Handling
1. **Fail fast**: Detect errors early
2. **Clear messages**: Explain what went wrong
3. **Recovery options**: Provide clear next steps
4. **State preservation**: Keep state for recovery
5. **Rollback capability**: Support undoing changes
## Example: Complete Deployment Workflow
### Initialize Command
```markdown
---
description: Initialize deployment
argument-hint: [environment]
allowed-tools: Write, Bash(git:*)
---
# Initialize Deployment to $1
Creating workflow state...
\`\`\`yaml
---
workflow: deployment
environment: $1
branch: !`git branch --show-current`
commit: !`git rev-parse HEAD`
stage: initialized
timestamp: !`date -u +%Y-%m-%dT%H:%M:%SZ`
---
\`\`\`
Written to .claude/deployment-state.local.md
Next: Run /deployment-validate
```
### Validation Command
```markdown
---
description: Validate deployment
allowed-tools: Read, Bash
---
Reading state: @.claude/deployment-state.local.md
Running validation...
- Branch check: PASS
- Tests: PASS
- Build: PASS
Updating state to 'validated'...
Next: Run /deployment-execute
```
### Execution Command
```markdown
---
description: Execute deployment
allowed-tools: Read, Bash, Write
---
Reading state: @.claude/deployment-state.local.md
Executing deployment to [environment]...
!`deploy.sh [environment]`
Deployment complete.
Updating state to 'completed'...
Cleanup: /deployment-cleanup
```
### Cleanup Command
```markdown
---
description: Clean up deployment
allowed-tools: Bash
---
Removing deployment state...
rm .claude/deployment-state.local.md
Deployment workflow complete.
```
This complete workflow demonstrates state management, sequential execution, error handling, and clean separation of concerns across multiple commands.

View File

@@ -0,0 +1,739 @@
# Command Documentation Patterns
Strategies for creating self-documenting, maintainable commands with excellent user experience.
## Overview
Well-documented commands are easier to use, maintain, and distribute. Documentation should be embedded in the command itself, making it immediately accessible to users and maintainers.
## Self-Documenting Command Structure
### Complete Command Template
```markdown
---
description: Clear, actionable description under 60 chars
argument-hint: [arg1] [arg2] [optional-arg]
allowed-tools: Read, Bash(git:*)
model: sonnet
---
<!--
COMMAND: command-name
VERSION: 1.0.0
AUTHOR: Team Name
LAST UPDATED: 2025-01-15
PURPOSE:
Detailed explanation of what this command does and why it exists.
USAGE:
/command-name arg1 arg2
ARGUMENTS:
arg1: Description of first argument (required)
arg2: Description of second argument (optional, defaults to X)
EXAMPLES:
/command-name feature-branch main
→ Compares feature-branch with main
/command-name my-branch
→ Compares my-branch with current branch
REQUIREMENTS:
- Git repository
- Branch must exist
- Permissions to read repository
RELATED COMMANDS:
/other-command - Related functionality
/another-command - Alternative approach
TROUBLESHOOTING:
- If branch not found: Check branch name spelling
- If permission denied: Check repository access
CHANGELOG:
v1.0.0 (2025-01-15): Initial release
v0.9.0 (2025-01-10): Beta version
-->
# Command Implementation
[Command prompt content here...]
[Explain what will happen...]
[Guide user through steps...]
[Provide clear output...]
```
### Documentation Comment Sections
**PURPOSE**: Why the command exists
- Problem it solves
- Use cases
- When to use vs when not to use
**USAGE**: Basic syntax
- Command invocation pattern
- Required vs optional arguments
- Default values
**ARGUMENTS**: Detailed argument documentation
- Each argument described
- Type information
- Valid values/ranges
- Defaults
**EXAMPLES**: Concrete usage examples
- Common use cases
- Edge cases
- Expected outputs
**REQUIREMENTS**: Prerequisites
- Dependencies
- Permissions
- Environmental setup
**RELATED COMMANDS**: Connections
- Similar commands
- Complementary commands
- Alternative approaches
**TROUBLESHOOTING**: Common issues
- Known problems
- Solutions
- Workarounds
**CHANGELOG**: Version history
- What changed when
- Breaking changes highlighted
- Migration guidance
## In-Line Documentation Patterns
### Commented Sections
```markdown
---
description: Complex multi-step command
---
<!-- SECTION 1: VALIDATION -->
<!-- This section checks prerequisites before proceeding -->
Checking prerequisites...
- Git repository: !`git rev-parse --git-dir 2>/dev/null`
- Branch exists: [validation logic]
<!-- SECTION 2: ANALYSIS -->
<!-- Analyzes the differences between branches -->
Analyzing differences between $1 and $2...
[Analysis logic...]
<!-- SECTION 3: RECOMMENDATIONS -->
<!-- Provides actionable recommendations -->
Based on analysis, recommend:
[Recommendations...]
<!-- END: Next steps for user -->
```
### Inline Explanations
```markdown
---
description: Deployment command with inline docs
---
# Deploy to $1
## Pre-flight Checks
<!-- We check branch status to prevent deploying from wrong branch -->
Current branch: !`git branch --show-current`
<!-- Production deploys must come from main/master -->
if [ "$1" = "production" ] && [ "$(git branch --show-current)" != "main" ]; then
⚠️ WARNING: Not on main branch for production deploy
This is unusual. Confirm this is intentional.
fi
<!-- Test status ensures we don't deploy broken code -->
Running tests: !`npm test`
✓ All checks passed
## Deployment
<!-- Actual deployment happens here -->
<!-- Uses blue-green strategy for zero-downtime -->
Deploying to $1 environment...
[Deployment steps...]
<!-- Post-deployment verification -->
Verifying deployment health...
[Health checks...]
Deployment complete!
## Next Steps
<!-- Guide user on what to do after deployment -->
1. Monitor logs: /logs $1
2. Run smoke tests: /smoke-test $1
3. Notify team: /notify-deployment $1
```
### Decision Point Documentation
```markdown
---
description: Interactive deployment command
---
# Interactive Deployment
## Configuration Review
Target: $1
Current version: !`cat version.txt`
New version: $2
<!-- DECISION POINT: User confirms configuration -->
<!-- This pause allows user to verify everything is correct -->
<!-- We can't automatically proceed because deployment is risky -->
Review the above configuration.
**Continue with deployment?**
- Reply "yes" to proceed
- Reply "no" to cancel
- Reply "edit" to modify configuration
[Await user input before continuing...]
<!-- After user confirms, we proceed with deployment -->
<!-- All subsequent steps are automated -->
Proceeding with deployment...
```
## Help Text Patterns
### Built-in Help Command
Create a help subcommand for complex commands:
```markdown
---
description: Main command with help
argument-hint: [subcommand] [args]
---
# Command Processor
if [ "$1" = "help" ] || [ "$1" = "--help" ] || [ "$1" = "-h" ]; then
**Command Help**
USAGE:
/command [subcommand] [args]
SUBCOMMANDS:
init [name] Initialize new configuration
deploy [env] Deploy to environment
status Show current status
rollback Rollback last deployment
help Show this help
EXAMPLES:
/command init my-project
/command deploy staging
/command status
/command rollback
For detailed help on a subcommand:
/command [subcommand] --help
Exit.
fi
[Regular command processing...]
```
### Contextual Help
Provide help based on context:
```markdown
---
description: Context-aware command
argument-hint: [operation] [target]
---
# Context-Aware Operation
if [ -z "$1" ]; then
**No operation specified**
Available operations:
- analyze: Analyze target for issues
- fix: Apply automatic fixes
- report: Generate detailed report
Usage: /command [operation] [target]
Examples:
/command analyze src/
/command fix src/app.js
/command report
Run /command help for more details.
Exit.
fi
[Command continues if operation provided...]
```
## Error Message Documentation
### Helpful Error Messages
```markdown
---
description: Command with good error messages
---
# Validation Command
if [ -z "$1" ]; then
❌ ERROR: Missing required argument
The 'file-path' argument is required.
USAGE:
/validate [file-path]
EXAMPLE:
/validate src/app.js
Try again with a file path.
Exit.
fi
if [ ! -f "$1" ]; then
❌ ERROR: File not found: $1
The specified file does not exist or is not accessible.
COMMON CAUSES:
1. Typo in file path
2. File was deleted or moved
3. Insufficient permissions
SUGGESTIONS:
- Check spelling: $1
- Verify file exists: ls -la $(dirname "$1")
- Check permissions: ls -l "$1"
Exit.
fi
[Command continues if validation passes...]
```
### Error Recovery Guidance
```markdown
---
description: Command with recovery guidance
---
# Operation Command
Running operation...
!`risky-operation.sh`
if [ $? -ne 0 ]; then
❌ OPERATION FAILED
The operation encountered an error and could not complete.
WHAT HAPPENED:
The risky-operation.sh script returned a non-zero exit code.
WHAT THIS MEANS:
- Changes may be partially applied
- System may be in inconsistent state
- Manual intervention may be needed
RECOVERY STEPS:
1. Check operation logs: cat /tmp/operation.log
2. Verify system state: /check-state
3. If needed, rollback: /rollback-operation
4. Fix underlying issue
5. Retry operation: /retry-operation
NEED HELP?
- Check troubleshooting guide: /help troubleshooting
- Contact support with error code: ERR_OP_FAILED_001
Exit.
fi
```
## Usage Example Documentation
### Embedded Examples
```markdown
---
description: Command with embedded examples
---
# Feature Command
This command performs feature analysis with multiple options.
## Basic Usage
\`\`\`
/feature analyze src/
\`\`\`
Analyzes all files in src/ directory for feature usage.
## Advanced Usage
\`\`\`
/feature analyze src/ --detailed
\`\`\`
Provides detailed analysis including:
- Feature breakdown by file
- Usage patterns
- Optimization suggestions
## Use Cases
**Use Case 1: Quick overview**
\`\`\`
/feature analyze .
\`\`\`
Get high-level feature summary of entire project.
**Use Case 2: Specific directory**
\`\`\`
/feature analyze src/components
\`\`\`
Focus analysis on components directory only.
**Use Case 3: Comparison**
\`\`\`
/feature analyze src/ --compare baseline.json
\`\`\`
Compare current features against baseline.
---
Now processing your request...
[Command implementation...]
```
### Example-Driven Documentation
```markdown
---
description: Example-heavy command
---
# Transformation Command
## What This Does
Transforms data from one format to another.
## Examples First
### Example 1: JSON to YAML
**Input:** `data.json`
\`\`\`json
{"name": "test", "value": 42}
\`\`\`
**Command:** `/transform data.json yaml`
**Output:** `data.yaml`
\`\`\`yaml
name: test
value: 42
\`\`\`
### Example 2: CSV to JSON
**Input:** `data.csv`
\`\`\`csv
name,value
test,42
\`\`\`
**Command:** `/transform data.csv json`
**Output:** `data.json`
\`\`\`json
[{"name": "test", "value": "42"}]
\`\`\`
### Example 3: With Options
**Command:** `/transform data.json yaml --pretty --sort-keys`
**Result:** Formatted YAML with sorted keys
---
## Your Transformation
File: $1
Format: $2
[Perform transformation...]
```
## Maintenance Documentation
### Version and Changelog
```markdown
<!--
VERSION: 2.1.0
LAST UPDATED: 2025-01-15
AUTHOR: DevOps Team
CHANGELOG:
v2.1.0 (2025-01-15):
- Added support for YAML configuration
- Improved error messages
- Fixed bug with special characters in arguments
v2.0.0 (2025-01-01):
- BREAKING: Changed argument order
- BREAKING: Removed deprecated --old-flag
- Added new validation checks
- Migration guide: /migration-v2
v1.5.0 (2024-12-15):
- Added --verbose flag
- Improved performance by 50%
v1.0.0 (2024-12-01):
- Initial stable release
MIGRATION NOTES:
From v1.x to v2.0:
Old: /command arg1 arg2 --old-flag
New: /command arg2 arg1
The --old-flag is removed. Use --new-flag instead.
DEPRECATION WARNINGS:
- The --legacy-mode flag is deprecated as of v2.1.0
- Will be removed in v3.0.0 (estimated 2025-06-01)
- Use --modern-mode instead
KNOWN ISSUES:
- #123: Slow performance with large files (workaround: use --stream flag)
- #456: Special characters in Windows (fix planned for v2.2.0)
-->
```
### Maintenance Notes
```markdown
<!--
MAINTENANCE NOTES:
CODE STRUCTURE:
- Lines 1-50: Argument parsing and validation
- Lines 51-100: Main processing logic
- Lines 101-150: Output formatting
- Lines 151-200: Error handling
DEPENDENCIES:
- Requires git 2.x or later
- Uses jq for JSON processing
- Needs bash 4.0+ for associative arrays
PERFORMANCE:
- Fast path for small inputs (< 1MB)
- Streams large files to avoid memory issues
- Caches results in /tmp for 1 hour
SECURITY CONSIDERATIONS:
- Validates all inputs to prevent injection
- Uses allowed-tools to limit Bash access
- No credentials in command file
TESTING:
- Unit tests: tests/command-test.sh
- Integration tests: tests/integration/
- Manual test checklist: tests/manual-checklist.md
FUTURE IMPROVEMENTS:
- TODO: Add support for TOML format
- TODO: Implement parallel processing
- TODO: Add progress bar for large files
RELATED FILES:
- lib/parser.sh: Shared parsing logic
- lib/formatter.sh: Output formatting
- config/defaults.yml: Default configuration
-->
```
## README Documentation
Commands should have companion README files:
```markdown
# Command Name
Brief description of what the command does.
## Installation
This command is part of the [plugin-name] plugin.
Install with:
\`\`\`
/plugin install plugin-name
\`\`\`
## Usage
Basic usage:
\`\`\`
/command-name [arg1] [arg2]
\`\`\`
## Arguments
- `arg1`: Description (required)
- `arg2`: Description (optional, defaults to X)
## Examples
### Example 1: Basic Usage
\`\`\`
/command-name value1 value2
\`\`\`
Description of what happens.
### Example 2: Advanced Usage
\`\`\`
/command-name value1 --option
\`\`\`
Description of advanced feature.
## Configuration
Optional configuration file: `.claude/command-name.local.md`
\`\`\`markdown
---
default_arg: value
enable_feature: true
---
\`\`\`
## Requirements
- Git 2.x or later
- jq (for JSON processing)
- Node.js 14+ (optional, for advanced features)
## Troubleshooting
### Issue: Command not found
**Solution:** Ensure plugin is installed and enabled.
### Issue: Permission denied
**Solution:** Check file permissions and allowed-tools setting.
## Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md).
## License
MIT License - See [LICENSE](LICENSE).
## Support
- Issues: https://github.com/user/plugin/issues
- Docs: https://docs.example.com
- Email: support@example.com
```
## Best Practices
### Documentation Principles
1. **Write for your future self**: Assume you'll forget details
2. **Examples before explanations**: Show, then tell
3. **Progressive disclosure**: Basic info first, details available
4. **Keep it current**: Update docs when code changes
5. **Test your docs**: Verify examples actually work
### Documentation Locations
1. **In command file**: Core usage, examples, inline explanations
2. **README**: Installation, configuration, troubleshooting
3. **Separate docs**: Detailed guides, tutorials, API reference
4. **Comments**: Implementation details for maintainers
### Documentation Style
1. **Clear and concise**: No unnecessary words
2. **Active voice**: "Run the command" not "The command can be run"
3. **Consistent terminology**: Use same terms throughout
4. **Formatted well**: Use headings, lists, code blocks
5. **Accessible**: Assume the reader may be a beginner
### Documentation Maintenance
1. **Version everything**: Track what changed when
2. **Deprecate gracefully**: Warn before removing features
3. **Migration guides**: Help users upgrade
4. **Archive old docs**: Keep old versions accessible
5. **Review regularly**: Ensure docs match reality
## Documentation Checklist
Before releasing a command:
- [ ] Description in frontmatter is clear
- [ ] argument-hint documents all arguments
- [ ] Usage examples in comments
- [ ] Common use cases shown
- [ ] Error messages are helpful
- [ ] Requirements documented
- [ ] Related commands listed
- [ ] Changelog maintained
- [ ] Version number updated
- [ ] README created/updated
- [ ] Examples actually work
- [ ] Troubleshooting section complete
With good documentation, commands become self-service, reducing support burden and improving user experience.

View File

@@ -0,0 +1,463 @@
# Command Frontmatter Reference
Complete reference for YAML frontmatter fields in slash commands.
## Frontmatter Overview
YAML frontmatter is optional metadata at the start of command files:
```markdown
---
description: Brief description
allowed-tools: Read, Write
model: sonnet
argument-hint: [arg1] [arg2]
---
Command prompt content here...
```
All fields are optional. Commands work without any frontmatter.
## Field Specifications
### description
**Type:** String
**Required:** No
**Default:** First line of command prompt
**Max Length:** ~60 characters recommended for `/help` display
**Purpose:** Describes what the command does, shown in `/help` output
**Examples:**
```yaml
description: Review code for security issues
```
```yaml
description: Deploy to staging environment
```
```yaml
description: Generate API documentation
```
**Best practices:**
- Keep under 60 characters for clean display
- Start with verb (Review, Deploy, Generate)
- Be specific about what command does
- Avoid redundant "command" or "slash command"
**Good:**
- ✅ "Review PR for code quality and security"
- ✅ "Deploy application to specified environment"
- ✅ "Generate comprehensive API documentation"
**Bad:**
- ❌ "This command reviews PRs" (unnecessary "This command")
- ❌ "Review" (too vague)
- ❌ "A command that reviews pull requests for code quality, security issues, and best practices" (too long)
### allowed-tools
**Type:** String or Array of strings
**Required:** No
**Default:** Inherits from conversation permissions
**Purpose:** Restrict or specify which tools command can use
**Formats:**
**Single tool:**
```yaml
allowed-tools: Read
```
**Multiple tools (comma-separated):**
```yaml
allowed-tools: Read, Write, Edit
```
**Multiple tools (array):**
```yaml
allowed-tools:
- Read
- Write
- Bash(git:*)
```
**Tool Patterns:**
**Specific tools:**
```yaml
allowed-tools: Read, Grep, Edit
```
**Bash with command filter:**
```yaml
allowed-tools: Bash(git:*) # Only git commands
allowed-tools: Bash(npm:*) # Only npm commands
allowed-tools: Bash(docker:*) # Only docker commands
```
**All tools (not recommended):**
```yaml
allowed-tools: "*"
```
**When to use:**
1. **Security:** Restrict command to safe operations
```yaml
allowed-tools: Read, Grep # Read-only command
```
2. **Clarity:** Document required tools
```yaml
allowed-tools: Bash(git:*), Read
```
3. **Bash execution:** Enable bash command output
```yaml
allowed-tools: Bash(git status:*), Bash(git diff:*)
```
**Best practices:**
- Be as restrictive as possible
- Use command filters for Bash (e.g., `git:*` not `*`)
- Only specify when different from conversation permissions
- Document why specific tools are needed
### model
**Type:** String
**Required:** No
**Default:** Inherits from conversation
**Values:** `sonnet`, `opus`, `haiku`
**Purpose:** Specify which Claude model executes the command
**Examples:**
```yaml
model: haiku # Fast, efficient for simple tasks
```
```yaml
model: sonnet # Balanced performance (default)
```
```yaml
model: opus # Maximum capability for complex tasks
```
**When to use:**
**Use `haiku` for:**
- Simple, formulaic commands
- Fast execution needed
- Low complexity tasks
- Frequent invocations
```yaml
---
description: Format code file
model: haiku
---
```
**Use `sonnet` for:**
- Standard commands (default)
- Balanced speed/quality
- Most common use cases
```yaml
---
description: Review code changes
model: sonnet
---
```
**Use `opus` for:**
- Complex analysis
- Architectural decisions
- Deep code understanding
- Critical tasks
```yaml
---
description: Analyze system architecture
model: opus
---
```
**Best practices:**
- Omit unless specific need
- Use `haiku` for speed when possible
- Reserve `opus` for genuinely complex tasks
- Test with different models to find the right balance
### argument-hint
**Type:** String
**Required:** No
**Default:** None
**Purpose:** Document expected arguments for users and autocomplete
**Format:**
```yaml
argument-hint: [arg1] [arg2] [optional-arg]
```
**Examples:**
**Single argument:**
```yaml
argument-hint: [pr-number]
```
**Multiple required arguments:**
```yaml
argument-hint: [environment] [version]
```
**Optional arguments:**
```yaml
argument-hint: [file-path] [options]
```
**Descriptive names:**
```yaml
argument-hint: [source-branch] [target-branch] [commit-message]
```
**Best practices:**
- Use square brackets `[]` for each argument
- Use descriptive names (not `arg1`, `arg2`)
- Indicate optional vs required in description
- Match order to positional arguments in command
- Keep concise but clear
**Examples by pattern:**
**Simple command:**
```yaml
---
description: Fix issue by number
argument-hint: [issue-number]
---
Fix issue #$1...
```
**Multi-argument:**
```yaml
---
description: Deploy to environment
argument-hint: [app-name] [environment] [version]
---
Deploy $1 to $2 using version $3...
```
**With options:**
```yaml
---
description: Run tests with options
argument-hint: [test-pattern] [options]
---
Run tests matching $1 with options: $2
```
### disable-model-invocation
**Type:** Boolean
**Required:** No
**Default:** false
**Purpose:** Prevent SlashCommand tool from programmatically invoking command
**Examples:**
```yaml
disable-model-invocation: true
```
**When to use:**
1. **Manual-only commands:** Commands requiring user judgment
```yaml
---
description: Approve deployment to production
disable-model-invocation: true
---
```
2. **Destructive operations:** Commands with irreversible effects
```yaml
---
description: Delete all test data
disable-model-invocation: true
---
```
3. **Interactive workflows:** Commands needing user input
```yaml
---
description: Walk through setup wizard
disable-model-invocation: true
---
```
**Default behavior (false):**
- Command available to SlashCommand tool
- Claude can invoke programmatically
- Still available for manual invocation
**When true:**
- Command only invokable by user typing `/command`
- Not available to SlashCommand tool
- Safer for sensitive operations
**Best practices:**
- Use sparingly (limits Claude's autonomy)
- Document why in command comments
- Consider if command should exist if always manual
## Complete Examples
### Minimal Command
No frontmatter needed:
```markdown
Review this code for common issues and suggest improvements.
```
### Simple Command
Just description:
```markdown
---
description: Review code for issues
---
Review this code for common issues and suggest improvements.
```
### Standard Command
Description and tools:
```markdown
---
description: Review Git changes
allowed-tools: Bash(git:*), Read
---
Current changes: !`git diff --name-only`
Review each changed file for:
- Code quality
- Potential bugs
- Best practices
```
### Complex Command
All common fields:
```markdown
---
description: Deploy application to environment
argument-hint: [app-name] [environment] [version]
allowed-tools: Bash(kubectl:*), Bash(helm:*), Read
model: sonnet
---
Deploy $1 to $2 environment using version $3
Pre-deployment checks:
- Verify $2 configuration
- Check cluster status: !`kubectl cluster-info`
- Validate version $3 exists
Proceed with deployment following deployment runbook.
```
### Manual-Only Command
Restricted invocation:
```markdown
---
description: Approve production deployment
argument-hint: [deployment-id]
disable-model-invocation: true
allowed-tools: Bash(gh:*)
---
<!--
MANUAL APPROVAL REQUIRED
This command requires human judgment and cannot be automated.
-->
Review deployment $1 for production approval:
Deployment details: !`gh api /deployments/$1`
Verify:
- All tests passed
- Security scan clean
- Stakeholder approval
- Rollback plan ready
Type "APPROVED" to confirm deployment.
```
## Validation
### Common Errors
**Invalid YAML syntax:**
```yaml
---
description: "Missing closing quote  # ❌ Unterminated string
allowed-tools: Read, Write
model: sonnet
---
```
**Fix:** Validate YAML syntax
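To catch syntax problems before committing, the frontmatter block can be extracted and fed to any YAML parser. A sketch, assuming `python3` with PyYAML is installed and the command lives in a hypothetical `command.md`:
```bash
# Print only the lines between the first pair of '---' markers,
# then try to parse them as YAML (PyYAML assumed available).
awk '/^---$/ { n++; next } n == 1' command.md \
  | python3 -c 'import sys, yaml; yaml.safe_load(sys.stdin)' \
  && echo "frontmatter OK"
```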
**Unscoped tool specification:**
```yaml
allowed-tools: Bash # ⚠️ Valid, but allows every bash command
```
**Fix:** Scope with a filter, e.g. `Bash(git:*)`
**Invalid model name:**
```yaml
model: gpt4 # ❌ Not a valid Claude model
```
**Fix:** Use `sonnet`, `opus`, or `haiku`
### Validation Checklist
Before committing command:
- [ ] YAML syntax valid (no errors)
- [ ] Description under 60 characters
- [ ] allowed-tools uses proper format
- [ ] model is valid value if specified
- [ ] argument-hint matches positional arguments
- [ ] disable-model-invocation used appropriately
## Best Practices Summary
1. **Start minimal:** Add frontmatter only when needed
2. **Document arguments:** Always use argument-hint with arguments
3. **Restrict tools:** Use most restrictive allowed-tools that works
4. **Choose right model:** Use haiku for speed, opus for complexity
5. **Manual-only sparingly:** Only use disable-model-invocation when necessary
6. **Clear descriptions:** Make commands discoverable in `/help`
7. **Test thoroughly:** Verify frontmatter works as expected

View File

@@ -0,0 +1,920 @@
# Interactive Command Patterns
Comprehensive guide to creating commands that gather user feedback and make decisions through the AskUserQuestion tool.
## Overview
Some commands need user input that doesn't work well with simple arguments. For example:
- Choosing between multiple complex options with trade-offs
- Selecting multiple items from a list
- Making decisions that require explanation
- Gathering preferences or configuration interactively
For these cases, use the **AskUserQuestion tool** within command execution rather than relying on command arguments.
## When to Use AskUserQuestion
### Use AskUserQuestion When:
1. **Multiple choice decisions** with explanations needed
2. **Complex options** that require context to choose
3. **Multi-select scenarios** (choosing multiple items)
4. **Preference gathering** for configuration
5. **Interactive workflows** that adapt based on answers
### Use Command Arguments When:
1. **Simple values** (file paths, numbers, names)
2. **Known inputs** user already has
3. **Scriptable workflows** that should be automatable
4. **Fast invocations** where prompting would slow down
## AskUserQuestion Basics
### Tool Parameters
```typescript
{
questions: [
{
question: "Which authentication method should we use?",
header: "Auth method", // Short label (max 12 chars)
multiSelect: false, // true for multiple selection
options: [
{
label: "OAuth 2.0",
description: "Industry standard, supports multiple providers"
},
{
label: "JWT",
description: "Stateless, good for APIs"
},
{
label: "Session",
description: "Traditional, server-side state"
}
]
}
]
}
```
**Key points:**
- Users can always choose "Other" to provide custom input (automatic)
- `multiSelect: true` allows selecting multiple options
- Options should be 2-4 choices (not more)
- Can ask 1-4 questions per tool call
## Command Pattern for User Interaction
### Basic Interactive Command
```markdown
---
description: Interactive setup command
allowed-tools: AskUserQuestion, Write
---
# Interactive Plugin Setup
This command will guide you through configuring the plugin with a series of questions.
## Step 1: Gather Configuration
Use the AskUserQuestion tool to ask:
**Question 1 - Deployment target:**
- header: "Deploy to"
- question: "Which deployment platform will you use?"
- options:
- AWS (Amazon Web Services with ECS/EKS)
- GCP (Google Cloud with GKE)
- Azure (Microsoft Azure with AKS)
- Local (Docker on local machine)
**Question 2 - Environment strategy:**
- header: "Environments"
- question: "How many environments do you need?"
- options:
- Single (Just production)
- Standard (Dev, Staging, Production)
- Complete (Dev, QA, Staging, Production)
**Question 3 - Features to enable:**
- header: "Features"
- question: "Which features do you want to enable?"
- multiSelect: true
- options:
- Auto-scaling (Automatic resource scaling)
- Monitoring (Health checks and metrics)
- CI/CD (Automated deployment pipeline)
- Backups (Automated database backups)
## Step 2: Process Answers
Based on the answers received from AskUserQuestion:
1. Parse the deployment target choice
2. Set up environment-specific configuration
3. Enable selected features
4. Generate configuration files
## Step 3: Generate Configuration
Create `.claude/plugin-name.local.md` with:
\`\`\`yaml
---
deployment_target: [answer from Q1]
environments: [answer from Q2]
features:
auto_scaling: [true if selected in Q3]
monitoring: [true if selected in Q3]
ci_cd: [true if selected in Q3]
backups: [true if selected in Q3]
---
# Plugin Configuration
Generated: [timestamp]
Target: [deployment_target]
Environments: [environments]
\`\`\`
## Step 4: Confirm and Next Steps
Confirm configuration created and guide user on next steps.
```
### Multi-Stage Interactive Workflow
```markdown
---
description: Multi-stage interactive workflow
allowed-tools: AskUserQuestion, Read, Write, Bash
---
# Multi-Stage Deployment Setup
This command walks through deployment setup in stages, adapting based on your answers.
## Stage 1: Basic Configuration
Use AskUserQuestion to ask about deployment basics.
Based on answers, determine which additional questions to ask.
## Stage 2: Advanced Options (Conditional)
If user selected "Advanced" deployment in Stage 1:
Use AskUserQuestion to ask about:
- Load balancing strategy
- Caching configuration
- Security hardening options
If user selected "Simple" deployment:
- Skip advanced questions
- Use sensible defaults
## Stage 3: Confirmation
Show summary of all selections.
Use AskUserQuestion for final confirmation:
- header: "Confirm"
- question: "Does this configuration look correct?"
- options:
- Yes (Proceed with setup)
- No (Start over)
- Modify (Let me adjust specific settings)
If "Modify", ask which specific setting to change.
## Stage 4: Execute Setup
Based on confirmed configuration, execute setup steps.
```
## Interactive Question Design
### Question Structure
**Good questions:**
```markdown
Question: "Which database should we use for this project?"
Header: "Database"
Options:
- PostgreSQL (Relational, ACID compliant, best for complex queries)
- MongoDB (Document store, flexible schema, best for rapid iteration)
- Redis (In-memory, fast, best for caching and sessions)
```
**Poor questions:**
```markdown
Question: "Database?" // Too vague
Header: "DB" // Unclear abbreviation
Options:
- Option 1 // Not descriptive
- Option 2
```
### Option Design Best Practices
**Clear labels:**
- Use 1-5 words
- Specific and descriptive
- No jargon without context
**Helpful descriptions:**
- Explain what the option means
- Mention key benefits or trade-offs
- Help user make informed decision
- Keep to 1-2 sentences
**Appropriate number:**
- 2-4 options per question
- Don't overwhelm with too many choices
- Group related options
- "Other" automatically provided
### Multi-Select Questions
**When to use multiSelect:**
```markdown
Use AskUserQuestion for enabling features:
Question: "Which features do you want to enable?"
Header: "Features"
multiSelect: true // Allow selecting multiple
Options:
- Logging (Detailed operation logs)
- Metrics (Performance monitoring)
- Alerts (Error notifications)
- Backups (Automatic backups)
```
User can select any combination: none, some, or all.
**When NOT to use multiSelect:**
```markdown
Question: "Which authentication method?"
multiSelect: false // Only one auth method makes sense
```
Mutually exclusive choices should not use multiSelect.
## Command Patterns with AskUserQuestion
### Pattern 1: Simple Yes/No Decision
```markdown
---
description: Command with confirmation
allowed-tools: AskUserQuestion, Bash
---
# Destructive Operation
This operation will delete all cached data.
Use AskUserQuestion to confirm:
Question: "This will delete all cached data. Are you sure?"
Header: "Confirm"
Options:
- Yes (Proceed with deletion)
- No (Cancel operation)
If user selects "Yes":
Execute deletion
Report completion
If user selects "No":
Cancel operation
Exit without changes
```
### Pattern 2: Multiple Configuration Questions
```markdown
---
description: Multi-question configuration
allowed-tools: AskUserQuestion, Write
---
# Project Configuration Setup
Gather configuration through multiple questions.
Use AskUserQuestion with multiple questions in one call:
**Question 1:**
- question: "Which programming language?"
- header: "Language"
- options: Python, TypeScript, Go, Rust
**Question 2:**
- question: "Which test framework?"
- header: "Testing"
- options: Jest, PyTest, Go Test, Cargo Test
(Adapt based on language from Q1)
**Question 3:**
- question: "Which CI/CD platform?"
- header: "CI/CD"
- options: GitHub Actions, GitLab CI, CircleCI
**Question 4:**
- question: "Which features do you need?"
- header: "Features"
- multiSelect: true
- options: Linting, Type checking, Code coverage, Security scanning
Process all answers together to generate cohesive configuration.
```
### Pattern 3: Conditional Question Flow
```markdown
---
description: Conditional interactive workflow
allowed-tools: AskUserQuestion, Read, Write
---
# Adaptive Configuration
## Question 1: Deployment Complexity
Use AskUserQuestion:
Question: "How complex is your deployment?"
Header: "Complexity"
Options:
- Simple (Single server, straightforward)
- Standard (Multiple servers, load balancing)
- Complex (Microservices, orchestration)
## Conditional Questions Based on Answer
If answer is "Simple":
- No additional questions
- Use minimal configuration
If answer is "Standard":
- Ask about load balancing strategy
- Ask about scaling policy
If answer is "Complex":
- Ask about orchestration platform (Kubernetes, Docker Swarm)
- Ask about service mesh (Istio, Linkerd, None)
- Ask about monitoring (Prometheus, Datadog, CloudWatch)
- Ask about logging aggregation
## Process Conditional Answers
Generate configuration appropriate for selected complexity level.
```
### Pattern 4: Iterative Collection
```markdown
---
description: Collect multiple items iteratively
allowed-tools: AskUserQuestion, Write
---
# Collect Team Members
We'll collect team member information for the project.
## Question: How many team members?
Use AskUserQuestion:
Question: "How many team members should we set up?"
Header: "Team size"
Options:
- 2 people
- 3 people
- 4 people
- 6 people
## Iterate Through Team Members
For each team member (1 to N based on answer):
Use AskUserQuestion for member details:
Question: "What role for team member [number]?"
Header: "Role"
Options:
- Frontend Developer
- Backend Developer
- DevOps Engineer
- QA Engineer
- Designer
Store each member's information.
## Generate Team Configuration
After collecting all N members, create team configuration file with all members and their roles.
```
### Pattern 5: Dependency Selection
```markdown
---
description: Select dependencies with multi-select
allowed-tools: AskUserQuestion
---
# Configure Project Dependencies
## Question: Required Libraries
Use AskUserQuestion with multiSelect:
Question: "Which libraries does your project need?"
Header: "Dependencies"
multiSelect: true
Options:
- React (UI framework)
- Express (Web server)
- TypeORM (Database ORM)
- Jest (Testing framework)
- Axios (HTTP client)
User can select any combination.
## Process Selections
For each selected library:
- Add to package.json dependencies
- Generate sample configuration
- Create usage examples
- Update documentation
```
## Best Practices for Interactive Commands
### Question Design
1. **Clear and specific**: Question should be unambiguous
2. **Concise header**: Max 12 characters for clean display
3. **Helpful options**: Labels are clear, descriptions explain trade-offs
4. **Appropriate count**: 2-4 options per question, 1-4 questions per call
5. **Logical order**: Questions flow naturally
### Error Handling
```markdown
# Handle AskUserQuestion Responses
After calling AskUserQuestion, verify answers received:
If answers are empty or invalid:
Something went wrong gathering responses.
Please try again or provide configuration manually:
[Show alternative approach]
Exit.
If answers look correct:
Process as expected
```
### Progressive Disclosure
```markdown
# Start Simple, Get Detailed as Needed
## Question 1: Setup Type
Use AskUserQuestion:
Question: "How would you like to set up?"
Header: "Setup type"
Options:
- Quick (Use recommended defaults)
- Custom (Configure all options)
- Guided (Step-by-step with explanations)
If "Quick":
Apply defaults, minimal questions
If "Custom":
Ask all available configuration questions
If "Guided":
Ask questions with extra explanation
Provide recommendations along the way
```
### Multi-Select Guidelines
**Good multi-select use:**
```markdown
Question: "Which features do you want to enable?"
multiSelect: true
Options:
- Logging
- Metrics
- Alerts
- Backups
Reason: User might want any combination
```
**Bad multi-select use:**
```markdown
Question: "Which database engine?"
multiSelect: true // ❌ Should be single-select
Reason: Can only use one database engine
```
## Advanced Patterns
### Validation Loop
```markdown
---
description: Interactive with validation
allowed-tools: AskUserQuestion, Bash
---
# Setup with Validation
## Gather Configuration
Use AskUserQuestion to collect settings.
## Validate Configuration
Check if configuration is valid:
- Required dependencies available?
- Settings compatible with each other?
- No conflicts detected?
If validation fails:
Show validation errors
Use AskUserQuestion to ask:
Question: "Configuration has issues. What would you like to do?"
Header: "Next step"
Options:
- Fix (Adjust settings to resolve issues)
- Override (Proceed despite warnings)
- Cancel (Abort setup)
Based on answer, retry or proceed or exit.
```
### Build Configuration Incrementally
```markdown
---
description: Incremental configuration builder
allowed-tools: AskUserQuestion, Write, Read
---
# Incremental Setup
## Phase 1: Core Settings
Use AskUserQuestion for core settings.
Save to `.claude/config-partial.yml`
## Phase 2: Review Core Settings
Show user the core settings:
Based on these core settings, you need to configure:
- [Setting A] (because you chose [X])
- [Setting B] (because you chose [Y])
Ready to continue?
## Phase 3: Detailed Settings
Use AskUserQuestion for settings based on Phase 1 answers.
Merge with core settings.
## Phase 4: Final Review
Present complete configuration.
Use AskUserQuestion for confirmation:
Question: "Is this configuration correct?"
Options:
- Yes (Save and apply)
- No (Start over)
- Modify (Edit specific settings)
```
### Dynamic Options Based on Context
```markdown
---
description: Context-aware questions
allowed-tools: AskUserQuestion, Bash, Read
---
# Context-Aware Setup
## Detect Current State
Check existing configuration:
- Current language: !`detect-language.sh`
- Existing frameworks: !`detect-frameworks.sh`
- Available tools: !`check-tools.sh`
## Ask Context-Appropriate Questions
Based on detected language, ask relevant questions.
If language is TypeScript:
Use AskUserQuestion:
Question: "Which TypeScript features should we enable?"
Options:
- Strict Mode (Maximum type safety)
- Decorators (Experimental decorator support)
- Path Mapping (Module path aliases)
If language is Python:
Use AskUserQuestion:
Question: "Which Python tools should we configure?"
Options:
- Type Hints (mypy for type checking)
- Black (Code formatting)
- Pylint (Linting and style)
Questions adapt to project context.
```
## Real-World Example: Multi-Agent Swarm Launch
**From multi-agent-swarm plugin:**
```markdown
---
description: Launch multi-agent swarm
allowed-tools: AskUserQuestion, Read, Write, Bash
---
# Launch Multi-Agent Swarm
## Interactive Mode (No Task List Provided)
If user didn't provide task list file, help create one interactively.
### Question 1: Agent Count
Use AskUserQuestion:
Question: "How many agents should we launch?"
Header: "Agent count"
Options:
- 2 agents (Best for simple projects)
- 3 agents (Good for medium projects)
- 4 agents (Standard team size)
- 6 agents (Large projects)
- 8 agents (Complex multi-component projects)
### Question 2: Task Definition Approach
Use AskUserQuestion:
Question: "How would you like to define tasks?"
Header: "Task setup"
Options:
- File (I have a task list file ready)
- Guided (Help me create tasks interactively)
- Custom (Other approach)
If "File":
Ask for file path
Validate file exists and has correct format
If "Guided":
Enter iterative task creation mode (see below)
### Question 3: Coordination Mode
Use AskUserQuestion:
Question: "How should agents coordinate?"
Header: "Coordination"
Options:
- Team Leader (One agent coordinates others)
- Collaborative (Agents coordinate as peers)
- Autonomous (Independent work, minimal coordination)
### Iterative Task Creation (If "Guided" Selected)
For each agent (1 to N from Question 1):
**Question A: Agent Name**
Question: "What should we call agent [number]?"
Header: "Agent name"
Options:
- auth-agent
- api-agent
- ui-agent
- db-agent
(Provide relevant suggestions based on common patterns)
**Question B: Task Type**
Question: "What task for [agent-name]?"
Header: "Task type"
Options:
- Authentication (User auth, JWT, OAuth)
- API Endpoints (REST/GraphQL APIs)
- UI Components (Frontend components)
- Database (Schema, migrations, queries)
- Testing (Test suites and coverage)
- Documentation (Docs, README, guides)
**Question C: Dependencies**
Question: "What does [agent-name] depend on?"
Header: "Dependencies"
multiSelect: true
Options:
- [List of previously defined agents]
- No dependencies
**Question D: Base Branch**
Question: "Which base branch for PR?"
Header: "PR base"
Options:
- main
- staging
- develop
Store all task information for each agent.
### Generate Task List File
After collecting all agent task details:
1. Ask for project name
2. Generate task list in proper format
3. Save to `.daisy/swarm/tasks.md`
4. Show user the file path
5. Proceed with launch using generated task list
```
## Best Practices
### Question Writing
1. **Be specific**: "Which database?" not "Choose option?"
2. **Explain trade-offs**: Describe pros/cons in option descriptions
3. **Provide context**: Question text should stand alone
4. **Guide decisions**: Help user make informed choice
5. **Keep concise**: Header max 12 chars, descriptions 1-2 sentences
### Option Design
1. **Meaningful labels**: Specific, clear names
2. **Informative descriptions**: Explain what each option does
3. **Show trade-offs**: Help user understand implications
4. **Consistent detail**: All options equally explained
5. **2-4 options**: Not too few, not too many
### Flow Design
1. **Logical order**: Questions flow naturally
2. **Build on previous**: Later questions use earlier answers
3. **Minimize questions**: Ask only what's needed
4. **Group related**: Ask related questions together
5. **Show progress**: Indicate where in flow
### User Experience
1. **Set expectations**: Tell user what to expect
2. **Explain why**: Help user understand purpose
3. **Provide defaults**: Suggest recommended options
4. **Allow escape**: Let user cancel or restart
5. **Confirm actions**: Summarize before executing
## Common Patterns
### Pattern: Feature Selection
```markdown
Use AskUserQuestion:
Question: "Which features do you need?"
Header: "Features"
multiSelect: true
Options:
- Authentication
- Authorization
- Rate Limiting
- Caching
```
### Pattern: Environment Configuration
```markdown
Use AskUserQuestion:
Question: "Which environment is this?"
Header: "Environment"
Options:
- Development (Local development)
- Staging (Pre-production testing)
- Production (Live environment)
```
### Pattern: Priority Selection
```markdown
Use AskUserQuestion:
Question: "What's the priority for this task?"
Header: "Priority"
Options:
- Critical (Must be done immediately)
- High (Important, do soon)
- Medium (Standard priority)
- Low (Nice to have)
```
### Pattern: Scope Selection
```markdown
Use AskUserQuestion:
Question: "What scope should we analyze?"
Header: "Scope"
Options:
- Current file (Just this file)
- Current directory (All files in directory)
- Entire project (Full codebase scan)
```
## Combining Arguments and Questions
### Use Both Appropriately
**Arguments for known values:**
```markdown
---
argument-hint: [project-name]
allowed-tools: AskUserQuestion, Write
---
Setup for project: $1
Now gather additional configuration...
Use AskUserQuestion for options that require explanation.
```
**Questions for complex choices:**
```markdown
Project name from argument: $1
Now use AskUserQuestion to choose:
- Architecture pattern
- Technology stack
- Deployment strategy
These require explanation, so questions work better than arguments.
```
## Troubleshooting
**Questions not appearing:**
- Verify AskUserQuestion in allowed-tools
- Check question format is correct
- Ensure options array has 2-4 items
**User can't make selection:**
- Check option labels are clear
- Verify descriptions are helpful
- Consider if too many options
- Ensure multiSelect setting is correct
**Flow feels confusing:**
- Reduce number of questions
- Group related questions
- Add explanation between stages
- Show progress through workflow
With AskUserQuestion, commands become interactive wizards that guide users through complex decisions while maintaining the clarity that simple arguments provide for straightforward inputs.

View File

@@ -0,0 +1,904 @@
# Marketplace Considerations for Commands
Guidelines for creating commands designed for distribution and marketplace success.
## Overview
Commands distributed through marketplaces need additional consideration beyond personal use commands. They must work across environments, handle diverse use cases, and provide excellent user experience for unknown users.
## Design for Distribution
### Universal Compatibility
**Cross-platform considerations:**
```markdown
---
description: Cross-platform command
allowed-tools: Bash(*)
---
# Platform-Aware Command
Detecting platform...
case "$(uname)" in
Darwin*) PLATFORM="macOS" ;;
Linux*) PLATFORM="Linux" ;;
MINGW*|MSYS*|CYGWIN*) PLATFORM="Windows" ;;
*) PLATFORM="Unknown" ;;
esac
Platform: $PLATFORM
<!-- Adjust behavior based on platform -->
if [ "$PLATFORM" = "Windows" ]; then
# Windows-specific handling
PATH_SEP="\\"
NULL_DEVICE="NUL"
else
# Unix-like handling
PATH_SEP="/"
NULL_DEVICE="/dev/null"
fi
[Platform-appropriate implementation...]
```
**Avoid platform-specific commands:**
```markdown
<!-- BAD: macOS-specific -->
!`pbcopy < file.txt`
<!-- GOOD: Platform detection -->
if command -v pbcopy > /dev/null; then
pbcopy < file.txt
elif command -v xclip > /dev/null; then
xclip -selection clipboard < file.txt
elif command -v clip.exe > /dev/null; then
cat file.txt | clip.exe
else
echo "Clipboard not available on this platform"
fi
```
### Minimal Dependencies
**Check for required tools:**
```markdown
---
description: Dependency-aware command
allowed-tools: Bash(*)
---
# Check Dependencies
Required tools:
- git
- jq
- node
Checking availability...
MISSING_DEPS=""
for tool in git jq node; do
if ! command -v $tool > /dev/null; then
MISSING_DEPS="$MISSING_DEPS $tool"
fi
done
if [ -n "$MISSING_DEPS" ]; then
❌ ERROR: Missing required dependencies:$MISSING_DEPS
INSTALLATION:
- git: https://git-scm.com/downloads
- jq: https://stedolan.github.io/jq/download/
- node: https://nodejs.org/
Install missing tools and try again.
Exit.
fi
✓ All dependencies available
[Continue with command...]
```
**Document optional dependencies:**
```markdown
<!--
DEPENDENCIES:
Required:
- git 2.0+: Version control
- jq 1.6+: JSON processing
Optional:
- gh: GitHub CLI (for PR operations)
- docker: Container operations (for containerized tests)
Feature availability depends on installed tools.
-->
```
### Graceful Degradation
**Handle missing features:**
```markdown
---
description: Feature-aware command
---
# Feature Detection
Detecting available features...
FEATURES=""
if command -v gh > /dev/null; then
FEATURES="$FEATURES github"
fi
if command -v docker > /dev/null; then
FEATURES="$FEATURES docker"
fi
Available features: $FEATURES
if echo "$FEATURES" | grep -q "github"; then
# Full functionality with GitHub integration
echo "✓ GitHub integration available"
else
# Reduced functionality without GitHub
echo "⚠ Limited functionality: GitHub CLI not installed"
echo " Install 'gh' for full features"
fi
[Adapt behavior based on available features...]
```
## User Experience for Unknown Users
### Clear Onboarding
**First-run experience:**
```markdown
---
description: Command with onboarding
allowed-tools: Read, Write
---
# First Run Check
if [ ! -f ".claude/command-initialized" ]; then
**Welcome to Command Name!**
This appears to be your first time using this command.
WHAT THIS COMMAND DOES:
[Brief explanation of purpose and benefits]
QUICK START:
1. Basic usage: /command [arg]
2. For help: /command help
3. Examples: /command examples
SETUP:
No additional setup required. You're ready to go!
✓ Initialization complete
[Create initialization marker]
Ready to proceed with your request...
fi
[Normal command execution...]
```
**Progressive feature discovery:**
```markdown
---
description: Command with tips
---
# Command Execution
[Main functionality...]
---
💡 TIP: Did you know?
You can speed up this command with the --fast flag:
/command --fast [args]
For more tips: /command tips
```
### Comprehensive Error Handling
**Anticipate user mistakes:**
```markdown
---
description: Forgiving command
---
# User Input Handling
Argument: "$1"
<!-- Check for common typos -->
if [ "$1" = "hlep" ] || [ "$1" = "hepl" ]; then
Did you mean: help?
Showing help instead...
[Display help]
Exit.
fi
<!-- Suggest similar commands if not found -->
if [ "$1" != "valid-option1" ] && [ "$1" != "valid-option2" ]; then
❌ Unknown option: $1
Did you mean:
- valid-option1 (most similar)
- valid-option2
For all options: /command help
Exit.
fi
[Command continues...]
```
**Helpful diagnostics:**
```markdown
---
description: Diagnostic command
---
# Operation Failed
The operation could not complete.
**Diagnostic Information:**
Environment:
- Platform: $(uname)
- Shell: $SHELL
- Working directory: $(pwd)
- Command: /command $@
Checking common issues:
- Git repository: $(git rev-parse --git-dir 2>&1)
- Write permissions: $(test -w . && echo "OK" || echo "DENIED")
- Required files: $(test -f config.yml && echo "Found" || echo "Missing")
This information helps debug the issue.
For support, include the above diagnostics.
```
## Distribution Best Practices
### Namespace Awareness
**Avoid name collisions:**
```markdown
---
description: Namespaced command
---
<!--
COMMAND NAME: plugin-name-command
This command is namespaced with the plugin name to avoid
conflicts with commands from other plugins.
Alternative naming approaches:
- Use plugin prefix: /plugin-command
- Use category: /category-command
- Use verb-noun: /verb-noun
Chosen approach: plugin-name prefix
Reasoning: Clearest ownership, least likely to conflict
-->
# Plugin Name Command
[Implementation...]
```
**Document naming rationale:**
```markdown
<!--
NAMING DECISION:
Command name: /deploy-app
Alternatives considered:
- /deploy: Too generic, likely conflicts
- /app-deploy: Less intuitive ordering
- /my-plugin-deploy: Too verbose
Final choice balances:
- Discoverability (clear purpose)
- Brevity (easy to type)
- Uniqueness (unlikely conflicts)
-->
```
### Configurability
**User preferences:**
```markdown
---
description: Configurable command
allowed-tools: Read
---
# Load User Configuration
Default configuration:
- verbose: false
- color: true
- max_results: 10
Checking for user config: .claude/plugin-name.local.md
if [ -f ".claude/plugin-name.local.md" ]; then
# Parse YAML frontmatter for settings
VERBOSE=$(grep "^verbose:" .claude/plugin-name.local.md | cut -d: -f2 | tr -d ' ')
COLOR=$(grep "^color:" .claude/plugin-name.local.md | cut -d: -f2 | tr -d ' ')
MAX_RESULTS=$(grep "^max_results:" .claude/plugin-name.local.md | cut -d: -f2 | tr -d ' ')
echo "✓ Using user configuration"
else
echo "Using default configuration"
echo "Create .claude/plugin-name.local.md to customize"
fi
[Use configuration in command...]
```
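One caveat with the `grep` approach above: it also matches keys that happen to appear in the document body. A slightly stricter sketch that reads only between the frontmatter markers:
```bash
# Read a key's value only from the frontmatter block (between the
# first pair of '---' lines); matching keys in the body are ignored.
get_setting() {
  awk -v key="$1" '
    /^---$/ { n++; next }
    n == 1 && $1 == key ":" { print $2; exit }
  ' .claude/plugin-name.local.md
}

VERBOSE=$(get_setting verbose)
MAX_RESULTS=$(get_setting max_results)
```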
**Sensible defaults:**
```markdown
---
description: Command with smart defaults
---
# Smart Defaults
Configuration:
- Format: ${FORMAT:-json} # Defaults to json
- Output: ${OUTPUT:-stdout} # Defaults to stdout
- Verbose: ${VERBOSE:-false} # Defaults to false
These defaults work for 80% of use cases.
Override with arguments:
/command --format yaml --output file.txt --verbose
Or set in .claude/plugin-name.local.md:
\`\`\`yaml
---
format: yaml
output: custom.txt
verbose: true
---
\`\`\`
```
### Version Compatibility
**Version checking:**
```markdown
---
description: Version-aware command
---
<!--
COMMAND VERSION: 2.1.0
COMPATIBILITY:
- Requires plugin version: >= 2.0.0
- Breaking changes from v1.x documented in MIGRATION.md
VERSION HISTORY:
- v2.1.0: Added --new-feature flag
- v2.0.0: BREAKING: Changed argument order
- v1.0.0: Initial release
-->
# Version Check
Command version: 2.1.0
Plugin version: [detect from plugin.json]
if [ "$PLUGIN_VERSION" < "2.0.0" ]; then
❌ ERROR: Incompatible plugin version
This command requires plugin version >= 2.0.0
Current version: $PLUGIN_VERSION
Update plugin:
/plugin update plugin-name
Exit.
fi
✓ Version compatible
[Command continues...]
```
**Deprecation warnings:**
```markdown
---
description: Command with deprecation warnings
---
# Deprecation Check
if [ "$1" = "--old-flag" ]; then
⚠️ DEPRECATION WARNING
The --old-flag option is deprecated as of v2.0.0
It will be removed in v3.0.0 (est. June 2025)
Use instead: --new-flag
Example:
Old: /command --old-flag value
New: /command --new-flag value
See migration guide: /command migrate
Continuing with deprecated behavior for now...
fi
[Handle both old and new flags during deprecation period...]
```
## Marketplace Presentation
### Command Discovery
**Descriptive naming:**
```markdown
---
description: Review pull request with security and quality checks
---
<!-- GOOD: Descriptive name and description -->
```
```markdown
---
description: Do the thing
---
<!-- BAD: Vague description -->
```
**Searchable keywords:**
```markdown
<!--
KEYWORDS: security, code-review, quality, validation, audit
These keywords help users discover this command when searching
for related functionality in the marketplace.
-->
```
### Showcase Examples
**Compelling demonstrations:**
```markdown
---
description: Advanced code analysis command
---
# Code Analysis Command
This command performs deep code analysis with actionable insights.
## Demo: Quick Security Audit
Try it now:
\`\`\`
/analyze-code src/ --security
\`\`\`
**What you'll get:**
- Security vulnerability detection
- Code quality metrics
- Performance bottleneck identification
- Actionable recommendations
**Sample output:**
\`\`\`
Security Analysis Results
=========================
🔴 Critical (2):
- SQL injection risk in users.js:45
- XSS vulnerability in display.js:23
🟡 Warnings (5):
- Unvalidated input in api.js:67
...
Recommendations:
1. Fix critical issues immediately
2. Review warnings before next release
3. Run /analyze-code --fix for auto-fixes
\`\`\`
---
Ready to analyze your code...
[Command implementation...]
```
### User Reviews and Feedback
**Feedback mechanism:**
```markdown
---
description: Command with feedback
---
# Command Complete
[Command results...]
---
**How was your experience?**
This helps improve the command for everyone.
Rate this command:
- 👍 Helpful
- 👎 Not helpful
- 🐛 Found a bug
- 💡 Have a suggestion
Reply with an emoji or:
- /command feedback
Your feedback matters!
```
**Usage analytics preparation:**
```markdown
<!--
ANALYTICS NOTES:
Track for improvement:
- Most common arguments
- Failure rates
- Average execution time
- User satisfaction scores
Privacy-preserving:
- No personally identifiable information
- Aggregate statistics only
- User opt-out respected
-->
```
## Quality Standards
### Professional Polish
**Consistent branding:**
```markdown
---
description: Branded command
---
# ✨ Command Name
Part of the [Plugin Name] suite
[Command functionality...]
---
**Need Help?**
- Documentation: https://docs.example.com
- Support: support@example.com
- Community: https://community.example.com
Powered by Plugin Name v2.1.0
```
**Attention to detail:**
```markdown
<!-- Details that matter -->
✓ Use proper emoji/symbols consistently
✓ Align output columns neatly
✓ Format numbers with thousands separators
✓ Use color/formatting appropriately
✓ Provide progress indicators
✓ Show estimated time remaining
✓ Confirm successful operations
```
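A couple of these touches in bash, as a hedged sketch (the `%'d` thousands-separator flag depends on the active locale):
```bash
printf "%'12d files processed\n" 1234567      # thousands separators where the locale supports them
printf "%-20s %10s\n" "Component" "Status"    # aligned output columns
echo "✓ Operation completed successfully"     # explicit success confirmation
```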
### Reliability
**Idempotency:**
```markdown
---
description: Idempotent command
---
# Safe Repeated Execution
Checking if operation already completed...
if [ -f ".claude/operation-completed.flag" ]; then
Operation already completed
Completed at: $(cat .claude/operation-completed.flag)
To re-run:
1. Remove flag: rm .claude/operation-completed.flag
2. Run command again
Otherwise, no action needed.
Exit.
fi
Performing operation...
[Safe, repeatable operation...]
Marking complete...
echo "$(date)" > .claude/operation-completed.flag
```
**Atomic operations:**
```markdown
---
description: Atomic command
---
# Atomic Operation
This operation is atomic - either fully succeeds or fully fails.
Creating temporary workspace...
TEMP_DIR=$(mktemp -d)
Performing changes in isolated environment...
[Make changes in $TEMP_DIR]
if [ $? -eq 0 ]; then
✓ Changes validated
Applying changes atomically...
mv "$TEMP_DIR"/* ./target/
✓ Operation complete
else
❌ Changes failed validation
Rolling back...
rm -rf "$TEMP_DIR"
No changes applied. Safe to retry.
fi
```
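For a stricter all-or-nothing apply, swapping an entire directory with a single rename is closer to atomic than moving files one by one, assuming the staging directory lives on the same filesystem as the target. A minimal sketch:
```bash
# Hedged sketch: directory-level swap (rename is atomic within one filesystem).
TEMP_DIR=$(mktemp -d ./.staging.XXXXXX)   # staged beside ./target on purpose
trap 'rm -rf "$TEMP_DIR"' EXIT

# ... produce the new contents in "$TEMP_DIR" (placeholder step) ...

if [ -d ./target ]; then
  mv ./target ./target.old               # keep a rollback copy
fi
mv "$TEMP_DIR" ./target                  # single rename: all-or-nothing
rm -rf ./target.old
```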
## Testing for Distribution
### Pre-Release Checklist
```markdown
<!--
PRE-RELEASE CHECKLIST:
Functionality:
- [ ] Works on macOS
- [ ] Works on Linux
- [ ] Works on Windows (WSL)
- [ ] All arguments tested
- [ ] Error cases handled
- [ ] Edge cases covered
User Experience:
- [ ] Clear description
- [ ] Helpful error messages
- [ ] Examples provided
- [ ] First-run experience good
- [ ] Documentation complete
Distribution:
- [ ] No hardcoded paths
- [ ] Dependencies documented
- [ ] Configuration options clear
- [ ] Version number set
- [ ] Changelog updated
Quality:
- [ ] No TODO comments
- [ ] No debug code
- [ ] Performance acceptable
- [ ] Security reviewed
- [ ] Privacy considered
Support:
- [ ] README complete
- [ ] Troubleshooting guide
- [ ] Support contact provided
- [ ] Feedback mechanism
- [ ] License specified
-->
```
### Beta Testing
**Beta release approach:**
```markdown
---
description: Beta command (v0.9.0)
---
# 🧪 Beta Command
**This is a beta release**
Features may change based on feedback.
BETA STATUS:
- Version: 0.9.0
- Stability: Experimental
- Support: Limited
- Feedback: Encouraged
Known limitations:
- Performance not optimized
- Some edge cases not handled
- Documentation incomplete
Help improve this command:
- Report issues: /command report-issue
- Suggest features: /command suggest
- Join beta testers: /command join-beta
---
[Command implementation...]
---
**Thank you for beta testing!**
Your feedback helps make this command better.
```
## Maintenance and Updates
### Update Strategy
**Versioned commands:**
```markdown
<!--
VERSION STRATEGY:
Major (X.0.0): Breaking changes
- Document all breaking changes
- Provide migration guide
- Support old version briefly
Minor (x.Y.0): New features
- Backward compatible
- Announce new features
- Update examples
Patch (x.y.Z): Bug fixes
- No user-facing changes
- Update changelog
- Security fixes prioritized
Release schedule:
- Patches: As needed
- Minors: Monthly
- Majors: Annually or as needed
-->
```
**Update notifications:**
```markdown
---
description: Update-aware command
---
# Check for Updates
Current version: 2.1.0
Latest version: [check if available]
if [ "$CURRENT_VERSION" != "$LATEST_VERSION" ]; then
📢 UPDATE AVAILABLE
New version: $LATEST_VERSION
Current: $CURRENT_VERSION
What's new:
- Feature improvements
- Bug fixes
- Performance enhancements
Update with:
/plugin update plugin-name
Release notes: https://releases.example.com/v$LATEST_VERSION
fi
[Command continues...]
```
## Best Practices Summary
### Distribution Design
1. **Universal**: Works across platforms and environments
2. **Self-contained**: Minimal dependencies, clear requirements
3. **Graceful**: Degrades gracefully when features unavailable
4. **Forgiving**: Anticipates and handles user mistakes
5. **Helpful**: Clear errors, good defaults, excellent docs
### Marketplace Success
1. **Discoverable**: Clear name, good description, searchable keywords
2. **Professional**: Polished presentation, consistent branding
3. **Reliable**: Tested thoroughly, handles edge cases
4. **Maintainable**: Versioned, updated regularly, supported
5. **User-focused**: Great UX, responsive to feedback
### Quality Standards
1. **Complete**: Fully documented, all features working
2. **Tested**: Works in real environments, edge cases handled
3. **Secure**: No vulnerabilities, safe operations
4. **Performant**: Reasonable speed, resource-efficient
5. **Ethical**: Privacy-respecting, user consent
With these considerations, commands become marketplace-ready and delight users across diverse environments and use cases.


@@ -0,0 +1,609 @@
# Plugin-Specific Command Features Reference
This reference covers features and patterns specific to commands bundled in Claude Code plugins.
## Table of Contents
- [Plugin Command Discovery](#plugin-command-discovery)
- [CLAUDE_PLUGIN_ROOT Environment Variable](#claude_plugin_root-environment-variable)
- [Plugin Command Patterns](#plugin-command-patterns)
- [Integration with Plugin Components](#integration-with-plugin-components)
- [Validation Patterns](#validation-patterns)
## Plugin Command Discovery
### Auto-Discovery
Claude Code automatically discovers commands in plugins using the following locations:
```
plugin-name/
├── commands/ # Auto-discovered commands
│ ├── foo.md # /foo (plugin:plugin-name)
│ └── bar.md # /bar (plugin:plugin-name)
└── plugin.json # Plugin manifest
```
**Key points:**
- Commands are discovered at plugin load time
- No manual registration required
- Commands appear in `/help` with "(plugin:plugin-name)" label
- Subdirectories create namespaces
### Namespaced Plugin Commands
Organize commands in subdirectories for logical grouping:
```
plugin-name/
└── commands/
├── review/
│ ├── security.md # /security (plugin:plugin-name:review)
│ └── style.md # /style (plugin:plugin-name:review)
└── deploy/
├── staging.md # /staging (plugin:plugin-name:deploy)
└── prod.md # /prod (plugin:plugin-name:deploy)
```
**Namespace behavior:**
- Subdirectory name becomes namespace
- Shown as "(plugin:plugin-name:namespace)" in `/help`
- Helps organize related commands
- Use when plugin has 5+ commands
### Command Naming Conventions
**Plugin command names should:**
1. Be descriptive and action-oriented
2. Avoid conflicts with common command names
3. Use hyphens for multi-word names
4. Consider prefixing with plugin name for uniqueness
**Examples:**
```
Good:
- /mylyn-sync (plugin-specific prefix)
- /analyze-performance (descriptive action)
- /docker-compose-up (clear purpose)
Avoid:
- /test (conflicts with common name)
- /run (too generic)
- /do-stuff (not descriptive)
```
## CLAUDE_PLUGIN_ROOT Environment Variable
### Purpose
`${CLAUDE_PLUGIN_ROOT}` is a special environment variable available in plugin commands that resolves to the absolute path of the plugin directory.
**Why it matters:**
- Enables portable paths within plugin
- Allows referencing plugin files and scripts
- Works across different installations
- Essential for multi-file plugin operations
### Basic Usage
Reference files within your plugin:
```markdown
---
description: Analyze using plugin script
allowed-tools: Bash(node:*), Read
---
Run analysis: !`node ${CLAUDE_PLUGIN_ROOT}/scripts/analyze.js`
Read template: @${CLAUDE_PLUGIN_ROOT}/templates/report.md
```
**Expands to:**
```
Run analysis: !`node /path/to/plugins/plugin-name/scripts/analyze.js`
Read template: @/path/to/plugins/plugin-name/templates/report.md
```
### Common Patterns
#### 1. Executing Plugin Scripts
```markdown
---
description: Run custom linter from plugin
allowed-tools: Bash(node:*)
---
Lint results: !`node ${CLAUDE_PLUGIN_ROOT}/bin/lint.js $1`
Review the linting output and suggest fixes.
```
#### 2. Loading Configuration Files
```markdown
---
description: Deploy using plugin configuration
allowed-tools: Read, Bash(*)
---
Configuration: @${CLAUDE_PLUGIN_ROOT}/config/deploy-config.json
Deploy application using the configuration above for $1 environment.
```
#### 3. Accessing Plugin Resources
```markdown
---
description: Generate report from template
---
Use this template: @${CLAUDE_PLUGIN_ROOT}/templates/api-report.md
Generate a report for @$1 following the template format.
```
#### 4. Multi-Step Plugin Workflows
```markdown
---
description: Complete plugin workflow
allowed-tools: Bash(*), Read
---
Step 1 - Prepare: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/prepare.sh $1`
Step 2 - Config: @${CLAUDE_PLUGIN_ROOT}/config/$1.json
Step 3 - Execute: !`${CLAUDE_PLUGIN_ROOT}/bin/execute $1`
Review results and report status.
```
### Best Practices
1. **Always use for plugin-internal paths:**
```markdown
# Good
@${CLAUDE_PLUGIN_ROOT}/templates/foo.md
# Bad
@./templates/foo.md # Relative to current directory, not plugin
```
2. **Validate file existence:**
```markdown
---
description: Use plugin config if exists
allowed-tools: Bash(test:*), Read
---
!`test -f ${CLAUDE_PLUGIN_ROOT}/config.json && echo "exists" || echo "missing"`
If config exists, load it: @${CLAUDE_PLUGIN_ROOT}/config.json
Otherwise, use defaults...
```
3. **Document plugin file structure:**
```markdown
<!--
Plugin structure:
${CLAUDE_PLUGIN_ROOT}/
├── scripts/analyze.js (analysis script)
├── templates/ (report templates)
└── config/ (configuration files)
-->
```
4. **Combine with arguments:**
```markdown
Run: !`${CLAUDE_PLUGIN_ROOT}/bin/process.sh $1 $2`
```
### Troubleshooting
**Variable not expanding:**
- Ensure command is loaded from plugin
- Check bash execution is allowed
- Verify syntax is exact: `${CLAUDE_PLUGIN_ROOT}`
**File not found errors:**
- Verify file exists in plugin directory
- Check file path is correct relative to plugin root
- Ensure file permissions allow reading/execution
**Paths with spaces:**
- Quote the expansion in bash commands: `node "${CLAUDE_PLUGIN_ROOT}/bin/tool"`
- `@${CLAUDE_PLUGIN_ROOT}/...` file references handle spaces without extra quoting
- Unquoted bash expansions word-split on spaces, so quoting is the safe default
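A minimal sketch of the quoting rule (`scripts/analyze.js` is a stand-in path):
```bash
script="${CLAUDE_PLUGIN_ROOT}/scripts/analyze.js"   # the path may contain spaces
node "$script" "$1"    # quoted: the path stays a single argument
# node $script $1      # unquoted: word-splits on spaces and breaks
```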
## Plugin Command Patterns
### Pattern 1: Configuration-Based Commands
Commands that load plugin-specific configuration:
```markdown
---
description: Deploy using plugin settings
allowed-tools: Read, Bash(*)
---
Load configuration: @${CLAUDE_PLUGIN_ROOT}/deploy-config.json
Deploy to $1 environment using:
1. Configuration settings above
2. Current git branch: !`git branch --show-current`
3. Application version: !`cat package.json | grep version`
Execute deployment and monitor progress.
```
**When to use:** Commands that need consistent settings across invocations
### Pattern 2: Template-Based Generation
Commands that use plugin templates:
```markdown
---
description: Generate documentation from template
argument-hint: [component-name]
---
Template: @${CLAUDE_PLUGIN_ROOT}/templates/component-docs.md
Generate documentation for $1 component following the template structure.
Include:
- Component purpose and usage
- API reference
- Examples
- Testing guidelines
```
**When to use:** Standardized output generation
### Pattern 3: Multi-Script Workflow
Commands that orchestrate multiple plugin scripts:
```markdown
---
description: Complete build and test workflow
allowed-tools: Bash(*)
---
Build: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build.sh`
Validate: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/validate.sh`
Test: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/test.sh`
Review all outputs and report:
1. Build status
2. Validation results
3. Test results
4. Recommended next steps
```
**When to use:** Complex plugin workflows with multiple steps
### Pattern 4: Environment-Aware Commands
Commands that adapt to environment:
```markdown
---
description: Deploy based on environment
argument-hint: [dev|staging|prod]
---
Environment config: @${CLAUDE_PLUGIN_ROOT}/config/$1.json
Environment check: !`echo "Deploying to: $1"`
Deploy application using $1 environment configuration.
Verify deployment and run smoke tests.
```
**When to use:** Commands that behave differently per environment
### Pattern 5: Plugin Data Management
Commands that manage plugin-specific data:
```markdown
---
description: Save analysis results to plugin cache
allowed-tools: Bash(*), Read, Write
---
Cache directory: ${CLAUDE_PLUGIN_ROOT}/cache/
Analyze @$1 and save results to cache:
!`mkdir -p ${CLAUDE_PLUGIN_ROOT}/cache && date > ${CLAUDE_PLUGIN_ROOT}/cache/last-run.txt`
Store analysis for future reference and comparison.
```
**When to use:** Commands that need persistent data storage
## Integration with Plugin Components
### Invoking Plugin Agents
Commands can trigger plugin agents using the Task tool:
```markdown
---
description: Deep analysis using plugin agent
argument-hint: [file-path]
---
Initiate deep code analysis of @$1 using the code-analyzer agent.
The agent will:
1. Analyze code structure
2. Identify patterns
3. Suggest improvements
4. Generate detailed report
Note: This uses the Task tool to launch the plugin's code-analyzer agent.
```
**Key points:**
- Agent must be defined in plugin's `agents/` directory
- Claude will automatically use Task tool to launch agent
- Agent has access to same plugin resources
### Invoking Plugin Skills
Commands can reference plugin skills for specialized knowledge:
```markdown
---
description: API documentation with best practices
argument-hint: [api-file]
---
Document the API in @$1 following our API documentation standards.
Use the api-docs-standards skill to ensure documentation includes:
- Endpoint descriptions
- Parameter specifications
- Response formats
- Error codes
- Usage examples
Note: This leverages the plugin's api-docs-standards skill for consistency.
```
**Key points:**
- Skill must be defined in plugin's `skills/` directory
- Mention skill by name to hint Claude should invoke it
- Skills provide specialized domain knowledge
### Coordinating with Plugin Hooks
Commands can be designed to work with plugin hooks:
```markdown
---
description: Commit with pre-commit validation
allowed-tools: Bash(git:*)
---
Stage changes: !\`git add $1\`
Commit changes: !\`git commit -m "$2"\`
Note: This commit will trigger the plugin's pre-commit hook for validation.
Review hook output for any issues.
```
**Key points:**
- Hooks execute automatically on events
- Commands can prepare state for hooks
- Document hook interaction in command
### Multi-Component Plugin Commands
Commands that coordinate multiple plugin components:
```markdown
---
description: Comprehensive code review workflow
argument-hint: [file-path]
---
File to review: @$1
Execute comprehensive review:
1. **Static Analysis** (via plugin scripts)
!`node ${CLAUDE_PLUGIN_ROOT}/scripts/lint.js $1`
2. **Deep Review** (via plugin agent)
Launch the code-reviewer agent for detailed analysis.
3. **Best Practices** (via plugin skill)
Use the code-standards skill to ensure compliance.
4. **Documentation** (via plugin template)
Template: @${CLAUDE_PLUGIN_ROOT}/templates/review-report.md
Generate final report combining all outputs.
```
**When to use:** Complex workflows leveraging multiple plugin capabilities
## Validation Patterns
### Input Validation
Commands should validate inputs before processing:
```markdown
---
description: Deploy to environment with validation
argument-hint: [environment]
---
Validate environment: !`echo "$1" | grep -E "^(dev|staging|prod)$" || echo "INVALID"`
If the validation output above is not "INVALID", deploy to the $1 environment using the validated configuration.
Otherwise, report: ERROR: Invalid environment '$1'. Must be one of: dev, staging, prod
```
**Validation approaches:**
1. Bash validation using grep/test
2. Inline validation in prompt
3. Script-based validation
### File Existence Checks
Verify required files exist:
```markdown
---
description: Process configuration file
argument-hint: [config-file]
---
Check file: !`test -f $1 && echo "EXISTS" || echo "MISSING"`
Process configuration if file exists: @$1
If file doesn't exist, explain:
- Expected location
- Required format
- How to create it
```
### Required Arguments
Validate required arguments provided:
```markdown
---
description: Create deployment with version
argument-hint: [environment] [version]
---
Validate inputs: !`test -n "$1" -a -n "$2" && echo "OK" || echo "MISSING"`
If the validation output above is "OK", deploy version $2 to the $1 environment.
Otherwise, report: ERROR: Both environment and version required. Usage: /deploy [env] [version]
```
### Plugin Resource Validation
Verify plugin resources available:
```markdown
---
description: Run analysis with plugin tools
allowed-tools: Bash(test:*)
---
Validate plugin setup:
- Config exists: !`test -f ${CLAUDE_PLUGIN_ROOT}/config.json && echo "✓" || echo "✗"`
- Scripts exist: !`test -d ${CLAUDE_PLUGIN_ROOT}/scripts && echo "✓" || echo "✗"`
- Tools available: !`test -x ${CLAUDE_PLUGIN_ROOT}/bin/analyze && echo "✓" || echo "✗"`
If all checks pass, proceed with analysis.
Otherwise, report missing components and installation steps.
```
### Output Validation
Validate command execution results:
```markdown
---
description: Build and validate output
allowed-tools: Bash(*)
---
Build: !`bash ${CLAUDE_PLUGIN_ROOT}/scripts/build.sh`
Validate output:
- Exit code: !`echo $?`
- Output exists: !`test -d dist && echo "✓" || echo "✗"`
- File count: !`find dist -type f | wc -l`
Report build status and any validation failures.
```
### Graceful Error Handling
Handle errors gracefully with helpful messages:
```markdown
---
description: Process file with error handling
argument-hint: [file-path]
---
Try processing: !`node ${CLAUDE_PLUGIN_ROOT}/scripts/process.js $1 2>&1 || echo "ERROR: $?"`
If processing succeeded:
- Report results
- Suggest next steps
If processing failed:
- Explain likely causes
- Provide troubleshooting steps
- Suggest alternative approaches
```
## Best Practices Summary
### Plugin Commands Should:
1. **Use ${CLAUDE_PLUGIN_ROOT} for all plugin-internal paths**
- Scripts, templates, configuration, resources
2. **Validate inputs early**
- Check required arguments
- Verify file existence
- Validate argument formats
3. **Document plugin structure**
- Explain required files
- Document script purposes
- Clarify dependencies
4. **Integrate with plugin components**
- Reference agents for complex tasks
- Use skills for specialized knowledge
- Coordinate with hooks when relevant
5. **Provide helpful error messages**
- Explain what went wrong
- Suggest how to fix
- Offer alternatives
6. **Handle edge cases**
- Missing files
- Invalid arguments
- Failed script execution
- Missing dependencies
7. **Keep commands focused**
- One clear purpose per command
- Delegate complex logic to scripts
- Use agents for multi-step workflows
8. **Test across installations**
- Verify paths work everywhere
- Test with different arguments
- Validate error cases
---
For general command development, see main SKILL.md.
For command examples, see examples/ directory.


@@ -0,0 +1,702 @@
# Command Testing Strategies
Comprehensive strategies for testing slash commands before deployment and distribution.
## Overview
Testing commands ensures they work correctly, handle edge cases, and provide good user experience. A systematic testing approach catches issues early and builds confidence in command reliability.
## Testing Levels
### Level 1: Syntax and Structure Validation
**What to test:**
- YAML frontmatter syntax
- Markdown format
- File location and naming
**How to test:**
```bash
# Validate YAML frontmatter
head -n 20 .claude/commands/my-command.md | grep -A 10 "^---"
# Check for closing frontmatter marker
head -n 20 .claude/commands/my-command.md | grep -c "^---" # Should be 2
# Verify file has .md extension
ls .claude/commands/*.md
# Check file is in correct location
test -f .claude/commands/my-command.md && echo "Found" || echo "Missing"
```
**Automated validation script:**
```bash
#!/bin/bash
# validate-command.sh
COMMAND_FILE="$1"
if [ ! -f "$COMMAND_FILE" ]; then
echo "ERROR: File not found: $COMMAND_FILE"
exit 1
fi
# Check .md extension
if [[ ! "$COMMAND_FILE" =~ \.md$ ]]; then
echo "ERROR: File must have .md extension"
exit 1
fi
# Validate YAML frontmatter if present
if head -n 1 "$COMMAND_FILE" | grep -q "^---"; then
# Count frontmatter markers
MARKERS=$(head -n 50 "$COMMAND_FILE" | grep -c "^---")
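# Caveat: '---' horizontal rules in the body also match this pattern, so a
# count above 2 may be a false positive worth inspecting rather than a hard failure.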
if [ "$MARKERS" -ne 2 ]; then
echo "ERROR: Invalid YAML frontmatter (need exactly 2 '---' markers)"
exit 1
fi
echo "✓ YAML frontmatter syntax valid"
fi
# Check for empty file
if [ ! -s "$COMMAND_FILE" ]; then
echo "ERROR: File is empty"
exit 1
fi
echo "✓ Command file structure valid"
```
### Level 2: Frontmatter Field Validation
**What to test:**
- Field types correct
- Values in valid ranges
- Required fields present (if any)
**Validation script:**
```bash
#!/bin/bash
# validate-frontmatter.sh
COMMAND_FILE="$1"
# Extract YAML frontmatter
FRONTMATTER=$(sed -n '/^---$/,/^---$/p' "$COMMAND_FILE" | sed '1d;$d')
if [ -z "$FRONTMATTER" ]; then
echo "No frontmatter to validate"
exit 0
fi
# Check 'model' field if present
if echo "$FRONTMATTER" | grep -q "^model:"; then
MODEL=$(echo "$FRONTMATTER" | grep "^model:" | cut -d: -f2 | tr -d ' ')
if ! echo "sonnet opus haiku" | grep -qw "$MODEL"; then
echo "ERROR: Invalid model '$MODEL' (must be sonnet, opus, or haiku)"
exit 1
fi
echo "✓ Model field valid: $MODEL"
fi
# Check 'allowed-tools' field format
if echo "$FRONTMATTER" | grep -q "^allowed-tools:"; then
echo "✓ allowed-tools field present"
# Could add more sophisticated validation here
fi
# Check 'description' length
if echo "$FRONTMATTER" | grep -q "^description:"; then
DESC=$(echo "$FRONTMATTER" | grep "^description:" | cut -d: -f2-)
LENGTH=${#DESC}
if [ "$LENGTH" -gt 80 ]; then
echo "WARNING: Description length $LENGTH (recommend < 60 chars)"
else
echo "✓ Description length acceptable: $LENGTH chars"
fi
fi
echo "✓ Frontmatter fields valid"
```
### Level 3: Manual Command Invocation
**What to test:**
- Command appears in `/help`
- Command executes without errors
- Output is as expected
**Test procedure:**
```bash
# 1. Start Claude Code
claude --debug
# 2. Check command appears in help
> /help
# Look for your command in the list
# 3. Invoke command without arguments
> /my-command
# Check for reasonable error or behavior
# 4. Invoke with valid arguments
> /my-command arg1 arg2
# Verify expected behavior
# 5. Check debug logs
tail -f ~/.claude/debug-logs/latest
# Look for errors or warnings
```
### Level 4: Argument Testing
**What to test:**
- Positional arguments work ($1, $2, etc.)
- $ARGUMENTS captures all arguments
- Missing arguments handled gracefully
- Invalid arguments detected
**Test matrix:**
| Test Case | Command | Expected Result |
|-----------|---------|-----------------|
| No args | `/cmd` | Graceful handling or useful message |
| One arg | `/cmd arg1` | $1 substituted correctly |
| Two args | `/cmd arg1 arg2` | $1 and $2 substituted |
| Extra args | `/cmd a b c d` | All captured or extras ignored appropriately |
| Special chars | `/cmd "arg with spaces"` | Quotes handled correctly |
| Empty arg | `/cmd ""` | Empty string handled |
**Test script:**
```bash
#!/bin/bash
# test-command-arguments.sh
COMMAND="$1"
echo "Testing argument handling for /$COMMAND"
echo
echo "Test 1: No arguments"
echo " Command: /$COMMAND"
echo " Expected: [describe expected behavior]"
echo " Manual test required"
echo
echo "Test 2: Single argument"
echo " Command: /$COMMAND test-value"
echo " Expected: 'test-value' appears in output"
echo " Manual test required"
echo
echo "Test 3: Multiple arguments"
echo " Command: /$COMMAND arg1 arg2 arg3"
echo " Expected: All arguments used appropriately"
echo " Manual test required"
echo
echo "Test 4: Special characters"
echo " Command: /$COMMAND \"value with spaces\""
echo " Expected: Entire phrase captured"
echo " Manual test required"
```
### Level 5: File Reference Testing
**What to test:**
- @ syntax loads file contents
- Non-existent files handled
- Large files handled appropriately
- Multiple file references work
**Test procedure:**
```bash
# Create test files
echo "Test content" > /tmp/test-file.txt
echo "Second file" > /tmp/test-file-2.txt
# Test single file reference
> /my-command /tmp/test-file.txt
# Verify file content is read
# Test non-existent file
> /my-command /tmp/nonexistent.txt
# Verify graceful error handling
# Test multiple files
> /my-command /tmp/test-file.txt /tmp/test-file-2.txt
# Verify both files processed
# Test large file
dd if=/dev/zero of=/tmp/large-file.bin bs=1M count=100
> /my-command /tmp/large-file.bin
# Verify reasonable behavior (may truncate or warn)
# Cleanup
rm /tmp/test-file*.txt /tmp/large-file.bin
```
### Level 6: Bash Execution Testing
**What to test:**
- !` commands execute correctly
- Command output included in prompt
- Command failures handled
- Security: only allowed commands run
**Test procedure:**
```bash
# Create test command with bash execution
cat > .claude/commands/test-bash.md << 'EOF'
---
description: Test bash execution
allowed-tools: Bash(echo:*), Bash(date:*)
---
Current date: !`date`
Test output: !`echo "Hello from bash"`
Analysis of output above...
EOF
# Test in Claude Code
> /test-bash
# Verify:
# 1. Date appears correctly
# 2. Echo output appears
# 3. No errors in debug logs
# Test with disallowed command (should fail or be blocked)
cat > .claude/commands/test-forbidden.md << 'EOF'
---
description: Test forbidden command
allowed-tools: Bash(echo:*)
---
Trying forbidden: !`ls -la /`
EOF
> /test-forbidden
# Verify: Permission denied or appropriate error
```
### Level 7: Integration Testing
**What to test:**
- Commands work with other plugin components
- Commands interact correctly with each other
- State management works across invocations
- Workflow commands execute in sequence
**Test scenarios:**
**Scenario 1: Command + Hook Integration**
```bash
# Setup: Command that triggers a hook
# Test: Invoke command, verify hook executes
# Command: .claude/commands/risky-operation.md
# Hook: PreToolUse that validates the operation
> /risky-operation
# Verify: Hook executes and validates before command completes
```
**Scenario 2: Command Sequence**
```bash
# Setup: Multi-command workflow
> /workflow-init
# Verify: State file created
> /workflow-step2
# Verify: State file read, step 2 executes
> /workflow-complete
# Verify: State file cleaned up
```
**Scenario 3: Command + MCP Integration**
```bash
# Setup: Command uses MCP tools
# Test: Verify MCP server accessible
> /mcp-command
# Verify:
# 1. MCP server starts (if stdio)
# 2. Tool calls succeed
# 3. Results included in output
```
## Automated Testing Approaches
### Command Test Suite
Create a test suite script:
```bash
#!/bin/bash
# test-commands.sh - Command test suite
TEST_DIR=".claude/commands"
FAILED_TESTS=0
echo "Command Test Suite"
echo "=================="
echo
for cmd_file in "$TEST_DIR"/*.md; do
cmd_name=$(basename "$cmd_file" .md)
echo "Testing: $cmd_name"
# Validate structure
if ./validate-command.sh "$cmd_file"; then
echo " ✓ Structure valid"
else
echo " ✗ Structure invalid"
((FAILED_TESTS++))
fi
# Validate frontmatter
if ./validate-frontmatter.sh "$cmd_file"; then
echo " ✓ Frontmatter valid"
else
echo " ✗ Frontmatter invalid"
((FAILED_TESTS++))
fi
echo
done
echo "=================="
echo "Tests complete"
echo "Failed: $FAILED_TESTS"
exit $FAILED_TESTS
```
### Pre-Commit Hook
Validate commands before committing:
```bash
#!/bin/bash
# .git/hooks/pre-commit
echo "Validating commands..."
COMMANDS_CHANGED=$(git diff --cached --name-only | grep "\.claude/commands/.*\.md")
if [ -z "$COMMANDS_CHANGED" ]; then
echo "No commands changed"
exit 0
fi
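# Note: the unquoted $COMMANDS_CHANGED below word-splits on whitespace, which
# assumes paths without spaces; use a while-read loop for full robustness.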
for cmd in $COMMANDS_CHANGED; do
echo "Checking: $cmd"
if ! ./scripts/validate-command.sh "$cmd"; then
echo "ERROR: Command validation failed: $cmd"
exit 1
fi
done
echo "✓ All commands valid"
```
### Continuous Testing
Test commands in CI/CD:
```yaml
# .github/workflows/test-commands.yml
name: Test Commands
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Validate command structure
run: |
for cmd in .claude/commands/*.md; do
echo "Testing: $cmd"
./scripts/validate-command.sh "$cmd"
done
- name: Validate frontmatter
run: |
for cmd in .claude/commands/*.md; do
./scripts/validate-frontmatter.sh "$cmd"
done
- name: Check for TODOs
run: |
if grep -r "TODO" .claude/commands/; then
echo "ERROR: TODOs found in commands"
exit 1
fi
```
## Edge Case Testing
### Test Edge Cases
**Empty arguments:**
```bash
> /cmd ""
> /cmd '' ''
```
**Special characters:**
```bash
> /cmd "arg with spaces"
> /cmd arg-with-dashes
> /cmd arg_with_underscores
> /cmd arg/with/slashes
> /cmd 'arg with "quotes"'
```
**Long arguments:**
```bash
> /cmd $(python -c "print('a' * 10000)")
```
**Unusual file paths:**
```bash
> /cmd ./file
> /cmd ../file
> /cmd ~/file
> /cmd "/path with spaces/file"
```
**Bash command edge cases:**
```markdown
# Commands that might fail
!`exit 1`
!`false`
!`command-that-does-not-exist`
# Commands with special output
!`echo ""`
!`cat /dev/null`
!`yes | head -n 1000000`
```
## Performance Testing
### Response Time Testing
```bash
#!/bin/bash
# test-command-performance.sh
COMMAND="$1"
echo "Testing performance of /$COMMAND"
echo
for i in {1..5}; do
echo "Run $i:"
START=$(date +%s%N)
# Invoke command (manual step - record time)
echo " Invoke: /$COMMAND"
echo " Start time: $START"
echo " (Record end time manually)"
echo
done
echo "Analyze results:"
echo " - Average response time"
echo " - Variance"
echo " - Acceptable threshold: < 3 seconds for fast commands"
```
### Resource Usage Testing
```bash
# Monitor Claude Code during command execution
# In terminal 1:
claude --debug
# In terminal 2:
watch -n 1 'ps aux | grep claude'
# Execute command and observe:
# - Memory usage
# - CPU usage
# - Process count
```
## User Experience Testing
### Usability Checklist
- [ ] Command name is intuitive
- [ ] Description is clear in `/help`
- [ ] Arguments are well-documented
- [ ] Error messages are helpful
- [ ] Output is formatted readably
- [ ] Long-running commands show progress
- [ ] Results are actionable
- [ ] Edge cases have good UX
### User Acceptance Testing
Recruit testers:
```markdown
# Testing Guide for Beta Testers
## Command: /my-new-command
### Test Scenarios
1. **Basic usage:**
- Run: `/my-new-command`
- Expected: [describe]
- Rate clarity: 1-5
2. **With arguments:**
- Run: `/my-new-command arg1 arg2`
- Expected: [describe]
- Rate usefulness: 1-5
3. **Error case:**
- Run: `/my-new-command invalid-input`
- Expected: Helpful error message
- Rate error message: 1-5
### Feedback Questions
1. Was the command easy to understand?
2. Did the output meet your expectations?
3. What would you change?
4. Would you use this command regularly?
```
## Testing Checklist
Before releasing a command:
### Structure
- [ ] File in correct location
- [ ] Correct .md extension
- [ ] Valid YAML frontmatter (if present)
- [ ] Markdown syntax correct
### Functionality
- [ ] Command appears in `/help`
- [ ] Description is clear
- [ ] Command executes without errors
- [ ] Arguments work as expected
- [ ] File references work
- [ ] Bash execution works (if used)
### Edge Cases
- [ ] Missing arguments handled
- [ ] Invalid arguments detected
- [ ] Non-existent files handled
- [ ] Special characters work
- [ ] Long inputs handled
### Integration
- [ ] Works with other commands
- [ ] Works with hooks (if applicable)
- [ ] Works with MCP (if applicable)
- [ ] State management works
### Quality
- [ ] Performance acceptable
- [ ] No security issues
- [ ] Error messages helpful
- [ ] Output formatted well
- [ ] Documentation complete
### Distribution
- [ ] Tested by others
- [ ] Feedback incorporated
- [ ] README updated
- [ ] Examples provided
## Debugging Failed Tests
### Common Issues and Solutions
**Issue: Command not appearing in /help**
```bash
# Check file location
ls -la .claude/commands/my-command.md
# Check permissions
chmod 644 .claude/commands/my-command.md
# Check syntax
head -n 20 .claude/commands/my-command.md
# Restart Claude Code
claude --debug
```
**Issue: Arguments not substituting**
```bash
# Verify syntax
grep '\$1' .claude/commands/my-command.md
grep '\$ARGUMENTS' .claude/commands/my-command.md
# Test with simple command first
echo "Test: \$1 and \$2" > .claude/commands/test-args.md
```
**Issue: Bash commands not executing**
```bash
# Check allowed-tools
grep "allowed-tools" .claude/commands/my-command.md
# Verify command syntax
grep '!\`' .claude/commands/my-command.md
# Test command manually
date
echo "test"
```
**Issue: File references not working**
```bash
# Check @ syntax
grep '@' .claude/commands/my-command.md
# Verify file exists
ls -la /path/to/referenced/file
# Check permissions
chmod 644 /path/to/referenced/file
```
## Best Practices
1. **Test early, test often**: Validate as you develop
2. **Automate validation**: Use scripts for repeatable checks
3. **Test edge cases**: Don't just test the happy path
4. **Get feedback**: Have others test before wide release
5. **Document tests**: Keep test scenarios for regression testing
6. **Monitor in production**: Watch for issues after release
7. **Iterate**: Improve based on real usage data


@@ -0,0 +1,712 @@
---
name: Hook Development
description: This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.
version: 0.1.0
---
# Hook Development for Claude Code Plugins
## Overview
Hooks are event-driven automation scripts that execute in response to Claude Code events. Use hooks to validate operations, enforce policies, add context, and integrate external tools into workflows.
**Key capabilities:**
- Validate tool calls before execution (PreToolUse)
- React to tool results (PostToolUse)
- Enforce completion standards (Stop, SubagentStop)
- Load project context (SessionStart)
- Automate workflows across the development lifecycle
## Hook Types
### Prompt-Based Hooks (Recommended)
Use LLM-driven decision making for context-aware validation:
```json
{
"type": "prompt",
"prompt": "Evaluate if this tool use is appropriate: $TOOL_INPUT",
"timeout": 30
}
```
**Supported events:** Stop, SubagentStop, UserPromptSubmit, PreToolUse
**Benefits:**
- Context-aware decisions based on natural language reasoning
- Flexible evaluation logic without bash scripting
- Better edge case handling
- Easier to maintain and extend
### Command Hooks
Execute bash commands for deterministic checks:
```json
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/validate.sh",
"timeout": 60
}
```
**Use for:**
- Fast deterministic validations
- File system operations
- External tool integrations
- Performance-critical checks
## Hook Configuration Formats
### Plugin hooks.json Format
**For plugin hooks** in `hooks/hooks.json`, use wrapper format:
```json
{
"description": "Brief explanation of hooks (optional)",
"hooks": {
"PreToolUse": [...],
"Stop": [...],
"SessionStart": [...]
}
}
```
**Key points:**
- `description` field is optional
- `hooks` field is required wrapper containing actual hook events
- This is the **plugin-specific format**
**Example:**
```json
{
"description": "Validation hooks for code quality",
"hooks": {
"PreToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/validate.sh"
}
]
}
]
}
}
```
### Settings Format (Direct)
**For user settings** in `.claude/settings.json`, use direct format:
```json
{
"PreToolUse": [...],
"Stop": [...],
"SessionStart": [...]
}
```
**Key points:**
- No wrapper - events directly at top level
- No description field
- This is the **settings format**
**Important:** The examples below show the hook event structure that goes inside either format. For plugin hooks.json, wrap these in `{"hooks": {...}}`.
## Hook Events
### PreToolUse
Execute before any tool runs. Use to approve, deny, or modify tool calls.
**Example (prompt-based):**
```json
{
"PreToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "prompt",
"prompt": "Validate file write safety. Check: system paths, credentials, path traversal, sensitive content. Return 'approve' or 'deny'."
}
]
}
]
}
```
**Output for PreToolUse:**
```json
{
"hookSpecificOutput": {
"permissionDecision": "allow|deny|ask",
"updatedInput": {"field": "modified_value"}
},
"systemMessage": "Explanation for Claude"
}
```
### PostToolUse
Execute after tool completes. Use to react to results, provide feedback, or log.
**Example:**
```json
{
"PostToolUse": [
{
"matcher": "Edit",
"hooks": [
{
"type": "prompt",
"prompt": "Analyze edit result for potential issues: syntax errors, security vulnerabilities, breaking changes. Provide feedback."
}
]
}
]
}
```
**Output behavior:**
- Exit 0: stdout shown in transcript
- Exit 2: stderr fed back to Claude
- systemMessage included in context
### Stop
Execute when main agent considers stopping. Use to validate completeness.
**Example:**
```json
{
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Verify task completion: tests run, build succeeded, questions answered. Return 'approve' to stop or 'block' with reason to continue."
}
]
}
]
}
```
**Decision output:**
```json
{
"decision": "approve|block",
"reason": "Explanation",
"systemMessage": "Additional context"
}
```
### SubagentStop
Execute when subagent considers stopping. Use to ensure subagent completed its task.
Similar to Stop hook, but for subagents.
### UserPromptSubmit
Execute when user submits a prompt. Use to add context, validate, or block prompts.
**Example:**
```json
{
"UserPromptSubmit": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Check if prompt requires security guidance. If discussing auth, permissions, or API security, return relevant warnings."
}
]
}
]
}
```
### SessionStart
Execute when Claude Code session begins. Use to load context and set environment.
**Example:**
```json
{
"SessionStart": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/load-context.sh"
}
]
}
]
}
```
**Special capability:** Persist environment variables using `$CLAUDE_ENV_FILE`:
```bash
echo "export PROJECT_TYPE=nodejs" >> "$CLAUDE_ENV_FILE"
```
See `examples/load-context.sh` for complete example.
### SessionEnd
Execute when session ends. Use for cleanup, logging, and state preservation.
### PreCompact
Execute before context compaction. Use to add critical information to preserve.
### Notification
Execute when Claude sends notifications. Use to react to user notifications.
## Hook Output Format
### Standard Output (All Hooks)
```json
{
"continue": true,
"suppressOutput": false,
"systemMessage": "Message for Claude"
}
```
- `continue`: If false, halt processing (default true)
- `suppressOutput`: Hide output from transcript (default false)
- `systemMessage`: Message shown to Claude
### Exit Codes
- `0` - Success (stdout shown in transcript)
- `2` - Blocking error (stderr fed back to Claude)
- Other - Non-blocking error
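For example, a minimal command-hook skeleton that exercises the blocking path (the `validation_passed` flag is a placeholder for a real check):
```bash
#!/bin/bash
input=$(cat)             # hook input JSON arrives on stdin

# ... inspect "$input" with jq; placeholder verdict below ...
validation_passed=false

if [ "$validation_passed" != "true" ]; then
  # Exit 2: stderr is fed back to Claude as a blocking error.
  echo "Validation failed; fix the issue before continuing" >&2
  exit 2
fi
exit 0                   # Exit 0: stdout (if any) appears in the transcript
```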
## Hook Input Format
All hooks receive JSON via stdin with common fields:
```json
{
"session_id": "abc123",
"transcript_path": "/path/to/transcript.txt",
"cwd": "/current/working/dir",
"permission_mode": "ask|allow",
"hook_event_name": "PreToolUse"
}
```
**Event-specific fields:**
- **PreToolUse/PostToolUse:** `tool_name`, `tool_input`, `tool_result`
- **UserPromptSubmit:** `user_prompt`
- **Stop/SubagentStop:** `reason`
Access fields in prompts using `$TOOL_INPUT`, `$TOOL_RESULT`, `$USER_PROMPT`, etc.
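In command hooks, the same fields arrive as JSON on stdin; a minimal sketch of reading them with `jq`:
```bash
#!/bin/bash
input=$(cat)                                          # JSON on stdin
session_id=$(echo "$input" | jq -r '.session_id')
event=$(echo "$input" | jq -r '.hook_event_name')
tool=$(echo "$input" | jq -r '.tool_name // empty')   # set for tool events only
echo "[$session_id] $event ${tool:+on $tool}" >> /tmp/hook-trace.log
exit 0
```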
## Environment Variables
Available in all command hooks:
- `$CLAUDE_PROJECT_DIR` - Project root path
- `$CLAUDE_PLUGIN_ROOT` - Plugin directory (use for portable paths)
- `$CLAUDE_ENV_FILE` - SessionStart only: persist env vars here
- `$CLAUDE_CODE_REMOTE` - Set if running in remote context
**Always use ${CLAUDE_PLUGIN_ROOT} in hook commands for portability:**
```json
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/validate.sh"
}
```
## Plugin Hook Configuration
In plugins, define hooks in `hooks/hooks.json`:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Validate file write safety"
          }
        ]
      }
    ],
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Verify task completion"
          }
        ]
      }
    ],
    "SessionStart": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/load-context.sh",
            "timeout": 10
          }
        ]
      }
    ]
  }
}
Plugin hooks merge with user's hooks and run in parallel.
## Matchers
### Tool Name Matching
**Exact match:**
```json
"matcher": "Write"
```
**Multiple tools:**
```json
"matcher": "Read|Write|Edit"
```
**Wildcard (all tools):**
```json
"matcher": "*"
```
**Regex patterns:**
```json
"matcher": "mcp__.*__delete.*" // All MCP delete tools
```
**Note:** Matchers are case-sensitive.
### Common Patterns
```json
// All MCP tools
"matcher": "mcp__.*"
// Specific plugin's MCP tools
"matcher": "mcp__plugin_asana_.*"
// All file operations
"matcher": "Read|Write|Edit"
// Bash commands only
"matcher": "Bash"
```
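A matcher regex can be sanity-checked locally before wiring it into hooks.json; this is a rough approximation, since Claude Code's own matching is authoritative:
```bash
# Rough local check of a matcher regex against sample tool names.
for tool in Write Edit Bash mcp__asana__delete_task; do
  if echo "$tool" | grep -qE '^mcp__.*__delete.*$'; then
    echo "match:    $tool"
  else
    echo "no match: $tool"
  fi
done
```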
## Security Best Practices
### Input Validation
Always validate inputs in command hooks:
```bash
#!/bin/bash
set -euo pipefail
input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name')
# Validate tool name format
if [[ ! "$tool_name" =~ ^[a-zA-Z0-9_]+$ ]]; then
echo '{"decision": "deny", "reason": "Invalid tool name"}' >&2
exit 2
fi
```
### Path Safety
Check for path traversal and sensitive files:
```bash
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
# Deny path traversal
if [[ "$file_path" == *".."* ]]; then
echo '{"decision": "deny", "reason": "Path traversal detected"}' >&2
exit 2
fi
# Deny sensitive files
if [[ "$file_path" == *".env"* ]]; then
echo '{"decision": "deny", "reason": "Sensitive file"}' >&2
exit 2
fi
```
See `examples/validate-write.sh` and `examples/validate-bash.sh` for complete examples.
### Quote All Variables
```bash
# GOOD: Quoted
echo "$file_path"
cd "$CLAUDE_PROJECT_DIR"
# BAD: Unquoted (injection risk)
echo $file_path
cd $CLAUDE_PROJECT_DIR
```
### Set Appropriate Timeouts
```json
{
"type": "command",
"command": "bash script.sh",
"timeout": 10
}
```
**Defaults:** Command hooks (60s), Prompt hooks (30s)
## Performance Considerations
### Parallel Execution
All matching hooks run **in parallel**:
```json
{
"PreToolUse": [
{
"matcher": "Write",
"hooks": [
{"type": "command", "command": "check1.sh"}, // Parallel
{"type": "command", "command": "check2.sh"}, // Parallel
{"type": "prompt", "prompt": "Validate..."} // Parallel
]
}
]
}
```
**Design implications:**
- Hooks don't see each other's output
- Non-deterministic ordering
- Design for independence
### Optimization
1. Use command hooks for quick deterministic checks
2. Use prompt hooks for complex reasoning
3. Cache validation results in temp files
4. Minimize I/O in hot paths
## Temporarily Active Hooks
Create hooks that activate conditionally by checking for a flag file or configuration:
**Pattern: Flag file activation**
```bash
#!/bin/bash
# Only active when flag file exists
FLAG_FILE="$CLAUDE_PROJECT_DIR/.enable-strict-validation"
if [ ! -f "$FLAG_FILE" ]; then
# Flag not present, skip validation
exit 0
fi
# Flag present, run validation
input=$(cat)
# ... validation logic ...
```
**Pattern: Configuration-based activation**
```bash
#!/bin/bash
# Check configuration for activation
CONFIG_FILE="$CLAUDE_PROJECT_DIR/.claude/plugin-config.json"
if [ -f "$CONFIG_FILE" ]; then
enabled=$(jq -r '.strictMode // false' "$CONFIG_FILE")
if [ "$enabled" != "true" ]; then
exit 0 # Not enabled, skip
fi
fi
# Enabled, run hook logic
input=$(cat)
# ... hook logic ...
```
**Use cases:**
- Enable strict validation only when needed
- Temporary debugging hooks
- Project-specific hook behavior
- Feature flags for hooks
**Best practice:** Document activation mechanism in plugin README so users know how to enable/disable temporary hooks.
## Hook Lifecycle and Limitations
### Hooks Load at Session Start
**Important:** Hooks are loaded when Claude Code session starts. Changes to hook configuration require restarting Claude Code.
**Cannot hot-swap hooks:**
- Editing `hooks/hooks.json` won't affect current session
- Adding new hook scripts won't be recognized
- Changing hook commands/prompts won't update
- Must restart Claude Code: exit and run `claude` again
**To test hook changes:**
1. Edit hook configuration or scripts
2. Exit Claude Code session
3. Restart: `claude` or `cc`
4. New hook configuration loads
5. Test hooks with `claude --debug`
### Hook Validation at Startup
Hooks are validated when Claude Code starts:
- Invalid JSON in hooks.json causes loading failure
- Missing scripts cause warnings
- Syntax errors reported in debug mode
Use `/hooks` command to review loaded hooks in current session.
## Debugging Hooks
### Enable Debug Mode
```bash
claude --debug
```
Look for hook registration, execution logs, input/output JSON, and timing information.
### Test Hook Scripts
Test command hooks directly:
```bash
echo '{"tool_name": "Write", "tool_input": {"file_path": "/test"}}' | \
bash ${CLAUDE_PLUGIN_ROOT}/scripts/validate.sh
echo "Exit code: $?"
```
### Validate JSON Output
Ensure hooks output valid JSON:
```bash
output=$(./your-hook.sh < test-input.json)
echo "$output" | jq .
```
## Quick Reference
### Hook Events Summary
| Event | When | Use For |
|-------|------|---------|
| PreToolUse | Before tool | Validation, modification |
| PostToolUse | After tool | Feedback, logging |
| UserPromptSubmit | User input | Context, validation |
| Stop | Agent stopping | Completeness check |
| SubagentStop | Subagent done | Task validation |
| SessionStart | Session begins | Context loading |
| SessionEnd | Session ends | Cleanup, logging |
| PreCompact | Before compact | Preserve context |
| Notification | User notified | Logging, reactions |
### Best Practices
**DO:**
- ✅ Use prompt-based hooks for complex logic
- ✅ Use ${CLAUDE_PLUGIN_ROOT} for portability
- ✅ Validate all inputs in command hooks
- ✅ Quote all bash variables
- ✅ Set appropriate timeouts
- ✅ Return structured JSON output
- ✅ Test hooks thoroughly
**DON'T:**
- ❌ Use hardcoded paths
- ❌ Trust user input without validation
- ❌ Create long-running hooks
- ❌ Rely on hook execution order
- ❌ Modify global state unpredictably
- ❌ Log sensitive information
## Additional Resources
### Reference Files
For detailed patterns and advanced techniques, consult:
- **`references/patterns.md`** - Common hook patterns (8+ proven patterns)
- **`references/migration.md`** - Migrating from basic to advanced hooks
- **`references/advanced.md`** - Advanced use cases and techniques
### Example Hook Scripts
Working examples in `examples/`:
- **`validate-write.sh`** - File write validation example
- **`validate-bash.sh`** - Bash command validation example
- **`load-context.sh`** - SessionStart context loading example
### Utility Scripts
Development tools in `scripts/`:
- **`validate-hook-schema.sh`** - Validate hooks.json structure and syntax
- **`test-hook.sh`** - Test hooks with sample input before deployment
- **`hook-linter.sh`** - Check hook scripts for common issues and best practices
### External Resources
- **Official Docs**: https://docs.claude.com/en/docs/claude-code/hooks
- **Examples**: See security-guidance plugin in marketplace
- **Testing**: Use `claude --debug` for detailed logs
- **Validation**: Use `jq` to validate hook JSON output
## Implementation Workflow
To implement hooks in a plugin:
1. Identify events to hook into (PreToolUse, Stop, SessionStart, etc.)
2. Decide between prompt-based (flexible) or command (deterministic) hooks
3. Write hook configuration in `hooks/hooks.json`
4. For command hooks, create hook scripts
5. Use ${CLAUDE_PLUGIN_ROOT} for all file references
6. Validate configuration with `scripts/validate-hook-schema.sh hooks/hooks.json`
7. Test hooks with `scripts/test-hook.sh` before deployment
8. Test in Claude Code with `claude --debug`
9. Document hooks in plugin README
Focus on prompt-based hooks for most use cases. Reserve command hooks for performance-critical or deterministic checks.


@@ -0,0 +1,55 @@
#!/bin/bash
# Example SessionStart hook for loading project context
# This script detects project type and sets environment variables
set -euo pipefail
# Navigate to project directory
cd "$CLAUDE_PROJECT_DIR" || exit 1
echo "Loading project context..."
# Detect project type and set environment
if [ -f "package.json" ]; then
echo "📦 Node.js project detected"
echo "export PROJECT_TYPE=nodejs" >> "$CLAUDE_ENV_FILE"
# Check if TypeScript
if [ -f "tsconfig.json" ]; then
echo "export USES_TYPESCRIPT=true" >> "$CLAUDE_ENV_FILE"
fi
elif [ -f "Cargo.toml" ]; then
echo "🦀 Rust project detected"
echo "export PROJECT_TYPE=rust" >> "$CLAUDE_ENV_FILE"
elif [ -f "go.mod" ]; then
echo "🐹 Go project detected"
echo "export PROJECT_TYPE=go" >> "$CLAUDE_ENV_FILE"
elif [ -f "pyproject.toml" ] || [ -f "setup.py" ]; then
echo "🐍 Python project detected"
echo "export PROJECT_TYPE=python" >> "$CLAUDE_ENV_FILE"
elif [ -f "pom.xml" ]; then
echo "☕ Java (Maven) project detected"
echo "export PROJECT_TYPE=java" >> "$CLAUDE_ENV_FILE"
echo "export BUILD_SYSTEM=maven" >> "$CLAUDE_ENV_FILE"
elif [ -f "build.gradle" ] || [ -f "build.gradle.kts" ]; then
echo "☕ Java/Kotlin (Gradle) project detected"
echo "export PROJECT_TYPE=java" >> "$CLAUDE_ENV_FILE"
echo "export BUILD_SYSTEM=gradle" >> "$CLAUDE_ENV_FILE"
else
echo "❓ Unknown project type"
echo "export PROJECT_TYPE=unknown" >> "$CLAUDE_ENV_FILE"
fi
# Check for CI configuration
if [ -f ".github/workflows" ] || [ -f ".gitlab-ci.yml" ] || [ -f ".circleci/config.yml" ]; then
echo "export HAS_CI=true" >> "$CLAUDE_ENV_FILE"
fi
echo "Project context loaded successfully"
exit 0


@@ -0,0 +1,43 @@
#!/bin/bash
# Example PreToolUse hook for validating Bash commands
# This script demonstrates bash command validation patterns
set -euo pipefail
# Read input from stdin
input=$(cat)
# Extract command
command=$(echo "$input" | jq -r '.tool_input.command // empty')
# Validate command exists
if [ -z "$command" ]; then
echo '{"continue": true}' # No command to validate
exit 0
fi
# Check for obviously safe commands (quick approval)
if [[ "$command" =~ ^(ls|pwd|echo|date|whoami)(\s|$) ]]; then
exit 0
fi
# Check for destructive operations
if [[ "$command" == *"rm -rf"* ]] || [[ "$command" == *"rm -fr"* ]]; then
echo '{"hookSpecificOutput": {"permissionDecision": "deny"}, "systemMessage": "Dangerous command detected: rm -rf"}' >&2
exit 2
fi
# Check for other dangerous commands
if [[ "$command" == *"dd if="* ]] || [[ "$command" == *"mkfs"* ]] || [[ "$command" == *"> /dev/"* ]]; then
echo '{"hookSpecificOutput": {"permissionDecision": "deny"}, "systemMessage": "Dangerous system operation detected"}' >&2
exit 2
fi
# Check for privilege escalation
if [[ "$command" == sudo* ]] || [[ "$command" == su* ]]; then
echo '{"hookSpecificOutput": {"permissionDecision": "ask"}, "systemMessage": "Command requires elevated privileges"}' >&2
exit 2
fi
# Approve the operation
exit 0


@@ -0,0 +1,38 @@
#!/bin/bash
# Example PreToolUse hook for validating Write/Edit operations
# This script demonstrates file write validation patterns
set -euo pipefail
# Read input from stdin
input=$(cat)
# Extract file path and content
file_path=$(echo "$input" | jq -r '.tool_input.file_path // empty')
# Validate path exists
if [ -z "$file_path" ]; then
echo '{"continue": true}' # No path to validate
exit 0
fi
# Check for path traversal
if [[ "$file_path" == *".."* ]]; then
echo '{"hookSpecificOutput": {"permissionDecision": "deny"}, "systemMessage": "Path traversal detected in: '"$file_path"'"}' >&2
exit 2
fi
# Check for system directories
if [[ "$file_path" == /etc/* ]] || [[ "$file_path" == /sys/* ]] || [[ "$file_path" == /usr/* ]]; then
echo '{"hookSpecificOutput": {"permissionDecision": "deny"}, "systemMessage": "Cannot write to system directory: '"$file_path"'"}' >&2
exit 2
fi
# Check for sensitive files
if [[ "$file_path" == *.env ]] || [[ "$file_path" == *secret* ]] || [[ "$file_path" == *credentials* ]]; then
echo '{"hookSpecificOutput": {"permissionDecision": "ask"}, "systemMessage": "Writing to potentially sensitive file: '"$file_path"'"}' >&2
exit 2
fi
# Approve the operation
exit 0


@@ -0,0 +1,479 @@
# Advanced Hook Use Cases
This reference covers advanced hook patterns and techniques for sophisticated automation workflows.
## Multi-Stage Validation
Combine command and prompt hooks for layered validation:
```json
{
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/quick-check.sh",
"timeout": 5
},
{
"type": "prompt",
"prompt": "Deep analysis of bash command: $TOOL_INPUT",
"timeout": 15
}
]
}
]
}
```
**Use case:** Fast deterministic checks alongside deeper intelligent analysis (matching hooks run in parallel, and a deny from either blocks the tool call)
**Example quick-check.sh:**
```bash
#!/bin/bash
input=$(cat)
command=$(echo "$input" | jq -r '.tool_input.command')
# Immediate approval for safe commands
if [[ "$command" =~ ^(ls|pwd|echo|date|whoami)$ ]]; then
exit 0
fi
# Let prompt hook handle complex cases
exit 0
```
The command hook returns a fast verdict on obviously safe commands, while the prompt hook performs the deeper analysis. Because matching hooks run in parallel rather than in sequence, each must reach a safe decision on its own.
## Conditional Hook Execution
Execute hooks based on environment or context:
```bash
#!/bin/bash
# Only run in CI environment
if [ -z "$CI" ]; then
echo '{"continue": true}' # Skip in non-CI
exit 0
fi
# Run validation logic in CI
input=$(cat)
# ... validation code ...
```
**Use cases:**
- Different behavior in CI vs local development
- Project-specific validation
- User-specific rules
**Example: Skip certain checks for trusted users:**
```bash
#!/bin/bash
# Skip detailed checks for admin users
if [ "$USER" = "admin" ]; then
exit 0
fi
# Full validation for other users
input=$(cat)
# ... validation code ...
```
## Hook Chaining via State
Share state between hooks using temporary files:
```bash
# Hook 1 (e.g., PreToolUse): Analyze and save state
#!/bin/bash
input=$(cat)
session_id=$(echo "$input" | jq -r '.session_id')
command=$(echo "$input" | jq -r '.tool_input.command')
# Analyze command (calculate_risk is a placeholder for your own scoring)
risk_level=$(calculate_risk "$command")
echo "$risk_level" > "/tmp/hook-state-$session_id"
exit 0
```
```bash
# Hook 2 (e.g., PostToolUse): Use saved state
#!/bin/bash
input=$(cat)
session_id=$(echo "$input" | jq -r '.session_id')
risk_level=$(cat "/tmp/hook-state-$session_id" 2>/dev/null || echo "unknown")
if [ "$risk_level" = "high" ]; then
echo "High risk operation detected" >&2
exit 2
fi
```
**Important:** This only works for sequential hook events (e.g., PreToolUse then PostToolUse), not parallel hooks. Each hook invocation runs as a separate process, so `$$` differs every time; key shared state on the `session_id` from the hook's stdin instead.
## Dynamic Hook Configuration
Modify hook behavior based on project configuration:
```bash
#!/bin/bash
cd "$CLAUDE_PROJECT_DIR" || exit 1
# Read project-specific config
if [ -f ".claude-hooks-config.json" ]; then
strict_mode=$(jq -r '.strict_mode' .claude-hooks-config.json)
if [ "$strict_mode" = "true" ]; then
# Apply strict validation
# ...
else
# Apply lenient validation
# ...
fi
fi
```
**Example .claude-hooks-config.json:**
```json
{
"strict_mode": true,
"allowed_commands": ["ls", "pwd", "grep"],
"forbidden_paths": ["/etc", "/sys"]
}
```
## Context-Aware Prompt Hooks
Use transcript and session context for intelligent decisions:
```json
{
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Review the full transcript at $TRANSCRIPT_PATH. Check: 1) Were tests run after code changes? 2) Did the build succeed? 3) Were all user questions answered? 4) Is there any unfinished work? Return 'approve' only if everything is complete."
}
]
}
]
}
```
The LLM can read the transcript file and make context-aware decisions.
## Performance Optimization
### Caching Validation Results
```bash
#!/bin/bash
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
cache_key=$(echo -n "$file_path" | md5sum | cut -d' ' -f1)
cache_file="/tmp/hook-cache-$cache_key"
# Check cache
if [ -f "$cache_file" ]; then
cache_age=$(($(date +%s) - $(stat -f%m "$cache_file" 2>/dev/null || stat -c%Y "$cache_file")))
if [ "$cache_age" -lt 300 ]; then # 5 minute cache
cat "$cache_file"
exit 0
fi
fi
# Perform validation
result='{"decision": "approve"}'
# Cache result
echo "$result" > "$cache_file"
echo "$result"
```
### Parallel Execution Optimization
Since hooks run in parallel, design them to be independent:
```json
{
"PreToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "bash check-size.sh", // Independent
"timeout": 2
},
{
"type": "command",
"command": "bash check-path.sh", // Independent
"timeout": 2
},
{
"type": "prompt",
"prompt": "Check content safety", // Independent
"timeout": 10
}
]
}
]
}
```
All three hooks run simultaneously, reducing total latency.
## Cross-Event Workflows
Coordinate hooks across different events:
**SessionStart - Set up tracking:**
```bash
#!/bin/bash
# Initialize session tracking, keyed by session ID ($$ would differ for
# every hook process, so the files could never be found again)
input=$(cat)
session_id=$(echo "$input" | jq -r '.session_id')
echo "0" > "/tmp/test-count-${session_id}"
echo "0" > "/tmp/build-count-${session_id}"
```
**PostToolUse - Track events:**
```bash
#!/bin/bash
input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name')
session_id=$(echo "$input" | jq -r '.session_id')
count_file="/tmp/test-count-${session_id}"
if [ "$tool_name" = "Bash" ]; then
  # Check the command that ran (PostToolUse input includes tool_input)
  command=$(echo "$input" | jq -r '.tool_input.command')
  if [[ "$command" == *"test"* ]]; then
    count=$(cat "$count_file" 2>/dev/null || echo "0")
    echo $((count + 1)) > "$count_file"
  fi
fi
```
**Stop - Verify based on tracking:**
```bash
#!/bin/bash
input=$(cat)
session_id=$(echo "$input" | jq -r '.session_id')
test_count=$(cat "/tmp/test-count-${session_id}" 2>/dev/null || echo "0")
if [ "$test_count" -eq 0 ]; then
  echo '{"decision": "block", "reason": "No tests were run"}' >&2
  exit 2
fi
```
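To avoid leaving stale files behind, a matching SessionEnd hook can clean up (a minimal sketch):
```bash
#!/bin/bash
# SessionEnd: remove per-session tracking files
input=$(cat)
session_id=$(echo "$input" | jq -r '.session_id')
rm -f "/tmp/test-count-${session_id}" "/tmp/build-count-${session_id}"
exit 0
```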
## Integration with External Systems
### Slack Notifications
```bash
#!/bin/bash
input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name')
# For illustration this hook always blocks; a real hook would decide first
decision="blocked"
# Send notification to Slack
curl -X POST "$SLACK_WEBHOOK" \
-H 'Content-Type: application/json' \
-d "{\"text\": \"Hook ${decision} ${tool_name} operation\"}" \
2>/dev/null
echo '{"decision": "deny"}' >&2
exit 2
```
### Database Logging
```bash
#!/bin/bash
input=$(cat)
# Escape single quotes so the JSON payload can't break out of the SQL string
escaped=$(echo "$input" | sed "s/'/''/g")
psql "$DATABASE_URL" -c "INSERT INTO hook_logs (event, data) VALUES ('PreToolUse', '$escaped')" \
  2>/dev/null
exit 0
```
### Metrics Collection
```bash
#!/bin/bash
input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name')
# Send metrics to monitoring system
echo "hook.pretooluse.${tool_name}:1|c" | nc -u -w1 statsd.local 8125
exit 0
```
## Security Patterns
### Rate Limiting
```bash
#!/bin/bash
input=$(cat)
command=$(echo "$input" | jq -r '.tool_input.command')
# Track command frequency per session ($$ would create a fresh file on
# every invocation, so the counter would never accumulate)
session_id=$(echo "$input" | jq -r '.session_id')
rate_file="/tmp/hook-rate-${session_id}"
current_minute=$(date +%Y%m%d%H%M)
if [ -f "$rate_file" ]; then
last_minute=$(head -1 "$rate_file")
count=$(tail -1 "$rate_file")
if [ "$current_minute" = "$last_minute" ]; then
if [ "$count" -gt 10 ]; then
echo '{"decision": "deny", "reason": "Rate limit exceeded"}' >&2
exit 2
fi
count=$((count + 1))
else
count=1
fi
else
count=1
fi
echo "$current_minute" > "$rate_file"
echo "$count" >> "$rate_file"
exit 0
```
### Audit Logging
```bash
#!/bin/bash
input=$(cat)
tool_name=$(echo "$input" | jq -r '.tool_name')
timestamp=$(date -Iseconds)
# Append to audit log (create the directory on first use)
mkdir -p "$HOME/.claude"
echo "$timestamp | $USER | $tool_name | $input" >> "$HOME/.claude/audit.log"
exit 0
```
### Secret Detection
```bash
#!/bin/bash
input=$(cat)
content=$(echo "$input" | jq -r '.tool_input.content')
# Check for common secret patterns
if echo "$content" | grep -qE "(api[_-]?key|password|secret|token).{0,20}['\"]?[A-Za-z0-9]{20,}"; then
echo '{"decision": "deny", "reason": "Potential secret detected in content"}' >&2
exit 2
fi
exit 0
```
## Testing Advanced Hooks
### Unit Testing Hook Scripts
```bash
#!/bin/bash
# test-hook.sh
# Test 1: Approve safe command
result=$(echo '{"tool_input": {"command": "ls"}}' | bash validate-bash.sh)
if [ $? -eq 0 ]; then
echo "✓ Test 1 passed"
else
echo "✗ Test 1 failed"
fi
# Test 2: Block dangerous command
result=$(echo '{"tool_input": {"command": "rm -rf /"}}' | bash validate-bash.sh)
if [ $? -eq 2 ]; then
echo "✓ Test 2 passed"
else
echo "✗ Test 2 failed"
fi
```
### Integration Testing
Create test scenarios that exercise the full hook workflow:
```bash
#!/bin/bash
# integration-test.sh
# Set up test environment
export CLAUDE_PROJECT_DIR="/tmp/test-project"
export CLAUDE_PLUGIN_ROOT="$(pwd)"
mkdir -p "$CLAUDE_PROJECT_DIR"
# Test SessionStart hook
echo '{}' | bash hooks/session-start.sh
if [ -f "/tmp/session-initialized" ]; then
echo "✓ SessionStart hook works"
else
echo "✗ SessionStart hook failed"
fi
# Clean up
rm -rf "$CLAUDE_PROJECT_DIR"
```
## Best Practices for Advanced Hooks
1. **Keep hooks independent**: Don't rely on execution order
2. **Use timeouts**: Set appropriate limits for each hook type
3. **Handle errors gracefully**: Provide clear error messages
4. **Document complexity**: Explain advanced patterns in README
5. **Test thoroughly**: Cover edge cases and failure modes
6. **Monitor performance**: Track hook execution time
7. **Version configuration**: Use version control for hook configs
8. **Provide escape hatches**: Allow users to bypass hooks when needed (see the sketch below)
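For point 8, an escape hatch can be as simple as honoring an opt-out environment variable (the variable name is illustrative):
```bash
#!/bin/bash
# Escape hatch: skip validation entirely when the user sets HOOK_BYPASS=1
if [ "${HOOK_BYPASS:-0}" = "1" ]; then
  exit 0
fi
input=$(cat)
# ... normal validation logic ...
```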
## Common Pitfalls
### ❌ Assuming Hook Order
```bash
# BAD: Assumes hooks run in specific order
# Hook 1 saves state, Hook 2 reads it
# This can fail because hooks run in parallel!
```
### ❌ Long-Running Hooks
```bash
# BAD: Hook takes 2 minutes to run
sleep 120
# This will timeout and block the workflow
```
### ❌ Uncaught Exceptions
```bash
# BAD: Script crashes on unexpected input
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
cat "$file_path" # Fails if file doesn't exist
```
### ✅ Proper Error Handling
```bash
# GOOD: Handles errors gracefully
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
if [ ! -f "$file_path" ]; then
  # Structured JSON must go to stdout for Claude Code to parse it
  echo '{"continue": true, "systemMessage": "File not found, skipping check"}'
  exit 0
fi
```
## Conclusion
Advanced hook patterns enable sophisticated automation while maintaining reliability and performance. Use these techniques when basic hooks are insufficient, but always prioritize simplicity and maintainability.

View File

@@ -0,0 +1,369 @@
# Migrating from Basic to Advanced Hooks
This guide shows how to migrate from basic command hooks to advanced prompt-based hooks for better maintainability and flexibility.
## Why Migrate?
Prompt-based hooks offer several advantages:
- **Natural language reasoning**: LLM understands context and intent
- **Better edge case handling**: Adapts to unexpected scenarios
- **No bash scripting required**: Simpler to write and maintain
- **More flexible validation**: Can handle complex logic without coding
## Migration Example: Bash Command Validation
### Before (Basic Command Hook)
**Configuration:**
```json
{
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "bash validate-bash.sh"
}
]
}
]
}
```
**Script (validate-bash.sh):**
```bash
#!/bin/bash
input=$(cat)
command=$(echo "$input" | jq -r '.tool_input.command')
# Hard-coded validation logic
if [[ "$command" == *"rm -rf"* ]]; then
echo "Dangerous command detected" >&2
exit 2
fi
```
**Problems:**
- Only checks for exact "rm -rf" pattern
- Doesn't catch variations like `rm -fr` or `rm -r -f`
- Misses other dangerous commands (`dd`, `mkfs`, etc.)
- No context awareness
- Requires bash scripting knowledge
### After (Advanced Prompt Hook)
**Configuration:**
```json
{
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "prompt",
"prompt": "Command: $TOOL_INPUT.command. Analyze for: 1) Destructive operations (rm -rf, dd, mkfs, etc) 2) Privilege escalation (sudo) 3) Network operations without user consent. Return 'approve' or 'deny' with explanation.",
"timeout": 15
}
]
}
]
}
```
**Benefits:**
- Catches all variations and patterns
- Understands intent, not just literal strings
- No script file needed
- Easy to extend with new criteria
- Context-aware decisions
- Natural language explanation in denial
## Migration Example: File Write Validation
### Before (Basic Command Hook)
**Configuration:**
```json
{
"PreToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "bash validate-write.sh"
}
]
}
]
}
```
**Script (validate-write.sh):**
```bash
#!/bin/bash
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
# Check for path traversal
if [[ "$file_path" == *".."* ]]; then
echo '{"decision": "deny", "reason": "Path traversal detected"}' >&2
exit 2
fi
# Check for system paths
if [[ "$file_path" == "/etc/"* ]] || [[ "$file_path" == "/sys/"* ]]; then
echo '{"decision": "deny", "reason": "System file"}' >&2
exit 2
fi
```
**Problems:**
- Hard-coded path patterns
- Doesn't understand symlinks
- Missing edge cases (e.g., `/etc` vs `/etc/`)
- No consideration of file content
### After (Advanced Prompt Hook)
**Configuration:**
```json
{
"PreToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "prompt",
"prompt": "File path: $TOOL_INPUT.file_path. Content preview: $TOOL_INPUT.content (first 200 chars). Verify: 1) Not system directories (/etc, /sys, /usr) 2) Not credentials (.env, tokens, secrets) 3) No path traversal 4) Content doesn't expose secrets. Return 'approve' or 'deny'."
}
]
}
]
}
```
**Benefits:**
- Context-aware (considers content too)
- Handles symlinks and edge cases
- Natural understanding of "system directories"
- Can detect secrets in content
- Easy to extend criteria
## When to Keep Command Hooks
Command hooks still have their place:
### 1. Deterministic Performance Checks
```bash
#!/bin/bash
# Check file size quickly
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
size=$(stat -f%z "$file_path" 2>/dev/null || stat -c%s "$file_path" 2>/dev/null)
if [ "$size" -gt 10000000 ]; then
echo '{"decision": "deny", "reason": "File too large"}' >&2
exit 2
fi
```
**Use command hooks when:** Validation is purely mathematical or deterministic.
### 2. External Tool Integration
```bash
#!/bin/bash
# Run security scanner
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
scan_result=$(security-scanner "$file_path")
if [ "$?" -ne 0 ]; then
echo "Security scan failed: $scan_result" >&2
exit 2
fi
```
**Use command hooks when:** Integrating with external tools that provide yes/no answers.
### 3. Very Fast Checks (< 50ms)
```bash
#!/bin/bash
# Quick regex check (matches bare commands with no arguments)
input=$(cat)
command=$(echo "$input" | jq -r '.tool_input.command')
if [[ "$command" =~ ^(ls|pwd|echo)$ ]]; then
exit 0 # Safe commands
fi
```
**Use command hooks when:** Performance is critical and logic is simple.
## Hybrid Approach
Combine both for multi-stage validation:
```json
{
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/quick-check.sh",
"timeout": 5
},
{
"type": "prompt",
"prompt": "Deep analysis of bash command: $TOOL_INPUT",
"timeout": 15
}
]
}
]
}
```
The command hook does fast deterministic checks, while the prompt hook handles complex reasoning.
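A minimal `quick-check.sh` for the deterministic stage might look like this (the blocked patterns are illustrative):
```bash
#!/bin/bash
# Stage 1: fast deterministic checks; deny unambiguous violations immediately
input=$(cat)
command=$(echo "$input" | jq -r '.tool_input.command')
if [[ "$command" == *"rm -rf /"* ]] || [[ "$command" == *"mkfs"* ]]; then
  echo '{"decision": "deny", "reason": "Blocked by deterministic check"}' >&2
  exit 2
fi
exit 0
```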
## Migration Checklist
When migrating hooks:
- [ ] Identify the validation logic in the command hook
- [ ] Convert hard-coded patterns to natural language criteria
- [ ] Test with edge cases the old hook missed
- [ ] Verify LLM understands the intent
- [ ] Set appropriate timeout (usually 15-30s for prompt hooks)
- [ ] Document the new hook in README
- [ ] Remove or archive old script files
## Migration Tips
1. **Start with one hook**: Don't migrate everything at once
2. **Test thoroughly**: Verify prompt hook catches what command hook caught
3. **Look for improvements**: Use migration as opportunity to enhance validation
4. **Keep scripts for reference**: Archive old scripts in case you need to reference the logic
5. **Document reasoning**: Explain why prompt hook is better in README
## Complete Migration Example
### Original Plugin Structure
```
my-plugin/
├── .claude-plugin/plugin.json
├── hooks/hooks.json
└── scripts/
├── validate-bash.sh
├── validate-write.sh
└── check-tests.sh
```
### After Migration
```
my-plugin/
├── .claude-plugin/plugin.json
├── hooks/hooks.json # Now uses prompt hooks
└── scripts/ # Archive or delete
└── archive/
├── validate-bash.sh
├── validate-write.sh
└── check-tests.sh
```
### Updated hooks.json
```json
{
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "prompt",
"prompt": "Validate bash command safety: destructive ops, privilege escalation, network access"
}
]
},
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "prompt",
"prompt": "Validate file write safety: system paths, credentials, path traversal, content secrets"
}
]
}
],
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Verify tests were run if code was modified"
}
]
}
]
}
```
**Result:** Simpler, more maintainable, more powerful.
## Common Migration Patterns
### Pattern: String Contains → Natural Language
**Before:**
```bash
if [[ "$command" == *"sudo"* ]]; then
echo "Privilege escalation" >&2
exit 2
fi
```
**After:**
```
"Check for privilege escalation (sudo, su, etc)"
```
### Pattern: Regex → Intent
**Before:**
```bash
if [[ "$file" =~ \.(env|secret|key|token)$ ]]; then
echo "Credential file" >&2
exit 2
fi
```
**After:**
```
"Verify not writing to credential files (.env, secrets, keys, tokens)"
```
### Pattern: Multiple Conditions → Criteria List
**Before:**
```bash
if [ condition1 ] || [ condition2 ] || [ condition3 ]; then
echo "Invalid" >&2
exit 2
fi
```
**After:**
```
"Check: 1) condition1 2) condition2 3) condition3. Deny if any fail."
```
## Conclusion
Migrating to prompt-based hooks makes plugins more maintainable, flexible, and powerful. Reserve command hooks for deterministic checks and external tool integration.

View File

@@ -0,0 +1,346 @@
# Common Hook Patterns
This reference provides common, proven patterns for implementing Claude Code hooks. Use these patterns as starting points for typical hook use cases.
## Pattern 1: Security Validation
Block dangerous file writes using prompt-based hooks:
```json
{
"PreToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "prompt",
"prompt": "File path: $TOOL_INPUT.file_path. Verify: 1) Not in /etc or system directories 2) Not .env or credentials 3) Path doesn't contain '..' traversal. Return 'approve' or 'deny'."
}
]
}
]
}
```
**Use for:** Preventing writes to sensitive files or system directories.
## Pattern 2: Test Enforcement
Ensure tests run before stopping:
```json
{
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Review transcript. If code was modified (Write/Edit tools used), verify tests were executed. If no tests were run, block with reason 'Tests must be run after code changes'."
}
]
}
]
}
```
**Use for:** Enforcing quality standards and preventing incomplete work.
## Pattern 3: Context Loading
Load project-specific context at session start:
```json
{
"SessionStart": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/load-context.sh"
}
]
}
]
}
```
**Example script (load-context.sh):**
```bash
#!/bin/bash
cd "$CLAUDE_PROJECT_DIR" || exit 1
# Detect project type
if [ -f "package.json" ]; then
echo "📦 Node.js project detected"
echo "export PROJECT_TYPE=nodejs" >> "$CLAUDE_ENV_FILE"
elif [ -f "Cargo.toml" ]; then
echo "🦀 Rust project detected"
echo "export PROJECT_TYPE=rust" >> "$CLAUDE_ENV_FILE"
fi
```
**Use for:** Automatically detecting and configuring project-specific settings.
## Pattern 4: Notification Logging
Log all notifications for audit or analysis:
```json
{
"Notification": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/log-notification.sh"
}
]
}
]
}
```
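**Example script (log-notification.sh)** (a minimal sketch; the log location is an assumption):
```bash
#!/bin/bash
# Append each notification payload to a local log file
input=$(cat)
timestamp=$(date -Iseconds)
mkdir -p "$HOME/.claude"
echo "$timestamp | $input" >> "$HOME/.claude/notifications.log"
exit 0
```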
**Use for:** Tracking user notifications or integration with external logging systems.
## Pattern 5: MCP Tool Monitoring
Monitor and validate MCP tool usage:
```json
{
"PreToolUse": [
{
"matcher": "mcp__.*__delete.*",
"hooks": [
{
"type": "prompt",
"prompt": "Deletion operation detected. Verify: Is this deletion intentional? Can it be undone? Are there backups? Return 'approve' only if safe."
}
]
}
]
}
```
**Use for:** Protecting against destructive MCP operations.
## Pattern 6: Build Verification
Ensure project builds after code changes:
```json
{
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Check if code was modified. If Write/Edit tools were used, verify the project was built (npm run build, cargo build, etc). If not built, block and request build."
}
]
}
]
}
```
**Use for:** Catching build errors before committing or stopping work.
## Pattern 7: Permission Confirmation
Ask user before dangerous operations:
```json
{
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "prompt",
"prompt": "Command: $TOOL_INPUT.command. If command contains 'rm', 'delete', 'drop', or other destructive operations, return 'ask' to confirm with user. Otherwise 'approve'."
}
]
}
]
}
```
**Use for:** User confirmation on potentially destructive commands.
## Pattern 8: Code Quality Checks
Run linters or formatters on file edits:
```json
{
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/check-quality.sh"
}
]
}
]
}
```
**Example script (check-quality.sh):**
```bash
#!/bin/bash
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
# Run linter if applicable
if [[ "$file_path" == *.js ]] || [[ "$file_path" == *.ts ]]; then
npx eslint "$file_path" 2>&1 || true
fi
```
**Use for:** Automatic code quality enforcement.
## Pattern Combinations
Combine multiple patterns for comprehensive protection:
```json
{
"PreToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "prompt",
"prompt": "Validate file write safety"
}
]
},
{
"matcher": "Bash",
"hooks": [
{
"type": "prompt",
"prompt": "Validate bash command safety"
}
]
}
],
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "prompt",
"prompt": "Verify tests run and build succeeded"
}
]
}
],
"SessionStart": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/load-context.sh"
}
]
}
]
}
```
This provides multi-layered protection and automation.
## Pattern 9: Temporarily Active Hooks
Create hooks that only run when explicitly enabled via flag files:
```bash
#!/bin/bash
# Hook only active when flag file exists
FLAG_FILE="$CLAUDE_PROJECT_DIR/.enable-security-scan"
if [ ! -f "$FLAG_FILE" ]; then
# Quick exit when disabled
exit 0
fi
# Flag present, run validation
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
# Run security scan
security-scanner "$file_path"
```
**Activation:**
```bash
# Enable the hook
touch .enable-security-scan
# Disable the hook
rm .enable-security-scan
```
**Use for:**
- Temporary debugging hooks
- Feature flags for development
- Project-specific validation that's opt-in
- Performance-intensive checks only when needed
**Note:** Because the script checks for the flag file on every invocation, changes take effect on the next hook run. Restarting Claude Code is only needed when the hook registration in `hooks.json` itself changes.
## Pattern 10: Configuration-Driven Hooks
Use JSON configuration to control hook behavior:
```bash
#!/bin/bash
CONFIG_FILE="$CLAUDE_PROJECT_DIR/.claude/my-plugin.local.json"
# Read configuration
if [ -f "$CONFIG_FILE" ]; then
strict_mode=$(jq -r '.strictMode // false' "$CONFIG_FILE")
max_file_size=$(jq -r '.maxFileSize // 1000000' "$CONFIG_FILE")
else
# Defaults
strict_mode=false
max_file_size=1000000
fi
# Skip if not in strict mode
if [ "$strict_mode" != "true" ]; then
exit 0
fi
# Apply configured limits
input=$(cat)
file_size=$(echo "$input" | jq -r '.tool_input.content | length')
if [ "$file_size" -gt "$max_file_size" ]; then
echo '{"decision": "deny", "reason": "File exceeds configured size limit"}' >&2
exit 2
fi
```
**Configuration file (.claude/my-plugin.local.json):**
```json
{
"strictMode": true,
"maxFileSize": 500000,
"allowedPaths": ["/tmp", "/home/user/projects"]
}
```
**Use for:**
- User-configurable hook behavior
- Per-project settings
- Team-specific rules
- Dynamic validation criteria

View File

@@ -0,0 +1,164 @@
# Hook Development Utility Scripts
These scripts help validate, test, and lint hook implementations before deployment.
## validate-hook-schema.sh
Validates `hooks.json` configuration files for correct structure and common issues.
**Usage:**
```bash
./validate-hook-schema.sh path/to/hooks.json
```
**Checks:**
- Valid JSON syntax
- Required fields present
- Valid hook event names
- Proper hook types (command/prompt)
- Timeout values in valid ranges
- Hardcoded path detection
- Prompt hook event compatibility
**Example:**
```bash
cd my-plugin
./validate-hook-schema.sh hooks/hooks.json
```
## test-hook.sh
Tests individual hook scripts with sample input before deploying to Claude Code.
**Usage:**
```bash
./test-hook.sh [options] <hook-script> <test-input.json>
```
**Options:**
- `-v, --verbose` - Show detailed execution information
- `-t, --timeout N` - Set timeout in seconds (default: 60)
- `--create-sample <event-type>` - Generate sample test input
**Example:**
```bash
# Create sample test input
./test-hook.sh --create-sample PreToolUse > test-input.json
# Test a hook script
./test-hook.sh my-hook.sh test-input.json
# Test with verbose output and custom timeout
./test-hook.sh -v -t 30 my-hook.sh test-input.json
```
**Features:**
- Sets up proper environment variables (CLAUDE_PROJECT_DIR, CLAUDE_PLUGIN_ROOT)
- Measures execution time
- Validates output JSON
- Shows exit codes and their meanings
- Captures environment file output
## hook-linter.sh
Checks hook scripts for common issues and best practices violations.
**Usage:**
```bash
./hook-linter.sh <hook-script.sh> [hook-script2.sh ...]
```
**Checks:**
- Shebang presence
- `set -euo pipefail` usage
- Stdin input reading
- Proper error handling
- Variable quoting (injection prevention)
- Exit code usage
- Hardcoded paths
- Long-running code detection
- Error output to stderr
- Input validation
**Example:**
```bash
# Lint single script
./hook-linter.sh ../examples/validate-write.sh
# Lint multiple scripts
./hook-linter.sh ../examples/*.sh
```
## Typical Workflow
1. **Write your hook script**
```bash
vim my-plugin/scripts/my-hook.sh
```
2. **Lint the script**
```bash
./hook-linter.sh my-plugin/scripts/my-hook.sh
```
3. **Create test input**
```bash
./test-hook.sh --create-sample PreToolUse > test-input.json
# Edit test-input.json as needed
```
4. **Test the hook**
```bash
./test-hook.sh -v my-plugin/scripts/my-hook.sh test-input.json
```
5. **Add to hooks.json**
```bash
# Edit my-plugin/hooks/hooks.json (see the example entry after this list)
```
6. **Validate configuration**
```bash
./validate-hook-schema.sh my-plugin/hooks/hooks.json
```
7. **Test in Claude Code**
```bash
claude --debug
```
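For step 5, a minimal `hooks.json` entry might look like this (event, matcher, and script path are illustrative):
```json
{
  "PreToolUse": [
    {
      "matcher": "Bash",
      "hooks": [
        {
          "type": "command",
          "command": "bash ${CLAUDE_PLUGIN_ROOT}/scripts/my-hook.sh",
          "timeout": 10
        }
      ]
    }
  ]
}
```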
## Tips
- Always test hooks before deploying to avoid breaking user workflows
- Use verbose mode (`-v`) to debug hook behavior
- Check the linter output for security and best practice issues
- Validate hooks.json after any changes
- Create different test inputs for various scenarios (safe operations, dangerous operations, edge cases)
## Common Issues
### Hook doesn't execute
Check:
- Script has shebang (`#!/bin/bash`)
- Script is executable (`chmod +x`)
- Path in hooks.json is correct (use `${CLAUDE_PLUGIN_ROOT}`)
### Hook times out
- Reduce timeout in hooks.json
- Optimize hook script performance
- Remove long-running operations
### Hook fails silently
- Check exit codes (should be 0 or 2)
- Ensure errors go to stderr (`>&2`)
- Validate JSON output structure
### Injection vulnerabilities
- Always quote variables: `"$variable"`
- Use `set -euo pipefail`
- Validate all input fields
- Run the linter to catch issues
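For example, unquoted expansion word-splits and globs untrusted input, while the quoted form passes it as a single argument:
```bash
#!/bin/bash
input=$(cat)
file_path=$(echo "$input" | jq -r '.tool_input.file_path')
wc -l $file_path       # Unsafe: splits on spaces and expands globs
wc -l -- "$file_path"  # Safe: single argument; -- stops option parsing
```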

View File

@@ -0,0 +1,153 @@
#!/bin/bash
# Hook Linter
# Checks hook scripts for common issues and best practices
set -euo pipefail
# Usage
if [ $# -eq 0 ]; then
echo "Usage: $0 <hook-script.sh> [hook-script2.sh ...]"
echo ""
echo "Checks hook scripts for:"
echo " - Shebang presence"
echo " - set -euo pipefail usage"
echo " - Input reading from stdin"
echo " - Proper error handling"
echo " - Variable quoting"
echo " - Exit code usage"
echo " - Hardcoded paths"
echo " - Timeout considerations"
exit 1
fi
check_script() {
local script="$1"
local warnings=0
local errors=0
echo "🔍 Linting: $script"
echo ""
if [ ! -f "$script" ]; then
echo "❌ Error: File not found"
return 1
fi
# Check 1: Executable
if [ ! -x "$script" ]; then
echo "⚠️ Not executable (chmod +x $script)"
    # Plain assignment: ((warnings++)) returns status 1 when the count is 0,
    # which would abort the script under 'set -e'
    warnings=$((warnings + 1))
fi
# Check 2: Shebang
first_line=$(head -1 "$script")
if [[ ! "$first_line" =~ ^#!/ ]]; then
echo "❌ Missing shebang (#!/bin/bash)"
    errors=$((errors + 1))
fi
# Check 3: set -euo pipefail
if ! grep -q "set -euo pipefail" "$script"; then
echo "⚠️ Missing 'set -euo pipefail' (recommended for safety)"
    warnings=$((warnings + 1))
fi
# Check 4: Reads from stdin
if ! grep -q "cat\|read" "$script"; then
echo "⚠️ Doesn't appear to read input from stdin"
    warnings=$((warnings + 1))
fi
# Check 5: Uses jq for JSON parsing
if grep -q "tool_input\|tool_name" "$script" && ! grep -q "jq" "$script"; then
echo "⚠️ Parses hook input but doesn't use jq"
    warnings=$((warnings + 1))
fi
# Check 6: Unquoted variables
if grep -E '\$[A-Za-z_][A-Za-z0-9_]*[^"]' "$script" | grep -v '#' | grep -q .; then
echo "⚠️ Potentially unquoted variables detected (injection risk)"
echo " Always use double quotes: \"\$variable\" not \$variable"
    warnings=$((warnings + 1))
fi
# Check 7: Hardcoded paths
if grep -E '^[^#]*/home/|^[^#]*/usr/|^[^#]*/opt/' "$script" | grep -q .; then
echo "⚠️ Hardcoded absolute paths detected"
echo " Use \$CLAUDE_PROJECT_DIR or \$CLAUDE_PLUGIN_ROOT"
    warnings=$((warnings + 1))
fi
# Check 8: Uses CLAUDE_PLUGIN_ROOT
if ! grep -q "CLAUDE_PLUGIN_ROOT\|CLAUDE_PROJECT_DIR" "$script"; then
echo "💡 Tip: Use \$CLAUDE_PLUGIN_ROOT for plugin-relative paths"
fi
# Check 9: Exit codes
if ! grep -q "exit 0\|exit 2" "$script"; then
echo "⚠️ No explicit exit codes (should exit 0 or 2)"
    warnings=$((warnings + 1))
fi
# Check 10: JSON output for decision hooks
if grep -q "PreToolUse\|Stop" "$script"; then
if ! grep -q "permissionDecision\|decision" "$script"; then
echo "💡 Tip: PreToolUse/Stop hooks should output decision JSON"
fi
fi
# Check 11: Long-running commands
if grep -E 'sleep [0-9]{3,}|while true' "$script" | grep -v '#' | grep -q .; then
echo "⚠️ Potentially long-running code detected"
echo " Hooks should complete quickly (< 60s)"
    warnings=$((warnings + 1))
fi
# Check 12: Error messages to stderr
if grep -q 'echo.*".*error\|Error\|denied\|Denied' "$script"; then
if ! grep -q '>&2' "$script"; then
echo "⚠️ Error messages should be written to stderr (>&2)"
      warnings=$((warnings + 1))
fi
fi
# Check 13: Input validation
if ! grep -q "if.*empty\|if.*null\|if.*-z" "$script"; then
echo "💡 Tip: Consider validating input fields aren't empty"
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $errors -eq 0 ] && [ $warnings -eq 0 ]; then
echo "✅ No issues found"
return 0
elif [ $errors -eq 0 ]; then
echo "⚠️ Found $warnings warning(s)"
return 0
else
echo "❌ Found $errors error(s) and $warnings warning(s)"
return 1
fi
}
echo "🔎 Hook Script Linter"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
total_errors=0
for script in "$@"; do
if ! check_script "$script"; then
    # ((total_errors++)) would return status 1 when the count is 0, tripping set -e
    total_errors=$((total_errors + 1))
fi
echo ""
done
if [ $total_errors -eq 0 ]; then
echo "✅ All scripts passed linting"
exit 0
else
echo "$total_errors script(s) had errors"
exit 1
fi

View File

@@ -0,0 +1,252 @@
#!/bin/bash
# Hook Testing Helper
# Tests a hook with sample input and shows output
set -euo pipefail
# Usage
show_usage() {
echo "Usage: $0 [options] <hook-script> <test-input.json>"
echo ""
echo "Options:"
echo " -h, --help Show this help message"
echo " -v, --verbose Show detailed execution information"
echo " -t, --timeout N Set timeout in seconds (default: 60)"
echo ""
echo "Examples:"
echo " $0 validate-bash.sh test-input.json"
echo " $0 -v -t 30 validate-write.sh write-input.json"
echo ""
echo "Creates sample test input with:"
echo " $0 --create-sample <event-type>"
exit 0
}
# Create sample input
create_sample() {
event_type="$1"
case "$event_type" in
PreToolUse)
cat <<'EOF'
{
"session_id": "test-session",
"transcript_path": "/tmp/transcript.txt",
"cwd": "/tmp/test-project",
"permission_mode": "ask",
"hook_event_name": "PreToolUse",
"tool_name": "Write",
"tool_input": {
"file_path": "/tmp/test.txt",
"content": "Test content"
}
}
EOF
;;
PostToolUse)
cat <<'EOF'
{
"session_id": "test-session",
"transcript_path": "/tmp/transcript.txt",
"cwd": "/tmp/test-project",
"permission_mode": "ask",
"hook_event_name": "PostToolUse",
"tool_name": "Bash",
"tool_result": "Command executed successfully"
}
EOF
;;
Stop|SubagentStop)
cat <<'EOF'
{
"session_id": "test-session",
"transcript_path": "/tmp/transcript.txt",
"cwd": "/tmp/test-project",
"permission_mode": "ask",
"hook_event_name": "Stop",
"reason": "Task appears complete"
}
EOF
;;
UserPromptSubmit)
cat <<'EOF'
{
"session_id": "test-session",
"transcript_path": "/tmp/transcript.txt",
"cwd": "/tmp/test-project",
"permission_mode": "ask",
"hook_event_name": "UserPromptSubmit",
"user_prompt": "Test user prompt"
}
EOF
;;
SessionStart|SessionEnd)
cat <<'EOF'
{
"session_id": "test-session",
"transcript_path": "/tmp/transcript.txt",
"cwd": "/tmp/test-project",
"permission_mode": "ask",
"hook_event_name": "SessionStart"
}
EOF
;;
*)
echo "Unknown event type: $event_type"
echo "Valid types: PreToolUse, PostToolUse, Stop, SubagentStop, UserPromptSubmit, SessionStart, SessionEnd"
exit 1
;;
esac
}
# Parse arguments
VERBOSE=false
TIMEOUT=60
while [ $# -gt 0 ]; do
case "$1" in
-h|--help)
show_usage
;;
-v|--verbose)
VERBOSE=true
shift
;;
-t|--timeout)
TIMEOUT="$2"
shift 2
;;
--create-sample)
create_sample "$2"
exit 0
;;
*)
break
;;
esac
done
if [ $# -ne 2 ]; then
echo "Error: Missing required arguments"
echo ""
show_usage
fi
HOOK_SCRIPT="$1"
TEST_INPUT="$2"
# Validate inputs
if [ ! -f "$HOOK_SCRIPT" ]; then
echo "❌ Error: Hook script not found: $HOOK_SCRIPT"
exit 1
fi
if [ ! -x "$HOOK_SCRIPT" ]; then
echo "⚠️ Warning: Hook script is not executable. Attempting to run with bash..."
HOOK_SCRIPT="bash $HOOK_SCRIPT"
fi
if [ ! -f "$TEST_INPUT" ]; then
echo "❌ Error: Test input not found: $TEST_INPUT"
exit 1
fi
# Validate test input JSON
if ! jq empty "$TEST_INPUT" 2>/dev/null; then
echo "❌ Error: Test input is not valid JSON"
exit 1
fi
echo "🧪 Testing hook: $HOOK_SCRIPT"
echo "📥 Input: $TEST_INPUT"
echo ""
if [ "$VERBOSE" = true ]; then
echo "Input JSON:"
jq . "$TEST_INPUT"
echo ""
fi
# Set up environment
export CLAUDE_PROJECT_DIR="${CLAUDE_PROJECT_DIR:-/tmp/test-project}"
export CLAUDE_PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(pwd)}"
export CLAUDE_ENV_FILE="${CLAUDE_ENV_FILE:-/tmp/test-env-$$}"
if [ "$VERBOSE" = true ]; then
echo "Environment:"
echo " CLAUDE_PROJECT_DIR=$CLAUDE_PROJECT_DIR"
echo " CLAUDE_PLUGIN_ROOT=$CLAUDE_PLUGIN_ROOT"
echo " CLAUDE_ENV_FILE=$CLAUDE_ENV_FILE"
echo ""
fi
# Run the hook
echo "▶️ Running hook (timeout: ${TIMEOUT}s)..."
echo ""
start_time=$(date +%s)
set +e
output=$(timeout "$TIMEOUT" bash -c "cat '$TEST_INPUT' | $HOOK_SCRIPT" 2>&1)
exit_code=$?
set -e
end_time=$(date +%s)
duration=$((end_time - start_time))
# Analyze results
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Results:"
echo ""
echo "Exit Code: $exit_code"
echo "Duration: ${duration}s"
echo ""
case $exit_code in
0)
echo "✅ Hook approved/succeeded"
;;
2)
echo "🚫 Hook blocked/denied"
;;
124)
echo "⏱️ Hook timed out after ${TIMEOUT}s"
;;
*)
echo "⚠️ Hook returned unexpected exit code: $exit_code"
;;
esac
echo ""
echo "Output:"
if [ -n "$output" ]; then
echo "$output"
echo ""
# Try to parse as JSON
if echo "$output" | jq empty 2>/dev/null; then
echo "Parsed JSON output:"
echo "$output" | jq .
fi
else
echo "(no output)"
fi
# Check for environment file
if [ -f "$CLAUDE_ENV_FILE" ]; then
echo ""
echo "Environment file created:"
cat "$CLAUDE_ENV_FILE"
rm -f "$CLAUDE_ENV_FILE"
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $exit_code -eq 0 ] || [ $exit_code -eq 2 ]; then
echo "✅ Test completed successfully"
exit 0
else
echo "❌ Test failed"
exit 1
fi

View File

@@ -0,0 +1,159 @@
#!/bin/bash
# Hook Schema Validator
# Validates hooks.json structure and checks for common issues
set -euo pipefail
# Usage
if [ $# -eq 0 ]; then
echo "Usage: $0 <path/to/hooks.json>"
echo ""
echo "Validates hook configuration file for:"
echo " - Valid JSON syntax"
echo " - Required fields"
echo " - Hook type validity"
echo " - Matcher patterns"
echo " - Timeout ranges"
exit 1
fi
HOOKS_FILE="$1"
if [ ! -f "$HOOKS_FILE" ]; then
echo "❌ Error: File not found: $HOOKS_FILE"
exit 1
fi
echo "🔍 Validating hooks configuration: $HOOKS_FILE"
echo ""
# Check 1: Valid JSON
echo "Checking JSON syntax..."
if ! jq empty "$HOOKS_FILE" 2>/dev/null; then
echo "❌ Invalid JSON syntax"
exit 1
fi
echo "✅ Valid JSON"
# Check 2: Root structure
echo ""
echo "Checking root structure..."
VALID_EVENTS=("PreToolUse" "PostToolUse" "UserPromptSubmit" "Stop" "SubagentStop" "SessionStart" "SessionEnd" "PreCompact" "Notification")
for event in $(jq -r 'keys[]' "$HOOKS_FILE"); do
found=false
for valid_event in "${VALID_EVENTS[@]}"; do
if [ "$event" = "$valid_event" ]; then
found=true
break
fi
done
if [ "$found" = false ]; then
echo "⚠️ Unknown event type: $event"
fi
done
echo "✅ Root structure valid"
# Check 3: Validate each hook
echo ""
echo "Validating individual hooks..."
error_count=0
warning_count=0
for event in $(jq -r 'keys[]' "$HOOKS_FILE"); do
hook_count=$(jq -r ".\"$event\" | length" "$HOOKS_FILE")
for ((i=0; i<hook_count; i++)); do
# Check matcher exists
matcher=$(jq -r ".\"$event\"[$i].matcher // empty" "$HOOKS_FILE")
if [ -z "$matcher" ]; then
      echo "❌ $event[$i]: Missing 'matcher' field"
      # Plain assignment: ((error_count++)) returns status 1 when the count
      # is 0, which aborts the script under 'set -e'
      error_count=$((error_count + 1))
continue
fi
# Check hooks array exists
hooks=$(jq -r ".\"$event\"[$i].hooks // empty" "$HOOKS_FILE")
if [ -z "$hooks" ] || [ "$hooks" = "null" ]; then
      echo "❌ $event[$i]: Missing 'hooks' array"
      error_count=$((error_count + 1))
continue
fi
# Validate each hook in the array
hook_array_count=$(jq -r ".\"$event\"[$i].hooks | length" "$HOOKS_FILE")
for ((j=0; j<hook_array_count; j++)); do
hook_type=$(jq -r ".\"$event\"[$i].hooks[$j].type // empty" "$HOOKS_FILE")
if [ -z "$hook_type" ]; then
        echo "❌ $event[$i].hooks[$j]: Missing 'type' field"
        error_count=$((error_count + 1))
continue
fi
if [ "$hook_type" != "command" ] && [ "$hook_type" != "prompt" ]; then
        echo "❌ $event[$i].hooks[$j]: Invalid type '$hook_type' (must be 'command' or 'prompt')"
        error_count=$((error_count + 1))
continue
fi
# Check type-specific fields
if [ "$hook_type" = "command" ]; then
command=$(jq -r ".\"$event\"[$i].hooks[$j].command // empty" "$HOOKS_FILE")
if [ -z "$command" ]; then
          echo "❌ $event[$i].hooks[$j]: Command hooks must have 'command' field"
          error_count=$((error_count + 1))
else
# Check for hardcoded paths
if [[ "$command" == /* ]] && [[ "$command" != *'${CLAUDE_PLUGIN_ROOT}'* ]]; then
echo "⚠️ $event[$i].hooks[$j]: Hardcoded absolute path detected. Consider using \${CLAUDE_PLUGIN_ROOT}"
            warning_count=$((warning_count + 1))
fi
fi
elif [ "$hook_type" = "prompt" ]; then
prompt=$(jq -r ".\"$event\"[$i].hooks[$j].prompt // empty" "$HOOKS_FILE")
if [ -z "$prompt" ]; then
          echo "❌ $event[$i].hooks[$j]: Prompt hooks must have 'prompt' field"
          error_count=$((error_count + 1))
fi
# Check if prompt-based hooks are used on supported events
if [ "$event" != "Stop" ] && [ "$event" != "SubagentStop" ] && [ "$event" != "UserPromptSubmit" ] && [ "$event" != "PreToolUse" ]; then
echo "⚠️ $event[$i].hooks[$j]: Prompt hooks may not be fully supported on $event (best on Stop, SubagentStop, UserPromptSubmit, PreToolUse)"
          warning_count=$((warning_count + 1))
fi
fi
# Check timeout
timeout=$(jq -r ".\"$event\"[$i].hooks[$j].timeout // empty" "$HOOKS_FILE")
if [ -n "$timeout" ] && [ "$timeout" != "null" ]; then
if ! [[ "$timeout" =~ ^[0-9]+$ ]]; then
          echo "❌ $event[$i].hooks[$j]: Timeout must be a number"
          error_count=$((error_count + 1))
elif [ "$timeout" -gt 600 ]; then
echo "⚠️ $event[$i].hooks[$j]: Timeout $timeout seconds is very high (max 600s)"
          warning_count=$((warning_count + 1))
elif [ "$timeout" -lt 5 ]; then
echo "⚠️ $event[$i].hooks[$j]: Timeout $timeout seconds is very low"
          warning_count=$((warning_count + 1))
fi
fi
done
done
done
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $error_count -eq 0 ] && [ $warning_count -eq 0 ]; then
echo "✅ All checks passed!"
exit 0
elif [ $error_count -eq 0 ]; then
echo "⚠️ Validation passed with $warning_count warning(s)"
exit 0
else
echo "❌ Validation failed with $error_count error(s) and $warning_count warning(s)"
exit 1
fi

View File

@@ -0,0 +1,554 @@
---
name: MCP Integration
description: This skill should be used when the user asks to "add MCP server", "integrate MCP", "configure MCP in plugin", "use .mcp.json", "set up Model Context Protocol", "connect external service", mentions "${CLAUDE_PLUGIN_ROOT} with MCP", or discusses MCP server types (SSE, stdio, HTTP, WebSocket). Provides comprehensive guidance for integrating Model Context Protocol servers into Claude Code plugins for external tool and service integration.
version: 0.1.0
---
# MCP Integration for Claude Code Plugins
## Overview
Model Context Protocol (MCP) enables Claude Code plugins to integrate with external services and APIs by providing structured tool access. Use MCP integration to expose external service capabilities as tools within Claude Code.
**Key capabilities:**
- Connect to external services (databases, APIs, file systems)
- Provide 10+ related tools from a single service
- Handle OAuth and complex authentication flows
- Bundle MCP servers with plugins for automatic setup
## MCP Server Configuration Methods
Plugins can bundle MCP servers in two ways:
### Method 1: Dedicated .mcp.json (Recommended)
Create `.mcp.json` at plugin root:
```json
{
"database-tools": {
"command": "${CLAUDE_PLUGIN_ROOT}/servers/db-server",
"args": ["--config", "${CLAUDE_PLUGIN_ROOT}/config.json"],
"env": {
"DB_URL": "${DB_URL}"
}
}
}
```
**Benefits:**
- Clear separation of concerns
- Easier to maintain
- Better for multiple servers
### Method 2: Inline in plugin.json
Add `mcpServers` field to plugin.json:
```json
{
"name": "my-plugin",
"version": "1.0.0",
"mcpServers": {
"plugin-api": {
"command": "${CLAUDE_PLUGIN_ROOT}/servers/api-server",
"args": ["--port", "8080"]
}
}
}
```
**Benefits:**
- Single configuration file
- Good for simple single-server plugins
## MCP Server Types
### stdio (Local Process)
Execute local MCP servers as child processes. Best for local tools and custom servers.
**Configuration:**
```json
{
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
"env": {
"LOG_LEVEL": "debug"
}
}
}
```
**Use cases:**
- File system access
- Local database connections
- Custom MCP servers
- NPM-packaged MCP servers
**Process management:**
- Claude Code spawns and manages the process
- Communicates via stdin/stdout
- Terminates when Claude Code exits
### SSE (Server-Sent Events)
Connect to hosted MCP servers with OAuth support. Best for cloud services.
**Configuration:**
```json
{
"asana": {
"type": "sse",
"url": "https://mcp.asana.com/sse"
}
}
```
**Use cases:**
- Official hosted MCP servers (Asana, GitHub, etc.)
- Cloud services with MCP endpoints
- OAuth-based authentication
- No local installation needed
**Authentication:**
- OAuth flows handled automatically
- User prompted on first use
- Tokens managed by Claude Code
### HTTP (REST API)
Connect to RESTful MCP servers with token authentication.
**Configuration:**
```json
{
"api-service": {
"type": "http",
"url": "https://api.example.com/mcp",
"headers": {
"Authorization": "Bearer ${API_TOKEN}",
"X-Custom-Header": "value"
}
}
}
```
**Use cases:**
- REST API-based MCP servers
- Token-based authentication
- Custom API backends
- Stateless interactions
### WebSocket (Real-time)
Connect to WebSocket MCP servers for real-time bidirectional communication.
**Configuration:**
```json
{
"realtime-service": {
"type": "ws",
"url": "wss://mcp.example.com/ws",
"headers": {
"Authorization": "Bearer ${TOKEN}"
}
}
}
```
**Use cases:**
- Real-time data streaming
- Persistent connections
- Push notifications from server
- Low-latency requirements
## Environment Variable Expansion
All MCP configurations support environment variable substitution:
**${CLAUDE_PLUGIN_ROOT}** - Plugin directory (always use for portability):
```json
{
"command": "${CLAUDE_PLUGIN_ROOT}/servers/my-server"
}
```
**User environment variables** - From user's shell:
```json
{
"env": {
"API_KEY": "${MY_API_KEY}",
"DATABASE_URL": "${DB_URL}"
}
}
```
**Best practice:** Document all required environment variables in plugin README.
## MCP Tool Naming
When MCP servers provide tools, they're automatically prefixed:
**Format:** `mcp__plugin_<plugin-name>_<server-name>__<tool-name>`
**Example:**
- Plugin: `asana`
- Server: `asana`
- Tool: `asana_create_task`
- **Full name:** `mcp__plugin_asana_asana__asana_create_task`
### Using MCP Tools in Commands
Pre-allow specific MCP tools in command frontmatter:
```markdown
---
allowed-tools: [
"mcp__plugin_asana_asana__asana_create_task",
"mcp__plugin_asana_asana__asana_search_tasks"
]
---
```
**Wildcard (use sparingly):**
```markdown
---
allowed-tools: ["mcp__plugin_asana_asana__*"]
---
```
**Best practice:** Pre-allow specific tools, not wildcards, for security.
## Lifecycle Management
**Automatic startup:**
- MCP servers start when plugin enables
- Connection established before first tool use
- Restart required for configuration changes
**Lifecycle:**
1. Plugin loads
2. MCP configuration parsed
3. Server process started (stdio) or connection established (SSE/HTTP/WS)
4. Tools discovered and registered
5. Tools available as `mcp__plugin_...__...`
**Viewing servers:**
Use `/mcp` command to see all servers including plugin-provided ones.
## Authentication Patterns
### OAuth (SSE/HTTP)
OAuth handled automatically by Claude Code:
```json
{
"type": "sse",
"url": "https://mcp.example.com/sse"
}
```
User authenticates in browser on first use. No additional configuration needed.
### Token-Based (Headers)
Static or environment variable tokens:
```json
{
"type": "http",
"url": "https://api.example.com",
"headers": {
"Authorization": "Bearer ${API_TOKEN}"
}
}
```
Document required environment variables in README.
### Environment Variables (stdio)
Pass configuration to MCP server:
```json
{
"command": "python",
"args": ["-m", "my_mcp_server"],
"env": {
"DATABASE_URL": "${DB_URL}",
"API_KEY": "${API_KEY}",
"LOG_LEVEL": "info"
}
}
```
## Integration Patterns
### Pattern 1: Simple Tool Wrapper
Commands use MCP tools with user interaction:
```markdown
# Command: create-item.md
---
allowed-tools: ["mcp__plugin_name_server__create_item"]
---
Steps:
1. Gather item details from user
2. Use mcp__plugin_name_server__create_item
3. Confirm creation
```
**Use for:** Adding validation or preprocessing before MCP calls.
### Pattern 2: Autonomous Agent
Agents use MCP tools autonomously:
```markdown
# Agent: data-analyzer.md
Analysis Process:
1. Query data via mcp__plugin_db_server__query
2. Process and analyze results
3. Generate insights report
```
**Use for:** Multi-step MCP workflows without user interaction.
### Pattern 3: Multi-Server Plugin
Integrate multiple MCP servers:
```json
{
"github": {
"type": "sse",
"url": "https://mcp.github.com/sse"
},
"jira": {
"type": "sse",
"url": "https://mcp.jira.com/sse"
}
}
```
**Use for:** Workflows spanning multiple services.
## Security Best Practices
### Use HTTPS/WSS
Always use secure connections:
```json
✅ "url": "https://mcp.example.com/sse"
❌ "url": "http://mcp.example.com/sse"
```
### Token Management
**DO:**
- ✅ Use environment variables for tokens
- ✅ Document required env vars in README
- ✅ Let OAuth flow handle authentication
**DON'T:**
- ❌ Hardcode tokens in configuration
- ❌ Commit tokens to git
- ❌ Share tokens in documentation
### Permission Scoping
Pre-allow only necessary MCP tools:
```markdown
✅ allowed-tools: [
"mcp__plugin_api_server__read_data",
"mcp__plugin_api_server__create_item"
]
❌ allowed-tools: ["mcp__plugin_api_server__*"]
```
## Error Handling
### Connection Failures
Handle MCP server unavailability:
- Provide fallback behavior in commands
- Inform user of connection issues
- Check server URL and configuration
### Tool Call Errors
Handle failed MCP operations:
- Validate inputs before calling MCP tools
- Provide clear error messages
- Check rate limiting and quotas
### Configuration Errors
Validate MCP configuration:
- Test server connectivity during development
- Validate JSON syntax
- Check required environment variables
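For instance, both checks can be done from the shell before loading the plugin (file name and variable are illustrative):
```bash
# Validate JSON syntax of the plugin's MCP configuration
jq empty .mcp.json && echo "Valid JSON"
# Confirm a required environment variable is set
[ -n "${API_TOKEN:-}" ] || echo "API_TOKEN is not set" >&2
```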
## Performance Considerations
### Lazy Loading
MCP servers connect on-demand:
- Not all servers connect at startup
- First tool use triggers connection
- Connection pooling managed automatically
### Batching
Batch similar requests when possible:
```
# Good: Single query with filters
tasks = search_tasks(project="X", assignee="me", limit=50)
# Avoid: Many individual queries
for id in task_ids:
task = get_task(id)
```
## Testing MCP Integration
### Local Testing
1. Configure MCP server in `.mcp.json`
2. Install plugin locally (`.claude-plugin/`)
3. Run `/mcp` to verify server appears
4. Test tool calls in commands
5. Check `claude --debug` logs for connection issues
### Validation Checklist
- [ ] MCP configuration is valid JSON
- [ ] Server URL is correct and accessible
- [ ] Required environment variables documented
- [ ] Tools appear in `/mcp` output
- [ ] Authentication works (OAuth or tokens)
- [ ] Tool calls succeed from commands
- [ ] Error cases handled gracefully
## Debugging
### Enable Debug Logging
```bash
claude --debug
```
Look for:
- MCP server connection attempts
- Tool discovery logs
- Authentication flows
- Tool call errors
### Common Issues
**Server not connecting:**
- Check URL is correct
- Verify server is running (stdio)
- Check network connectivity
- Review authentication configuration
**Tools not available:**
- Verify server connected successfully
- Check tool names match exactly
- Run `/mcp` to see available tools
- Restart Claude Code after config changes
**Authentication failing:**
- Clear cached auth tokens
- Re-authenticate
- Check token scopes and permissions
- Verify environment variables set
## Quick Reference
### MCP Server Types
| Type | Transport | Best For | Auth |
|------|-----------|----------|------|
| stdio | Process | Local tools, custom servers | Env vars |
| SSE | HTTP | Hosted services, cloud APIs | OAuth |
| HTTP | REST | API backends, token auth | Tokens |
| ws | WebSocket | Real-time, streaming | Tokens |
### Configuration Checklist
- [ ] Server type specified (stdio/SSE/HTTP/ws)
- [ ] Type-specific fields complete (command or url)
- [ ] Authentication configured
- [ ] Environment variables documented
- [ ] HTTPS/WSS used (not HTTP/WS)
- [ ] ${CLAUDE_PLUGIN_ROOT} used for paths
### Best Practices
**DO:**
- ✅ Use ${CLAUDE_PLUGIN_ROOT} for portable paths
- ✅ Document required environment variables
- ✅ Use secure connections (HTTPS/WSS)
- ✅ Pre-allow specific MCP tools in commands
- ✅ Test MCP integration before publishing
- ✅ Handle connection and tool errors gracefully
**DON'T:**
- ❌ Hardcode absolute paths
- ❌ Commit credentials to git
- ❌ Use HTTP instead of HTTPS
- ❌ Pre-allow all tools with wildcards
- ❌ Skip error handling
- ❌ Forget to document setup
## Additional Resources
### Reference Files
For detailed information, consult:
- **`references/server-types.md`** - Deep dive on each server type
- **`references/authentication.md`** - Authentication patterns and OAuth
- **`references/tool-usage.md`** - Using MCP tools in commands and agents
### Example Configurations
Working examples in `examples/`:
- **`stdio-server.json`** - Local stdio MCP server
- **`sse-server.json`** - Hosted SSE server with OAuth
- **`http-server.json`** - REST API with token auth
### External Resources
- **Official MCP Docs**: https://modelcontextprotocol.io/
- **Claude Code MCP Docs**: https://docs.claude.com/en/docs/claude-code/mcp
- **MCP SDK**: @modelcontextprotocol/sdk
- **Testing**: Use `claude --debug` and `/mcp` command
## Implementation Workflow
To add MCP integration to a plugin:
1. Choose MCP server type (stdio, SSE, HTTP, ws)
2. Create `.mcp.json` at plugin root with configuration
3. Use ${CLAUDE_PLUGIN_ROOT} for all file references
4. Document required environment variables in README
5. Test locally with `/mcp` command
6. Pre-allow MCP tools in relevant commands
7. Handle authentication (OAuth or tokens)
8. Test error cases (connection failures, auth errors)
9. Document MCP integration in plugin README
Focus on stdio for custom/local servers, SSE for hosted services with OAuth.

View File

@@ -0,0 +1,20 @@
{
"_comment": "Example HTTP MCP server configuration for REST APIs",
"rest-api": {
"type": "http",
"url": "https://api.example.com/mcp",
"headers": {
"Authorization": "Bearer ${API_TOKEN}",
"Content-Type": "application/json",
"X-API-Version": "2024-01-01"
}
},
"internal-service": {
"type": "http",
"url": "https://api.example.com/mcp",
"headers": {
"Authorization": "Bearer ${API_TOKEN}",
"X-Service-Name": "claude-plugin"
}
}
}

View File

@@ -0,0 +1,19 @@
{
"_comment": "Example SSE MCP server configuration for hosted cloud services",
"asana": {
"type": "sse",
"url": "https://mcp.asana.com/sse"
},
"github": {
"type": "sse",
"url": "https://mcp.github.com/sse"
},
"custom-service": {
"type": "sse",
"url": "https://mcp.example.com/sse",
"headers": {
"X-API-Version": "v1",
"X-Client-ID": "${CLIENT_ID}"
}
}
}

View File

@@ -0,0 +1,26 @@
{
"_comment": "Example stdio MCP server configuration for local file system access",
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "${CLAUDE_PROJECT_DIR}"],
"env": {
"LOG_LEVEL": "info"
}
},
"database": {
"command": "${CLAUDE_PLUGIN_ROOT}/servers/db-server.js",
"args": ["--config", "${CLAUDE_PLUGIN_ROOT}/config/db.json"],
"env": {
"DATABASE_URL": "${DATABASE_URL}",
"DB_POOL_SIZE": "10"
}
},
"custom-tools": {
"command": "python",
"args": ["-m", "my_mcp_server", "--port", "8080"],
"env": {
"API_KEY": "${CUSTOM_API_KEY}",
"DEBUG": "false"
}
}
}

View File

@@ -0,0 +1,549 @@
# MCP Authentication Patterns
Complete guide to authentication methods for MCP servers in Claude Code plugins.
## Overview
MCP servers support multiple authentication methods depending on the server type and service requirements. Choose the method that best matches your use case and security requirements.
## OAuth (Automatic)
### How It Works
Claude Code automatically handles the complete OAuth 2.0 flow for SSE and HTTP servers:
1. User attempts to use MCP tool
2. Claude Code detects authentication needed
3. Opens browser for OAuth consent
4. User authorizes in browser
5. Tokens stored securely by Claude Code
6. Automatic token refresh
### Configuration
```json
{
"service": {
"type": "sse",
"url": "https://mcp.example.com/sse"
}
}
```
No additional auth configuration needed! Claude Code handles everything.
### Supported Services
**Known OAuth-enabled MCP servers:**
- Asana: `https://mcp.asana.com/sse`
- GitHub (when available)
- Google services (when available)
- Custom OAuth servers
### OAuth Scopes
OAuth scopes are determined by the MCP server. Users see required scopes during the consent flow.
**Document required scopes in your README:**
```markdown
## Authentication
This plugin requires the following Asana permissions:
- Read tasks and projects
- Create and update tasks
- Access workspace data
```
### Token Storage
Tokens are stored securely by Claude Code:
- Not accessible to plugins
- Encrypted at rest
- Automatic refresh
- Cleared on sign-out
### Troubleshooting OAuth
**Authentication loop:**
- Clear cached tokens (sign out and sign in)
- Check OAuth redirect URLs
- Verify server OAuth configuration
**Scope issues:**
- User may need to re-authorize for new scopes
- Check server documentation for required scopes
**Token expiration:**
- Claude Code auto-refreshes
- If refresh fails, prompts re-authentication
## Token-Based Authentication
### Bearer Tokens
Most common for HTTP and WebSocket servers.
**Configuration:**
```json
{
"api": {
"type": "http",
"url": "https://api.example.com/mcp",
"headers": {
"Authorization": "Bearer ${API_TOKEN}"
}
}
}
```
**Environment variable:**
```bash
export API_TOKEN="your-secret-token-here"
```
### API Keys
Alternative to Bearer tokens, often in custom headers.
**Configuration:**
```json
{
"api": {
"type": "http",
"url": "https://api.example.com/mcp",
"headers": {
"X-API-Key": "${API_KEY}",
"X-API-Secret": "${API_SECRET}"
}
}
}
```
### Custom Headers
Services may use custom authentication headers.
**Configuration:**
```json
{
"service": {
"type": "sse",
"url": "https://mcp.example.com/sse",
"headers": {
"X-Auth-Token": "${AUTH_TOKEN}",
"X-User-ID": "${USER_ID}",
"X-Tenant-ID": "${TENANT_ID}"
}
}
}
```
### Documenting Token Requirements
Always document in your README:
```markdown
## Setup
### Required Environment Variables
Set these environment variables before using the plugin:
\`\`\`bash
export API_TOKEN="your-token-here"
export API_SECRET="your-secret-here"
\`\`\`
### Obtaining Tokens
1. Visit https://api.example.com/tokens
2. Create a new API token
3. Copy the token and secret
4. Set environment variables as shown above
### Token Permissions
The API token needs the following permissions:
- Read access to resources
- Write access for creating items
- Delete access (optional, for cleanup operations)
```
## Environment Variable Authentication (stdio)
### Passing Credentials to Server
For stdio servers, pass credentials via environment variables:
```json
{
"database": {
"command": "python",
"args": ["-m", "mcp_server_db"],
"env": {
"DATABASE_URL": "${DATABASE_URL}",
"DB_USER": "${DB_USER}",
"DB_PASSWORD": "${DB_PASSWORD}"
}
}
}
```
### User Environment Variables
```bash
# User sets these in their shell
export DATABASE_URL="postgresql://localhost/mydb"
export DB_USER="myuser"
export DB_PASSWORD="mypassword"
```
### Documentation Template
```markdown
## Database Configuration
Set these environment variables:
\`\`\`bash
export DATABASE_URL="postgresql://host:port/database"
export DB_USER="username"
export DB_PASSWORD="password"
\`\`\`
Or create a `.env` file (add to `.gitignore`):
\`\`\`
DATABASE_URL=postgresql://localhost:5432/mydb
DB_USER=myuser
DB_PASSWORD=mypassword
\`\`\`
Load with: \`source .env\` or \`export $(cat .env | xargs)\`
```
## Dynamic Headers
### Headers Helper Script
For tokens that change or expire, use a helper script:
```json
{
"api": {
"type": "sse",
"url": "https://api.example.com",
"headersHelper": "${CLAUDE_PLUGIN_ROOT}/scripts/get-headers.sh"
}
}
```
**Script (get-headers.sh):**
```bash
#!/bin/bash
# Generate dynamic authentication headers
# Fetch fresh token
TOKEN=$(get-fresh-token-from-somewhere)
# Output JSON headers
cat <<EOF
{
"Authorization": "Bearer $TOKEN",
"X-Timestamp": "$(date -Iseconds)"
}
EOF
```
### Use Cases for Dynamic Headers
- Short-lived tokens that need refresh
- Tokens with HMAC signatures
- Time-based authentication
- Dynamic tenant/workspace selection
## Security Best Practices
### DO
**Use environment variables:**
```json
{
"headers": {
"Authorization": "Bearer ${API_TOKEN}"
}
}
```
**Document required variables in README**
**Use HTTPS/WSS always**
**Implement token rotation**
**Store tokens securely (env vars, not files)**
**Let OAuth handle authentication when available**
### DON'T
**Hardcode tokens:**
```json
{
"headers": {
"Authorization": "Bearer sk-abc123..." // NEVER!
}
}
```
**Commit tokens to git**
**Share tokens in documentation**
**Use HTTP instead of HTTPS**
**Store tokens in plugin files**
**Log tokens or sensitive headers**
## Multi-Tenancy Patterns
### Workspace/Tenant Selection
**Via environment variable:**
```json
{
"api": {
"type": "http",
"url": "https://api.example.com/mcp",
"headers": {
"Authorization": "Bearer ${API_TOKEN}",
"X-Workspace-ID": "${WORKSPACE_ID}"
}
}
}
```
**Via URL:**
```json
{
"api": {
"type": "http",
"url": "https://${TENANT_ID}.api.example.com/mcp"
}
}
```
### Per-User Configuration
Users set their own workspace:
```bash
export WORKSPACE_ID="my-workspace-123"
export TENANT_ID="my-company"
```
## Authentication Troubleshooting
### Common Issues
**401 Unauthorized:**
- Check token is set correctly
- Verify token hasn't expired
- Check token has required permissions
- Ensure header format is correct
**403 Forbidden:**
- Token valid but lacks permissions
- Check scope/permissions
- Verify workspace/tenant ID
- May need admin approval
**Token not found:**
```bash
# Check environment variable is set
echo $API_TOKEN
# If empty, set it
export API_TOKEN="your-token"
```
**Token in wrong format:**
```json
// Correct
"Authorization": "Bearer sk-abc123"
// Wrong
"Authorization": "sk-abc123"
```
### Debugging Authentication
**Enable debug mode:**
```bash
claude --debug
```
Look for:
- Authentication header values (sanitized)
- OAuth flow progress
- Token refresh attempts
- Authentication errors
**Test authentication separately:**
```bash
# Test HTTP endpoint
curl -H "Authorization: Bearer $API_TOKEN" \
https://api.example.com/mcp/health
# Should return 200 OK
```
## Migration Patterns
### From Hardcoded to Environment Variables
**Before:**
```json
{
"headers": {
"Authorization": "Bearer sk-hardcoded-token"
}
}
```
**After:**
```json
{
"headers": {
"Authorization": "Bearer ${API_TOKEN}"
}
}
```
**Migration steps:**
1. Add environment variable to plugin README
2. Update configuration to use ${VAR}
3. Test with variable set
4. Remove hardcoded value
5. Commit changes
### From Basic Auth to OAuth
**Before:**
```json
{
"headers": {
"Authorization": "Basic ${BASE64_CREDENTIALS}"
}
}
```
**After:**
```json
{
"type": "sse",
"url": "https://mcp.example.com/sse"
}
```
**Benefits:**
- Better security
- No credential management
- Automatic token refresh
- Scoped permissions
## Advanced Authentication
### Mutual TLS (mTLS)
Some enterprise services require client certificates.
**Not directly supported in MCP configuration.**
**Workaround:** Wrap in stdio server that handles mTLS:
```json
{
"secure-api": {
"command": "${CLAUDE_PLUGIN_ROOT}/servers/mtls-wrapper",
"args": ["--cert", "${CLIENT_CERT}", "--key", "${CLIENT_KEY}"],
"env": {
"API_URL": "https://secure.example.com"
}
}
}
```
### JWT Tokens
Generate JWT tokens dynamically with headers helper:
```bash
#!/bin/bash
# generate-jwt.sh
# Generate JWT (using library or API call)
JWT=$(generate-jwt-token)
echo "{\"Authorization\": \"Bearer $JWT\"}"
```
```json
{
"headersHelper": "${CLAUDE_PLUGIN_ROOT}/scripts/generate-jwt.sh"
}
```
### HMAC Signatures
For APIs requiring request signing:
```bash
#!/bin/bash
# generate-hmac.sh
TIMESTAMP=$(date -Iseconds)
SIGNATURE=$(echo -n "$TIMESTAMP" | openssl dgst -sha256 -hmac "$SECRET_KEY" | cut -d' ' -f2)
cat <<EOF
{
"X-Timestamp": "$TIMESTAMP",
"X-Signature": "$SIGNATURE",
"X-API-Key": "$API_KEY"
}
EOF
```
## Best Practices Summary
### For Plugin Developers
1. **Prefer OAuth** when service supports it
2. **Use environment variables** for tokens
3. **Document all required variables** in README
4. **Provide setup instructions** with examples
5. **Never commit credentials**
6. **Use HTTPS/WSS only**
7. **Test authentication thoroughly**
### For Plugin Users
1. **Set environment variables** before using plugin
2. **Keep tokens secure** and private
3. **Rotate tokens regularly**
4. **Use different tokens** for dev/prod
5. **Don't commit .env files** to git
6. **Review OAuth scopes** before authorizing
## Conclusion
Choose the authentication method that matches your MCP server's requirements:
- **OAuth** for cloud services (easiest for users)
- **Bearer tokens** for API services
- **Environment variables** for stdio servers
- **Dynamic headers** for complex auth flows
Always prioritize security and provide clear setup documentation for users.

Some files were not shown because too many files have changed in this diff.