diff --git a/.claude/commands/create-spec.md b/.claude/commands/create-spec.md
index f8cae28..f8a1b96 100644
--- a/.claude/commands/create-spec.md
+++ b/.claude/commands/create-spec.md
@@ -95,6 +95,27 @@ Ask the user about their involvement preference:
**For Detailed Mode users**, ask specific tech questions about frontend, backend, database, etc.
+### Phase 3b: Database Requirements (MANDATORY)
+
+**Always ask this question regardless of mode:**
+
+> "One foundational question about data storage:
+>
+> **Does this application need to store user data persistently?**
+>
+> 1. **Yes, needs a database** - Users create, save, and retrieve data (most apps)
+> 2. **No, stateless** - Pure frontend, no data storage needed (calculators, static sites)
+> 3. **Not sure** - Let me describe what I need and you decide"
+
+**Branching logic:**
+
+- **If "Yes" or "Not sure"**: Continue normally. The spec will include database in tech stack and the initializer will create 5 mandatory Infrastructure features (indices 0-4) to verify database connectivity and persistence.
+
+- **If "No, stateless"**: Note this in the spec. Skip database from tech stack. Infrastructure features will be simplified (no database persistence tests). Mark this clearly:
+ ```xml
+ none - stateless application
+ ```
+
## Phase 4: Features (THE MAIN PHASE)
This is where you spend most of your time. Ask questions in plain language that anyone can answer.
@@ -207,12 +228,23 @@ After gathering all features, **you** (the agent) should tally up the testable f
**Typical ranges for reference:**
-- **Simple apps** (todo list, calculator, notes): ~20-50 features
-- **Medium apps** (blog, task manager with auth): ~100 features
-- **Advanced apps** (e-commerce, CRM, full SaaS): ~150-200 features
+- **Simple apps** (todo list, calculator, notes): ~25-55 features (includes 5 infrastructure)
+- **Medium apps** (blog, task manager with auth): ~105 features (includes 5 infrastructure)
+- **Advanced apps** (e-commerce, CRM, full SaaS): ~155-205 features (includes 5 infrastructure)
These are just reference points - your actual count should come from the requirements discussed.
+**MANDATORY: Infrastructure Features**
+
+If the app requires a database (Phase 3b answer was "Yes" or "Not sure"), you MUST include 5 Infrastructure features (indices 0-4):
+1. Database connection established
+2. Database schema applied correctly
+3. Data persists across server restart
+4. No mock data patterns in codebase
+5. Backend API queries real database
+
+These features ensure the coding agent implements a real database, not mock data or in-memory storage.
+
**How to count features:**
For each feature area discussed, estimate the number of discrete, testable behaviors:
@@ -225,17 +257,20 @@ For each feature area discussed, estimate the number of discrete, testable behav
> "Based on what we discussed, here's my feature breakdown:
>
+> - **Infrastructure (required)**: 5 features (database setup, persistence verification)
> - [Category 1]: ~X features
> - [Category 2]: ~Y features
> - [Category 3]: ~Z features
> - ...
>
-> **Total: ~N features**
+> **Total: ~N features** (including 5 infrastructure)
>
> Does this seem right, or should I adjust?"
Let the user confirm or adjust. This becomes your `feature_count` for the spec.
+**Important:** The first 5 features (indices 0-4) created by the initializer MUST be the Infrastructure category with no dependencies. All other features depend on these.
+
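For concreteness, the required shape (five Infrastructure features at indices 0-4 with no dependencies, and every later feature depending on them) can be sketched as follows. This is a hypothetical structure for illustration only; the real feature schema produced by the initializer may differ:

```python
# Hypothetical sketch of the mandatory Infrastructure block (indices 0-4).
# The real feature schema may differ; this only illustrates the shape.
INFRASTRUCTURE_FEATURES = [
    {"id": 0, "category": "Infrastructure", "name": "Database connection established", "dependencies": []},
    {"id": 1, "category": "Infrastructure", "name": "Database schema applied correctly", "dependencies": []},
    {"id": 2, "category": "Infrastructure", "name": "Data persists across server restart", "dependencies": []},
    {"id": 3, "category": "Infrastructure", "name": "No mock data patterns in codebase", "dependencies": []},
    {"id": 4, "category": "Infrastructure", "name": "Backend API queries real database", "dependencies": []},
]

def make_functional_feature(feature_id: int, name: str) -> dict:
    """Every non-infrastructure feature depends on the whole Infrastructure block."""
    return {
        "id": feature_id,
        "category": "Functional",
        "name": name,
        "dependencies": [f["id"] for f in INFRASTRUCTURE_FEATURES],
    }
```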
## Phase 5: Technical Details (DERIVED OR DISCUSSED)
**For Quick Mode users:**
diff --git a/.claude/commands/review-pr.md b/.claude/commands/review-pr.md
new file mode 100644
index 0000000..9c9098f
--- /dev/null
+++ b/.claude/commands/review-pr.md
@@ -0,0 +1,54 @@
+---
+description: Review pull requests
+---
+
+Pull request(s): $ARGUMENTS
+
+- If no PR numbers are provided, ask the user to provide PR number(s).
+- At least 1 PR is required.
+
+## TASKS
+
+1. **Retrieve PR Details**
+ - Use the GitHub CLI (`gh`) to retrieve the details (description, diff, comments, reviews, etc.)
+
+2. **Assess PR Complexity**
+
+ After retrieving PR details, assess complexity based on:
+ - Number of files changed
+ - Lines added/removed
+ - Number of contributors/commits
+ - Whether changes touch core/architectural files
+
+ ### Complexity Tiers
+
+ **Simple** (no deep dive agents needed):
+ - ≤5 files changed AND ≤100 lines changed AND single author
+ - Review directly without spawning agents
+
+ **Medium** (1-2 deep dive agents):
+ - 6-15 files changed, OR 100-500 lines, OR 2 contributors
+ - Spawn 1 agent for focused areas, 2 if changes span multiple domains
+
+ **Complex** (up to 3 deep dive agents):
+ - >15 files, OR >500 lines, OR >2 contributors, OR touches core architecture
+ - Spawn up to 3 agents to analyze different aspects (e.g., security, performance, architecture)
+
+3. **Analyze Codebase Impact**
+ - Based on the complexity tier determined above, spawn the appropriate number of deep dive subagents
+ - For Simple PRs: analyze directly without spawning agents
+ - For Medium PRs: spawn 1-2 agents focusing on the most impacted areas
+ - For Complex PRs: spawn up to 3 agents to cover security, performance, and architectural concerns
+
+4. **Vision Alignment Check**
+ - Read the project's README.md and CLAUDE.md to understand the application's core purpose
+ - Assess whether this PR aligns with the application's intended functionality
+ - If the changes deviate significantly from the core vision or add functionality that doesn't serve the application's purpose, note this in the review
+ - This is not a blocker, but should be flagged for the reviewer's consideration
+
+5. **Safety Assessment**
+ - State whether the PR is safe to merge as-is
+ - Give an overall risk level with the reasoning behind it
+
+6. **Improvements**
+ - Propose improvements, ranked by importance and implementation complexity
\ No newline at end of file
diff --git a/.claude/templates/coding_prompt.template.md b/.claude/templates/coding_prompt.template.md
index bce9a14..9322404 100644
--- a/.claude/templates/coding_prompt.template.md
+++ b/.claude/templates/coding_prompt.template.md
@@ -156,6 +156,9 @@ Use browser automation tools:
- [ ] Deleted the test data - verified it's gone everywhere
- [ ] NO unexplained data appeared (would indicate mock data)
- [ ] Dashboard/counts reflect real numbers after my changes
+- [ ] **Ran extended mock data grep (STEP 5.6) - no hits in src/ (excluding tests)**
+- [ ] **Verified no globalThis, devStore, or dev-store patterns**
+- [ ] **Server restart test passed (STEP 5.7) - data persists across restart**
#### Navigation Verification
@@ -174,10 +177,92 @@ Use browser automation tools:
### STEP 5.6: MOCK DATA DETECTION (Before marking passing)
-1. **Search code:** `grep -r "mockData\|fakeData\|TODO\|STUB" --include="*.ts" --include="*.tsx"`
-2. **Runtime test:** Create unique data (e.g., "TEST_12345") → verify in UI → delete → verify gone
-3. **Check database:** All displayed data must come from real DB queries
-4. If unexplained data appears, it's mock data - fix before marking passing.
+**Run ALL these grep checks. Any hits in src/ (excluding test files) require investigation:**
+
+```bash
+# Common exclusions for test files
+EXCLUDE="--exclude=*.test.* --exclude=*.spec.* --exclude=*__test__* --exclude=*__mocks__*"
+
+# 1. In-memory storage patterns (CRITICAL - catches dev-store)
+grep -r "globalThis\." --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
+grep -r "dev-store\|devStore\|DevStore\|mock-db\|mockDb" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
+
+# 2. Mock data variables
+grep -r "mockData\|fakeData\|sampleData\|dummyData\|testData" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
+
+# 3. TODO/incomplete markers
+grep -r "TODO.*real\|TODO.*database\|TODO.*API\|STUB\|MOCK" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
+
+# 4. Development-only conditionals
+grep -r "isDevelopment\|isDev\|process\.env\.NODE_ENV.*development" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/
+
+# 5. In-memory collections as data stores
+grep -r "new Map\(\)\|new Set\(\)" --include="*.ts" --include="*.tsx" --include="*.js" $EXCLUDE src/ 2>/dev/null
+```
+
+**Rule:** If ANY grep returns results in production code → investigate → FIX before marking passing.
+
+**Runtime verification:**
+1. Create unique data (e.g., "TEST_12345") → verify in UI → delete → verify gone
+2. Check database directly - all displayed data must come from real DB queries
+3. If unexplained data appears, it's mock data - fix before marking passing.
+
+### STEP 5.7: SERVER RESTART PERSISTENCE TEST (MANDATORY for data features)
+
+**When required:** Any feature involving CRUD operations or data persistence.
+
+**This test is NON-NEGOTIABLE. It catches in-memory storage implementations that pass all other tests.**
+
+**Steps:**
+
+1. Create unique test data via UI or API (e.g., item named "RESTART_TEST_12345")
+2. Verify data appears in UI and API response
+
+3. **STOP the server completely:**
+ ```bash
+ # Kill by port (safer - only kills the dev server, not VS Code/Claude Code/etc.)
+ # Unix/macOS:
+ lsof -ti :${PORT:-3000} | xargs kill -TERM 2>/dev/null || true
+ sleep 3
+ lsof -ti :${PORT:-3000} | xargs kill -9 2>/dev/null || true
+ sleep 2
+
+ # Windows alternative (use if lsof not available):
+ # netstat -ano | findstr :${PORT:-3000} | findstr LISTENING
+ # taskkill /F /PID <PID_FROM_NETSTAT> 2>nul
+
+ # Verify server is stopped
+ if lsof -ti :${PORT:-3000} > /dev/null 2>&1; then
+ echo "ERROR: Server still running on port ${PORT:-3000}!"
+ exit 1
+ fi
+ ```
+
+4. **RESTART the server:**
+ ```bash
+ ./init.sh &
+ sleep 15 # Allow server to fully start
+ # Verify server is responding
+ if ! curl -f http://localhost:${PORT:-3000}/api/health && ! curl -f http://localhost:${PORT:-3000}; then
+ echo "ERROR: Server failed to start after restart"
+ exit 1
+ fi
+ ```
+
+5. **Query for test data - it MUST still exist**
+ - Via UI: Navigate to data location, verify data appears
+ - Via API: `curl http://localhost:${PORT:-3000}/api/items` - verify data in response
+
+6. **If data is GONE:** Implementation uses in-memory storage → CRITICAL FAIL
+ - Run all grep commands from STEP 5.6 to identify the mock pattern
+ - You MUST fix the in-memory storage implementation before proceeding
+ - Replace in-memory storage with real database queries
+
+7. **Clean up test data** after successful verification
+
+**Why this test exists:** In-memory stores like `globalThis.devStore` pass all other tests because data persists during a single server run. Only a full server restart reveals this bug. Skipping this step WILL allow dev-store implementations to slip through.
+
+**YOLO Mode Note:** Even in YOLO mode, this verification is MANDATORY for data features. Use curl instead of browser automation.
### STEP 6: UPDATE FEATURE STATUS (CAREFULLY!)
@@ -202,17 +287,23 @@ Use the feature_mark_passing tool with feature_id=42
### STEP 7: COMMIT YOUR PROGRESS
-Make a descriptive git commit:
+Make a descriptive git commit.
+
+**Git Commit Rules:**
+- ALWAYS use simple `-m` flag for commit messages
+- NEVER use heredocs (`cat <<EOF`) - they can hang the shell
+
+**Feature 2 - Data persists across server restart:**
+```text
+Steps:
+1. Create item "RESTART_TEST_12345" via API: POST /api/items
+2. Verify it exists: GET /api/items
+3. STOP the server: lsof -ti :$PORT | xargs kill -9 2>/dev/null || true && sleep 5
+ - Windows: FOR /F "tokens=5" %a IN ('netstat -aon ^| find ":$PORT"') DO taskkill /F /PID %a 2>nul
+ - Note: Replace $PORT with actual port (e.g., 3000)
+4. Verify server is stopped: lsof -ti :$PORT returns nothing (or netstat on Windows)
+5. RESTART the server: ./init.sh & sleep 15
+6. Query API again: GET /api/items
+7. Verify "RESTART_TEST_12345" still exists
+8. If data is GONE → CRITICAL FAILURE (in-memory storage detected)
+9. Clean up test data
+```
+
+**Feature 3 - No mock data patterns in codebase:**
+```text
+Steps:
+1. Run: grep -r "globalThis\." --include="*.ts" --include="*.tsx" --include="*.js" src/
+2. Run: grep -r "dev-store\|devStore\|DevStore\|mock-db\|mockDb" --include="*.ts" --include="*.tsx" --include="*.js" src/
+3. Run: grep -r "mockData\|testData\|fakeData\|sampleData\|dummyData" --include="*.ts" --include="*.tsx" --include="*.js" src/
+4. Run: grep -r "TODO.*real\|TODO.*database\|TODO.*API\|STUB\|MOCK" --include="*.ts" --include="*.tsx" --include="*.js" src/
+5. Run: grep -r "isDevelopment\|isDev\|process\.env\.NODE_ENV.*development" --include="*.ts" --include="*.tsx" --include="*.js" src/
+6. Run: grep -r "new Map\(\)\|new Set\(\)" --include="*.ts" --include="*.tsx" --include="*.js" src/ 2>/dev/null
+7. Run: grep -E "json-server|miragejs|msw" package.json
+8. ALL grep commands must return empty (exit code 1)
+9. If any returns results → investigate and fix before passing
+```
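As a reminder of the exit-code convention these steps rely on: `grep` exits 0 when it finds a match and 1 when it finds none, so a "clean" codebase means every detection command exits 1. A quick self-contained demonstration (assumes a POSIX `grep` on the PATH):

```python
import subprocess
import tempfile
from pathlib import Path

def grep_exit_code(pattern: str, file: Path) -> int:
    """Run grep -q and return its exit code: 0 = match found, 1 = no match."""
    return subprocess.run(["grep", "-q", pattern, str(file)], check=False).returncode

# Two sample files: one backed by a real query, one using a mock-data variable.
tmp = Path(tempfile.mkdtemp())
clean = tmp / "clean.ts"
clean.write_text("export const items = await db.query('SELECT * FROM items');\n")
dirty = tmp / "dirty.ts"
dirty.write_text("const mockData = [{ id: 1, name: 'fake' }];\n")
```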
+
+**Feature 4 - Backend API queries real database:**
+```text
+Steps:
+1. Start server with verbose logging
+2. Make API call (e.g., GET /api/items)
+3. Check server logs
+4. Verify SQL query appears (SELECT, INSERT, etc.) or ORM query log
+5. If no DB queries in logs → implementation is using mock data
+```
---
@@ -115,8 +199,9 @@ The feature_list.json **MUST** include tests from ALL 20 categories. Minimum cou
### Category Distribution by Complexity Tier
-| Category | Simple | Medium | Complex |
+| Category | Simple | Medium | Advanced |
| -------------------------------- | ------- | ------- | -------- |
+| **0. Infrastructure (REQUIRED)** | 5 | 5 | 5 |
| A. Security & Access Control | 5 | 20 | 40 |
| B. Navigation Integrity | 15 | 25 | 40 |
| C. Real Data Verification | 20 | 30 | 50 |
@@ -137,12 +222,14 @@ The feature_list.json **MUST** include tests from ALL 20 categories. Minimum cou
| R. Concurrency & Race Conditions | 5 | 8 | 15 |
| S. Export/Import | 5 | 6 | 10 |
| T. Performance | 5 | 5 | 10 |
-| **TOTAL** | **150** | **250** | **400+** |
+| **TOTAL** | **155** | **255** | **405+** |
---
### Category Descriptions
+**0. Infrastructure (REQUIRED - Priority 0)** - Database connectivity, schema existence, data persistence across server restart, absence of mock patterns. These features MUST pass before any functional features can begin. All tiers require exactly 5 infrastructure features (indices 0-4).
+
**A. Security & Access Control** - Test unauthorized access blocking, permission enforcement, session management, role-based access, and data isolation between users.
**B. Navigation Integrity** - Test all buttons, links, menus, breadcrumbs, deep links, back button behavior, 404 handling, and post-login/logout redirects.
@@ -205,6 +292,16 @@ The feature_list.json must include tests that **actively verify real data** and
- `setTimeout` simulating API delays with static data
- Static returns instead of database queries
+**Additional prohibited patterns (in-memory stores):**
+
+- `globalThis.` (in-memory storage pattern)
+- `dev-store`, `devStore`, `DevStore` (development stores)
+- `json-server`, `mirage`, `msw` (mock backends)
+- `new Map()` / `new Set()` collections used as the primary data store
+- Environment checks like `if (process.env.NODE_ENV === 'development')` for data routing
+
+**Why this matters:** In-memory stores (like `globalThis.devStore`) will pass simple tests because data persists during a single server run. But data is LOST on server restart, which is unacceptable for production. The Infrastructure features (0-4) specifically test for this by requiring data to survive a full server restart.
+
---
**CRITICAL INSTRUCTION:**
diff --git a/.env.example b/.env.example
index e29bec3..9cb154a 100644
--- a/.env.example
+++ b/.env.example
@@ -1,12 +1,38 @@
# Optional: N8N webhook for progress notifications
# PROGRESS_N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/...
-# Playwright Browser Mode
-# Controls whether Playwright runs Chrome in headless mode (no visible browser window).
-# - true: Browser runs in background, invisible (recommended for using PC while agent works)
+# Playwright Browser Configuration
+#
+# PLAYWRIGHT_BROWSER: Which browser to use for testing
+# - firefox: Lower CPU usage, recommended (default)
+# - chrome: Google Chrome
+# - webkit: Safari engine
+# - msedge: Microsoft Edge
+# PLAYWRIGHT_BROWSER=firefox
+#
+# PLAYWRIGHT_HEADLESS: Run browser without visible window
+# - true: Browser runs in background, saves CPU (default)
# - false: Browser opens a visible window (useful for debugging)
-# Defaults to 'false' if not specified
-# PLAYWRIGHT_HEADLESS=false
+# PLAYWRIGHT_HEADLESS=true
+
+# Extra Read Paths (Optional)
+# Comma-separated list of absolute paths for read-only access to external directories.
+# The agent can read files from these paths but cannot write to them.
+# Useful for referencing documentation, shared libraries, or other projects.
+# Example: EXTRA_READ_PATHS=/Volumes/Data/dev,/Users/shared/libs
+# EXTRA_READ_PATHS=
+
+# Google Cloud Vertex AI Configuration (Optional)
+# To use Claude via Vertex AI on Google Cloud Platform, uncomment and set these variables.
+# Requires: gcloud CLI installed and authenticated (run: gcloud auth application-default login)
+# Note: Use @ instead of - in model names (e.g., claude-opus-4-5@20251101)
+#
+# CLAUDE_CODE_USE_VERTEX=1
+# CLOUD_ML_REGION=us-east5
+# ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
+# ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
+# ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022
# GLM/Alternative API Configuration (Optional)
# To use Zhipu AI's GLM models instead of Claude, uncomment and set these variables.
@@ -19,3 +45,20 @@
# ANTHROPIC_DEFAULT_SONNET_MODEL=glm-4.7
# ANTHROPIC_DEFAULT_OPUS_MODEL=glm-4.7
# ANTHROPIC_DEFAULT_HAIKU_MODEL=glm-4.5-air
+
+# Ollama Local Model Configuration (Optional)
+# To use local models via Ollama instead of Claude, uncomment and set these variables.
+# Requires Ollama v0.14.0+ with Anthropic API compatibility.
+# See: https://ollama.com/blog/claude
+#
+# ANTHROPIC_BASE_URL=http://localhost:11434
+# ANTHROPIC_AUTH_TOKEN=ollama
+# API_TIMEOUT_MS=3000000
+# ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
+# ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
+#
+# Model recommendations:
+# - For best results, use a capable coding model like qwen3-coder or deepseek-coder-v2
+# - You can use the same model for all tiers, or different models per tier
+# - Larger models (70B+) work best for Opus tier, smaller (7B-20B) for Haiku
diff --git a/.gitignore b/.gitignore
index d90c6ca..bb20118 100644
--- a/.gitignore
+++ b/.gitignore
@@ -76,6 +76,11 @@ ui/playwright-report/
.dmypy.json
dmypy.json
+# ===================
+# Claude Code
+# ===================
+.claude/settings.local.json
+
# ===================
# IDE / Editors
# ===================
diff --git a/CLAUDE.md b/CLAUDE.md
index 29cc2a5..d92db4e 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -2,6 +2,12 @@
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+## Prerequisites
+
+- Python 3.11+
+- Node.js 20+ (for UI development)
+- Claude Code CLI
+
## Project Overview
This is an autonomous coding agent system with a React-based UI. It uses the Claude Agent SDK to build complete applications over multiple sessions using a two-agent pattern:
@@ -86,6 +92,33 @@ npm run lint # Run ESLint
**Note:** The `start_ui.bat` script serves the pre-built UI from `ui/dist/`. After making UI changes, run `npm run build` in the `ui/` directory.
+## Testing
+
+### Python
+
+```bash
+ruff check . # Lint
+mypy . # Type check
+python test_security.py # Security unit tests (163 tests)
+python test_security_integration.py # Integration tests (9 tests)
+```
+
+### React UI
+
+```bash
+cd ui
+npm run lint # ESLint
+npm run build # Type check + build
+npm run test:e2e # Playwright end-to-end tests
+npm run test:e2e:ui # Playwright tests with UI
+```
+
+### Code Quality
+
+Configuration in `pyproject.toml`:
+- ruff: Line length 120, Python 3.11 target
+- mypy: Strict return type checking, ignores missing imports
+
## Architecture
### Core Python Modules
@@ -141,7 +174,7 @@ MCP tools available to the agent:
### React UI (ui/)
-- Tech stack: React 18, TypeScript, TanStack Query, Tailwind CSS v4, Radix UI, dagre (graph layout)
+- Tech stack: React 19, TypeScript, TanStack Query, Tailwind CSS v4, Radix UI, dagre (graph layout)
- `src/App.tsx` - Main app with project selection, kanban board, agent controls
- `src/hooks/useWebSocket.ts` - Real-time updates via WebSocket (progress, agent status, logs, agent updates)
- `src/hooks/useProjects.ts` - React Query hooks for API calls
@@ -178,6 +211,46 @@ Defense-in-depth approach configured in `client.py`:
2. Filesystem restricted to project directory only
3. Bash commands validated using hierarchical allowlist system
+#### Extra Read Paths (Cross-Project File Access)
+
+The agent can optionally read files from directories outside the project folder via the `EXTRA_READ_PATHS` environment variable. This enables referencing documentation, shared libraries, or other projects.
+
+**Configuration:**
+
+```bash
+# Single path
+EXTRA_READ_PATHS=/Users/me/docs
+
+# Multiple paths (comma-separated)
+EXTRA_READ_PATHS=/Users/me/docs,/opt/shared-libs,/Volumes/Data/reference
+```
+
+**Security Controls:**
+
+All paths are validated before being granted read access:
+- Must be absolute paths (not relative)
+- Must exist and be directories
+- Paths are canonicalized via `Path.resolve()` to prevent `..` traversal attacks
+- Sensitive directories are blocked (see blocklist below)
+- Only Read, Glob, and Grep operations are allowed (no Write/Edit)
+
+**Blocked Sensitive Directories:**
+
+The following directories (relative to home) are always blocked:
+- `.ssh`, `.aws`, `.azure`, `.kube` - Cloud/SSH credentials
+- `.gnupg`, `.gpg`, `.password-store` - Encryption keys
+- `.docker`, `.config/gcloud` - Container/cloud configs
+- `.npmrc`, `.pypirc`, `.netrc` - Package manager credentials
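The validation rules above can be sketched like this (a hypothetical helper for illustration; the real logic lives in `client.py` and may differ in detail):

```python
from pathlib import Path

# Subset of the blocked directories listed above, relative to the home directory.
BLOCKED_DIRS = {
    ".ssh", ".aws", ".azure", ".kube", ".gnupg", ".gpg", ".password-store",
    ".docker", ".config/gcloud", ".npmrc", ".pypirc", ".netrc",
}

def validate_extra_read_path(raw: str, home: Path) -> Path:
    """Validate one EXTRA_READ_PATHS entry; raise ValueError if it is unsafe."""
    path = Path(raw)
    if not path.is_absolute():
        raise ValueError(f"must be an absolute path: {raw}")
    path = path.resolve()  # canonicalize, defeating '..' traversal
    if not path.is_dir():
        raise ValueError(f"must be an existing directory: {raw}")
    for name in BLOCKED_DIRS:
        blocked = (home / name).resolve()
        if path == blocked or blocked in path.parents:
            raise ValueError(f"blocked sensitive directory: {raw}")
    return path
```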
+
+**Example Output:**
+
+```
+Created security settings at /path/to/project/.claude_settings.json
+ - Sandbox enabled (OS-level bash isolation)
+ - Filesystem restricted to: /path/to/project
+ - Extra read paths (validated): /Users/me/docs, /opt/shared-libs
+```
+
#### Per-Project Allowed Commands
The agent's bash command access is controlled through a hierarchical configuration system:
@@ -237,15 +310,6 @@ blocked_commands:
- Blocklisted commands (sudo, dd, shutdown, etc.) can NEVER be allowed
- Org-level blocked commands cannot be overridden by project configs
-**Testing:**
-```bash
-# Unit tests (136 tests - fast)
-python test_security.py
-
-# Integration tests (9 tests - uses real hooks)
-python test_security_integration.py
-```
-
**Files:**
- `security.py` - Command validation logic and hardcoded blocklist
- `test_security.py` - Unit tests for security system (136 tests)
@@ -256,6 +320,39 @@ python test_security_integration.py
- `examples/README.md` - Comprehensive guide with use cases, testing, and troubleshooting
- `PHASE3_SPEC.md` - Specification for mid-session approval feature (future enhancement)
+### Ollama Local Models (Optional)
+
+Run coding agents using local models via Ollama v0.14.0+:
+
+1. Install Ollama: https://ollama.com
+2. Start Ollama: `ollama serve`
+3. Pull a coding model: `ollama pull qwen3-coder`
+4. Configure `.env`:
+ ```
+ ANTHROPIC_BASE_URL=http://localhost:11434
+ ANTHROPIC_AUTH_TOKEN=ollama
+ API_TIMEOUT_MS=3000000
+ ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
+ ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
+ ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
+ ```
+5. Run autocoder normally - it will use your local Ollama models
+
+**Recommended coding models:**
+- `qwen3-coder` - Good balance of speed and capability
+- `deepseek-coder-v2` - Strong coding performance
+- `codellama` - Meta's code-focused model
+
+**Model tier mapping:**
+- Use the same model for all tiers, or map different models per capability level
+- Larger models (70B+) work best for Opus tier
+- Smaller models (7B-20B) work well for Haiku tier
+
+**Known limitations:**
+- Smaller context windows than Claude (model-dependent)
+- Extended context beta disabled (not supported by Ollama)
+- Performance depends on local hardware (GPU recommended)
+
## Claude Code Integration
- `.claude/commands/create-spec.md` - `/create-spec` slash command for interactive spec creation
@@ -301,55 +398,7 @@ The orchestrator enforces strict bounds on concurrent processes:
- `MAX_PARALLEL_AGENTS = 5` - Maximum concurrent coding agents
- `MAX_TOTAL_AGENTS = 10` - Hard limit on total agents (coding + testing)
- Testing agents are capped at `max_concurrency` (same as coding agents)
-
-**Expected process count during normal operation:**
-- 1 orchestrator process
-- Up to 5 coding agents
-- Up to 5 testing agents
-- Total: never exceeds 11 Python processes
-
-**Stress Test Verification:**
-
-```bash
-# Windows - verify process bounds
-# 1. Note baseline count
-tasklist | findstr python | find /c /v ""
-
-# 2. Start parallel agent (max concurrency)
-python autonomous_agent_demo.py --project-dir test --parallel --max-concurrency 5
-
-# 3. During run - should NEVER exceed baseline + 11
-tasklist | findstr python | find /c /v ""
-
-# 4. After stop via UI - should return to baseline
-tasklist | findstr python | find /c /v ""
-```
-
-```bash
-# macOS/Linux - verify process bounds
-# 1. Note baseline count
-pgrep -c python
-
-# 2. Start parallel agent
-python autonomous_agent_demo.py --project-dir test --parallel --max-concurrency 5
-
-# 3. During run - should NEVER exceed baseline + 11
-pgrep -c python
-
-# 4. After stop - should return to baseline
-pgrep -c python
-```
-
-**Log Verification:**
-
-```bash
-# Check spawn vs completion balance
-grep "Started testing agent" orchestrator_debug.log | wc -l
-grep "Testing agent.*completed\|failed" orchestrator_debug.log | wc -l
-
-# Watch for cap enforcement messages
-grep "at max testing agents\|At max total agents" orchestrator_debug.log
-```
+- Total process count never exceeds 11 Python processes (1 orchestrator + 5 coding + 5 testing)
### Design System
diff --git a/CUSTOM_UPDATES.md b/CUSTOM_UPDATES.md
new file mode 100644
index 0000000..f211696
--- /dev/null
+++ b/CUSTOM_UPDATES.md
@@ -0,0 +1,228 @@
+# Custom Updates - AutoCoder
+
+This document tracks all customizations made to AutoCoder that deviate from the upstream repository. Reference this file before any updates to preserve these changes.
+
+---
+
+## Table of Contents
+
+1. [UI Theme Customization](#1-ui-theme-customization)
+2. [Playwright Browser Configuration](#2-playwright-browser-configuration)
+3. [Update Checklist](#update-checklist)
+
+---
+
+## 1. UI Theme Customization
+
+### Overview
+
+The UI has been customized from the default **neobrutalism** style to a clean **Twitter/Supabase-style** design.
+
+**Design Changes:**
+- No shadows
+- Thin borders (1px)
+- Rounded corners (1.3rem base)
+- Blue accent color (Twitter blue)
+- Clean typography (Open Sans)
+
+### Modified Files
+
+#### `ui/src/styles/custom-theme.css`
+
+**Purpose:** Main theme override file that replaces neo design with clean Twitter style.
+
+**Key Changes:**
+- All `--shadow-neo-*` variables set to `none`
+- All status colors (`pending`, `progress`, `done`) use Twitter blue
+- Rounded corners: `--radius-neo-lg: 1.3rem`
+- Font: Open Sans
+- Removed all transform effects on hover
+- Dark mode with proper contrast
+
+**CSS Variables (Light Mode):**
+```css
+--color-neo-accent: oklch(0.6723 0.1606 244.9955); /* Twitter blue */
+--color-neo-pending: oklch(0.6723 0.1606 244.9955);
+--color-neo-progress: oklch(0.6723 0.1606 244.9955);
+--color-neo-done: oklch(0.6723 0.1606 244.9955);
+```
+
+**CSS Variables (Dark Mode):**
+```css
+--color-neo-bg: oklch(0.08 0 0);
+--color-neo-card: oklch(0.16 0.005 250);
+--color-neo-border: oklch(0.30 0 0);
+```
+
+**How to preserve:** This file should NOT be overwritten. It loads after `globals.css` and overrides it.
+
+---
+
+#### `ui/src/components/KanbanColumn.tsx`
+
+**Purpose:** Modified to support themeable kanban columns without inline styles.
+
+**Changes:**
+
+1. **colorMap changed from inline colors to CSS classes:**
+```tsx
+// BEFORE (original):
+const colorMap = {
+ pending: 'var(--color-neo-pending)',
+ progress: 'var(--color-neo-progress)',
+ done: 'var(--color-neo-done)',
+}
+
+// AFTER (customized):
+const colorMap = {
+ pending: 'kanban-header-pending',
+ progress: 'kanban-header-progress',
+ done: 'kanban-header-done',
+}
+```
+
+2. **Column div uses CSS class instead of inline style:**
+```tsx
+// BEFORE:
+
+
+// AFTER:
+
+```
+
+3. **Header div simplified (removed duplicate color class):**
+```tsx
+// BEFORE:
+
+
+// AFTER:
+
+```
+
+4. **Title text color:**
+```tsx
+// BEFORE:
+text-[var(--color-neo-text-on-bright)]
+
+// AFTER:
+text-[var(--color-neo-text)]
+```
+
+---
+
+## 2. Playwright Browser Configuration
+
+### Overview
+
+Changed default Playwright settings for better performance:
+- **Default browser:** Firefox (lower CPU usage)
+- **Default mode:** Headless (saves resources)
+
+### Modified Files
+
+#### `client.py`
+
+**Changes:**
+
+```python
+# BEFORE:
+DEFAULT_PLAYWRIGHT_HEADLESS = False
+
+# AFTER:
+DEFAULT_PLAYWRIGHT_HEADLESS = True
+DEFAULT_PLAYWRIGHT_BROWSER = "firefox"
+```
+
+**New function added:**
+```python
+def get_playwright_browser() -> str:
+ """
+ Get the browser to use for Playwright.
+ Options: chrome, firefox, webkit, msedge
+ Firefox is recommended for lower CPU usage.
+ """
+ return os.getenv("PLAYWRIGHT_BROWSER", DEFAULT_PLAYWRIGHT_BROWSER).lower()
+```
+
+**Playwright args updated:**
+```python
+playwright_args = [
+ "@playwright/mcp@latest",
+ "--viewport-size", "1280x720",
+ "--browser", browser, # NEW: configurable browser
+]
+```
+
+---
+
+#### `.env.example`
+
+**Updated documentation:**
+```bash
+# PLAYWRIGHT_BROWSER: Which browser to use for testing
+# - firefox: Lower CPU usage, recommended (default)
+# - chrome: Google Chrome
+# - webkit: Safari engine
+# - msedge: Microsoft Edge
+# PLAYWRIGHT_BROWSER=firefox
+
+# PLAYWRIGHT_HEADLESS: Run browser without visible window
+# - true: Browser runs in background, saves CPU (default)
+# - false: Browser opens a visible window (useful for debugging)
+# PLAYWRIGHT_HEADLESS=true
+```
+
+---
+
+## 3. Update Checklist
+
+When updating AutoCoder from upstream, verify these items:
+
+### UI Changes
+- [ ] `ui/src/styles/custom-theme.css` is preserved
+- [ ] `ui/src/components/KanbanColumn.tsx` changes are preserved
+- [ ] Run `npm run build` in `ui/` directory
+- [ ] Test both light and dark modes
+
+### Backend Changes
+- [ ] `client.py` - Playwright browser/headless defaults preserved
+- [ ] `.env.example` - Documentation updates preserved
+
+### General
+- [ ] Verify Playwright uses Firefox by default
+- [ ] Check that browser runs headless by default
+
+---
+
+## Reverting to Defaults
+
+### UI Only
+```bash
+rm ui/src/styles/custom-theme.css
+git checkout ui/src/components/KanbanColumn.tsx
+cd ui && npm run build
+```
+
+### Backend Only
+```bash
+git checkout client.py .env.example
+```
+
+---
+
+## Files Summary
+
+| File | Type | Change Description |
+|------|------|-------------------|
+| `ui/src/styles/custom-theme.css` | UI | Twitter-style theme |
+| `ui/src/components/KanbanColumn.tsx` | UI | Themeable kanban columns |
+| `ui/src/main.tsx` | UI | Imports custom theme |
+| `client.py` | Backend | Firefox + headless defaults |
+| `.env.example` | Config | Updated documentation |
+
+---
+
+## Last Updated
+
+**Date:** January 2026
+**PR:** #93 - Twitter-style UI theme with custom theme override system
diff --git a/api/database.py b/api/database.py
index f3a0cce..90dc49a 100644
--- a/api/database.py
+++ b/api/database.py
@@ -336,12 +336,20 @@ def create_database(project_dir: Path) -> tuple:
"""
Create database and return engine + session maker.
+ Uses a cache to avoid creating new engines for each request, which improves
+ performance by reusing database connections.
+
Args:
project_dir: Directory containing the project
Returns:
Tuple of (engine, SessionLocal)
"""
+ cache_key = project_dir.as_posix()
+
+ if cache_key in _engine_cache:
+ return _engine_cache[cache_key]
+
db_url = get_database_url(project_dir)
engine = create_engine(db_url, connect_args={
"check_same_thread": False,
@@ -369,12 +377,39 @@ def create_database(project_dir: Path) -> tuple:
_migrate_add_schedules_tables(engine)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+
+ # Cache the engine and session maker
+ _engine_cache[cache_key] = (engine, SessionLocal)
+
return engine, SessionLocal
+def dispose_engine(project_dir: Path) -> bool:
+ """Dispose of and remove the cached engine for a project.
+
+ This closes all database connections, releasing file locks on Windows.
+ Should be called before deleting the database file.
+
+ Returns:
+ True if an engine was disposed, False if no engine was cached.
+ """
+ cache_key = project_dir.as_posix()
+
+ if cache_key in _engine_cache:
+ engine, _ = _engine_cache.pop(cache_key)
+ engine.dispose()
+ return True
+
+ return False
+
+
# Global session maker - will be set when server starts
_session_maker: Optional[sessionmaker] = None
+# Engine cache to avoid creating new engines for each request
+# Key: project directory path (as posix string), Value: (engine, SessionLocal)
+_engine_cache: dict[str, tuple] = {}
+
def set_session_maker(session_maker: sessionmaker) -> None:
"""Set the global session maker."""
diff --git a/api/dependency_resolver.py b/api/dependency_resolver.py
index 103cee7..6b09244 100644
--- a/api/dependency_resolver.py
+++ b/api/dependency_resolver.py
@@ -300,15 +300,20 @@ def compute_scheduling_scores(features: list[dict]) -> dict[int, float]:
parents[f["id"]].append(dep_id)
# Calculate depths via BFS from roots
+    # Visited set prevents infinite loops on cycles; each node keeps its first-visit (BFS) depth
depths: dict[int, int] = {}
+ visited: set[int] = set()
roots = [f["id"] for f in features if not parents[f["id"]]]
queue = [(root, 0) for root in roots]
while queue:
node_id, depth = queue.pop(0)
- if node_id not in depths or depth > depths[node_id]:
- depths[node_id] = depth
+ if node_id in visited:
+ continue # Skip already visited nodes (handles cycles)
+ visited.add(node_id)
+ depths[node_id] = depth
for child_id in children[node_id]:
- queue.append((child_id, depth + 1))
+ if child_id not in visited:
+ queue.append((child_id, depth + 1))
# Handle orphaned nodes (shouldn't happen but be safe)
for f in features:
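The cycle-safe BFS in `dependency_resolver.py` reduces to a small standalone function; this sketch uses `collections.deque` for O(1) dequeues where the patch uses `list.pop(0)`:

```python
from collections import deque


def bfs_depths(children: dict[int, list[int]], roots: list[int]) -> dict[int, int]:
    """Assign each reachable node its first-visit BFS depth; cycles are skipped."""
    depths: dict[int, int] = {}
    visited: set[int] = set()
    queue = deque((root, 0) for root in roots)
    while queue:
        node, depth = queue.popleft()
        if node in visited:
            continue  # already assigned a depth (handles cycles and diamonds)
        visited.add(node)
        depths[node] = depth
        for child in children.get(node, []):
            if child not in visited:
                queue.append((child, depth + 1))
    return depths
```

Note the visited set changes the semantics slightly: a node reachable by multiple paths keeps its shortest (first-visit) depth rather than its longest, which is the trade-off the patch accepts in exchange for termination on cyclic input.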
diff --git a/client.py b/client.py
index e844aa4..f394ebb 100644
--- a/client.py
+++ b/client.py
@@ -7,6 +7,7 @@ Functions for creating and configuring the Claude Agent SDK client.
import json
import os
+import re
import shutil
import sys
from pathlib import Path
@@ -21,12 +22,17 @@ from security import bash_security_hook
load_dotenv()
# Default Playwright headless mode - can be overridden via PLAYWRIGHT_HEADLESS env var
-# When True, browser runs invisibly in background
-# When False, browser window is visible (default - useful for monitoring agent progress)
-DEFAULT_PLAYWRIGHT_HEADLESS = False
+# When True, browser runs invisibly in background (default - saves CPU)
+# When False, browser window is visible (useful for monitoring agent progress)
+DEFAULT_PLAYWRIGHT_HEADLESS = True
+
+# Default browser for Playwright - can be overridden via PLAYWRIGHT_BROWSER env var
+# Options: chrome, firefox, webkit, msedge
+# Firefox is recommended for lower CPU usage
+DEFAULT_PLAYWRIGHT_BROWSER = "firefox"
# Environment variables to pass through to Claude CLI for API configuration
-# These allow using alternative API endpoints (e.g., GLM via z.ai) without
+# These allow using alternative API endpoints (e.g., GLM via z.ai, Vertex AI) without
# affecting the user's global Claude Code settings
API_ENV_VARS = [
"ANTHROPIC_BASE_URL", # Custom API endpoint (e.g., https://api.z.ai/api/anthropic)
@@ -35,19 +41,172 @@ API_ENV_VARS = [
"ANTHROPIC_DEFAULT_SONNET_MODEL", # Model override for Sonnet
"ANTHROPIC_DEFAULT_OPUS_MODEL", # Model override for Opus
"ANTHROPIC_DEFAULT_HAIKU_MODEL", # Model override for Haiku
+ # Vertex AI configuration
+ "CLAUDE_CODE_USE_VERTEX", # Enable Vertex AI mode (set to "1")
+ "CLOUD_ML_REGION", # GCP region (e.g., us-east5)
+ "ANTHROPIC_VERTEX_PROJECT_ID", # GCP project ID
]
+# Extra read paths for cross-project file access (read-only)
+# Set EXTRA_READ_PATHS environment variable with comma-separated absolute paths
+# Example: EXTRA_READ_PATHS=/Volumes/Data/dev,/Users/shared/libs
+EXTRA_READ_PATHS_VAR = "EXTRA_READ_PATHS"
+
+# Sensitive directories that should never be allowed via EXTRA_READ_PATHS
+# These contain credentials, keys, or system-critical files
+EXTRA_READ_PATHS_BLOCKLIST = {
+ ".ssh",
+ ".aws",
+ ".azure",
+ ".kube",
+ ".gnupg",
+ ".gpg",
+ ".password-store",
+ ".docker",
+ ".config/gcloud",
+ ".npmrc",
+ ".pypirc",
+ ".netrc",
+}
+
+def convert_model_for_vertex(model: str) -> str:
+ """
+ Convert model name format for Vertex AI compatibility.
+
+ Vertex AI uses @ to separate model name from version (e.g., claude-opus-4-5@20251101)
+ while the Anthropic API uses - (e.g., claude-opus-4-5-20251101).
+
+ Args:
+ model: Model name in Anthropic format (with hyphens)
+
+ Returns:
+ Model name in Vertex AI format (with @ before date) if Vertex AI is enabled,
+ otherwise returns the model unchanged.
+ """
+ # Only convert if Vertex AI is enabled
+ if os.getenv("CLAUDE_CODE_USE_VERTEX") != "1":
+ return model
+
+ # Pattern: claude-{name}-{version}-{date} -> claude-{name}-{version}@{date}
+ # Example: claude-opus-4-5-20251101 -> claude-opus-4-5@20251101
+ # The date is always 8 digits at the end
+ match = re.match(r'^(claude-.+)-(\d{8})$', model)
+ if match:
+ base_name, date = match.groups()
+ return f"{base_name}@{date}"
+
+ # If already in @ format or doesn't match expected pattern, return as-is
+ return model
+
def get_playwright_headless() -> bool:
"""
Get the Playwright headless mode setting.
- Reads from PLAYWRIGHT_HEADLESS environment variable, defaults to False.
+ Reads from PLAYWRIGHT_HEADLESS environment variable, defaults to True.
Returns True for headless mode (invisible browser), False for visible browser.
"""
- value = os.getenv("PLAYWRIGHT_HEADLESS", "false").lower()
- # Accept various truthy/falsy values
- return value in ("true", "1", "yes", "on")
+ value = os.getenv("PLAYWRIGHT_HEADLESS", str(DEFAULT_PLAYWRIGHT_HEADLESS).lower()).strip().lower()
+ truthy = {"true", "1", "yes", "on"}
+ falsy = {"false", "0", "no", "off"}
+ if value not in truthy | falsy:
+ print(f" - Warning: Invalid PLAYWRIGHT_HEADLESS='{value}', defaulting to {DEFAULT_PLAYWRIGHT_HEADLESS}")
+ return DEFAULT_PLAYWRIGHT_HEADLESS
+ return value in truthy
+
+
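The truthy/falsy parsing in `get_playwright_headless` generalizes to a small helper; a sketch under the assumption that unrecognized values should silently fall back to the default (the `env_bool` name is illustrative, not part of the patch):

```python
import os

TRUTHY = {"true", "1", "yes", "on"}
FALSY = {"false", "0", "no", "off"}


def env_bool(name: str, default: bool) -> bool:
    """Parse a boolean env var; unrecognized values fall back to the default."""
    value = os.getenv(name, str(default)).strip().lower()
    if value in TRUTHY:
        return True
    if value in FALSY:
        return False
    return default  # e.g. "banana" is neither truthy nor falsy
```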
+# Valid browsers supported by Playwright MCP
+VALID_PLAYWRIGHT_BROWSERS = {"chrome", "firefox", "webkit", "msedge"}
+
+
+def get_playwright_browser() -> str:
+ """
+ Get the browser to use for Playwright.
+
+ Reads from PLAYWRIGHT_BROWSER environment variable, defaults to firefox.
+ Options: chrome, firefox, webkit, msedge
+ Firefox is recommended for lower CPU usage.
+ """
+ value = os.getenv("PLAYWRIGHT_BROWSER", DEFAULT_PLAYWRIGHT_BROWSER).strip().lower()
+ if value not in VALID_PLAYWRIGHT_BROWSERS:
+ print(f" - Warning: Invalid PLAYWRIGHT_BROWSER='{value}', "
+ f"valid options: {', '.join(sorted(VALID_PLAYWRIGHT_BROWSERS))}. "
+ f"Defaulting to {DEFAULT_PLAYWRIGHT_BROWSER}")
+ return DEFAULT_PLAYWRIGHT_BROWSER
+ return value
+
+
+def get_extra_read_paths() -> list[Path]:
+ """
+ Get extra read-only paths from EXTRA_READ_PATHS environment variable.
+
+ Parses comma-separated absolute paths and validates each one:
+ - Must be an absolute path
+ - Must exist and be a directory
+ - Cannot be or contain sensitive directories (e.g., .ssh, .aws)
+
+ Returns:
+ List of validated, canonicalized Path objects.
+ """
+ raw_value = os.getenv(EXTRA_READ_PATHS_VAR, "").strip()
+ if not raw_value:
+ return []
+
+ validated_paths: list[Path] = []
+ home_dir = Path.home()
+
+ for path_str in raw_value.split(","):
+ path_str = path_str.strip()
+ if not path_str:
+ continue
+
+ # Parse and canonicalize the path
+ try:
+ path = Path(path_str).resolve()
+ except (OSError, ValueError) as e:
+ print(f" - Warning: Invalid EXTRA_READ_PATHS path '{path_str}': {e}")
+ continue
+
+ # Must be absolute (resolve() makes it absolute, but check original input)
+ if not Path(path_str).is_absolute():
+ print(f" - Warning: EXTRA_READ_PATHS requires absolute paths, skipping: {path_str}")
+ continue
+
+ # Must exist
+ if not path.exists():
+ print(f" - Warning: EXTRA_READ_PATHS path does not exist, skipping: {path_str}")
+ continue
+
+ # Must be a directory
+ if not path.is_dir():
+ print(f" - Warning: EXTRA_READ_PATHS path is not a directory, skipping: {path_str}")
+ continue
+
+ # Check against sensitive directory blocklist
+ is_blocked = False
+ for sensitive in EXTRA_READ_PATHS_BLOCKLIST:
+ sensitive_path = (home_dir / sensitive).resolve()
+ try:
+ # Block if path IS the sensitive dir or is INSIDE it
+ if path == sensitive_path or path.is_relative_to(sensitive_path):
+ print(f" - Warning: EXTRA_READ_PATHS blocked sensitive path: {path_str}")
+ is_blocked = True
+ break
+ # Also block if sensitive dir is INSIDE the requested path
+ if sensitive_path.is_relative_to(path):
+ print(f" - Warning: EXTRA_READ_PATHS path contains sensitive directory ({sensitive}): {path_str}")
+ is_blocked = True
+ break
+ except (OSError, ValueError):
+ # is_relative_to can raise on some edge cases
+ continue
+
+ if is_blocked:
+ continue
+
+ validated_paths.append(path)
+
+ return validated_paths
# Feature MCP tools for feature/test management
@@ -172,6 +331,16 @@ def create_client(
# Allow Feature MCP tools for feature management
*FEATURE_MCP_TOOLS,
]
+
+ # Add extra read paths from environment variable (read-only access)
+ # Paths are validated, canonicalized, and checked against sensitive blocklist
+ extra_read_paths = get_extra_read_paths()
+ for path in extra_read_paths:
+ # Add read-only permissions for each validated path
+ permissions_list.append(f"Read({path}/**)")
+ permissions_list.append(f"Glob({path}/**)")
+ permissions_list.append(f"Grep({path}/**)")
+
if not yolo_mode:
# Allow Playwright MCP tools for browser automation (standard mode only)
permissions_list.extend(PLAYWRIGHT_TOOLS)
@@ -198,6 +367,8 @@ def create_client(
print(f"Created security settings at {settings_file}")
print(" - Sandbox enabled (OS-level bash isolation)")
print(f" - Filesystem restricted to: {project_dir.resolve()}")
+ if extra_read_paths:
+ print(f" - Extra read paths (validated): {', '.join(str(p) for p in extra_read_paths)}")
print(" - Bash commands restricted to allowlist (see security.py)")
if yolo_mode:
print(" - MCP servers: features (database) - YOLO MODE (no Playwright)")
@@ -228,10 +399,16 @@ def create_client(
}
if not yolo_mode:
# Include Playwright MCP server for browser automation (standard mode only)
- # Headless mode is configurable via PLAYWRIGHT_HEADLESS environment variable
- playwright_args = ["@playwright/mcp@latest", "--viewport-size", "1280x720"]
+ # Browser and headless mode configurable via environment variables
+ browser = get_playwright_browser()
+ playwright_args = [
+ "@playwright/mcp@latest",
+ "--viewport-size", "1280x720",
+ "--browser", browser,
+ ]
if get_playwright_headless():
playwright_args.append("--headless")
+ print(f" - Browser: {browser} (headless={get_playwright_headless()})")
# Browser isolation for parallel execution
# Each agent gets its own isolated browser context to prevent tab conflicts
@@ -257,9 +434,21 @@ def create_client(
if value:
sdk_env[var] = value
+ # Detect alternative API mode (Ollama, GLM, or Vertex AI)
+ base_url = sdk_env.get("ANTHROPIC_BASE_URL", "")
+ is_vertex = sdk_env.get("CLAUDE_CODE_USE_VERTEX") == "1"
+ is_alternative_api = bool(base_url) or is_vertex
+ is_ollama = "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
+ model = convert_model_for_vertex(model)
if sdk_env:
print(f" - API overrides: {', '.join(sdk_env.keys())}")
- if "ANTHROPIC_BASE_URL" in sdk_env:
+ if is_vertex:
+ project_id = sdk_env.get("ANTHROPIC_VERTEX_PROJECT_ID", "unknown")
+ region = sdk_env.get("CLOUD_ML_REGION", "unknown")
+ print(f" - Vertex AI Mode: Using GCP project '{project_id}' with model '{model}' in region '{region}'")
+ elif is_ollama:
+ print(" - Ollama Mode: Using local models")
+ elif "ANTHROPIC_BASE_URL" in sdk_env:
print(f" - GLM Mode: Using {sdk_env['ANTHROPIC_BASE_URL']}")
# Create a wrapper for bash_security_hook that passes project_dir via context
@@ -336,7 +525,8 @@ def create_client(
# Enable extended context beta for better handling of long sessions.
# This provides up to 1M tokens of context with automatic compaction.
# See: https://docs.anthropic.com/en/api/beta-headers
- betas=["context-1m-2025-08-07"],
+ # Disabled for alternative APIs (Ollama, GLM, Vertex AI) as they don't support this beta.
+ betas=[] if is_alternative_api else ["context-1m-2025-08-07"],
# Note on context management:
# The Claude Agent SDK handles context management automatically through the
# underlying Claude Code CLI. When context approaches limits, the CLI
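The three-way blocklist check in `get_extra_read_paths` is the subtle part of the client changes: a path is rejected if it is a sensitive directory, lives inside one, or contains one. A minimal sketch, assuming both paths are already absolute and resolved (the real code resolves them first and uses a larger blocklist):

```python
from pathlib import Path

BLOCKLIST = {".ssh", ".aws"}  # abbreviated; the real list covers more credential dirs


def is_blocked(path: Path, home: Path) -> bool:
    """True if path is, lives inside, or contains a sensitive directory."""
    for name in BLOCKLIST:
        sensitive = home / name
        try:
            if path == sensitive or path.is_relative_to(sensitive):
                return True  # path is the sensitive dir, or is inside it
            if sensitive.is_relative_to(path):
                return True  # sensitive dir lives inside the requested path
        except (OSError, ValueError):
            continue  # is_relative_to can raise on odd inputs
    return False
```

The third case is easy to miss: allowing `Read(/home/user/**)` would expose `~/.ssh` transitively, so a path that merely *contains* a sensitive directory must also be rejected.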
diff --git a/parallel_orchestrator.py b/parallel_orchestrator.py
index d2db637..574cbd2 100644
--- a/parallel_orchestrator.py
+++ b/parallel_orchestrator.py
@@ -169,9 +169,11 @@ class ParallelOrchestrator:
# Thread-safe state
self._lock = threading.Lock()
# Coding agents: feature_id -> process
+ # Safe to key by feature_id because start_feature() checks for duplicates before spawning
self.running_coding_agents: dict[int, subprocess.Popen] = {}
- # Testing agents: feature_id -> process (feature being tested)
- self.running_testing_agents: dict[int, subprocess.Popen] = {}
+ # Testing agents: pid -> (feature_id, process)
+ # Keyed by PID (not feature_id) because multiple agents can test the same feature
+ self.running_testing_agents: dict[int, tuple[int, subprocess.Popen]] = {}
# Legacy alias for backward compatibility
self.running_agents = self.running_coding_agents
self.abort_events: dict[int, threading.Event] = {}
@@ -401,6 +403,10 @@ class ParallelOrchestrator:
if passing_count == 0:
return
+ # Don't spawn testing agents if all features are already complete
+ if self.get_all_complete():
+ return
+
# Spawn testing agents one at a time, re-checking limits each time
# This avoids TOCTOU race by holding lock during the decision
while True:
@@ -425,7 +431,10 @@ class ParallelOrchestrator:
# Spawn outside lock (I/O bound operation)
print(f"[DEBUG] Spawning testing agent ({spawn_index}/{desired})", flush=True)
- self._spawn_testing_agent()
+ success, msg = self._spawn_testing_agent()
+ if not success:
+ debug_log.log("TESTING", f"Spawn failed, stopping: {msg}")
+ return
def start_feature(self, feature_id: int, resume: bool = False) -> tuple[bool, str]:
"""Start a single coding agent for a feature.
@@ -500,14 +509,20 @@ class ParallelOrchestrator:
cmd.append("--yolo")
try:
- proc = subprocess.Popen(
- cmd,
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- text=True,
- cwd=str(AUTOCODER_ROOT),
- env={**os.environ, "PYTHONUNBUFFERED": "1"},
- )
+ # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+ # stdin=DEVNULL prevents blocking on stdin reads
+ popen_kwargs = {
+ "stdin": subprocess.DEVNULL,
+ "stdout": subprocess.PIPE,
+ "stderr": subprocess.STDOUT,
+ "text": True,
+ "cwd": str(AUTOCODER_ROOT), # Run from autocoder root for proper imports
+ "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+ }
+ if sys.platform == "win32":
+ popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+ proc = subprocess.Popen(cmd, **popen_kwargs)
except Exception as e:
# Reset in_progress on failure
session = self.get_session()
@@ -583,20 +598,27 @@ class ParallelOrchestrator:
cmd.extend(["--model", self.model])
try:
- proc = subprocess.Popen(
- cmd,
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- text=True,
- cwd=str(AUTOCODER_ROOT),
- env={**os.environ, "PYTHONUNBUFFERED": "1"},
- )
+ # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+ # stdin=DEVNULL prevents blocking on stdin reads
+ popen_kwargs = {
+ "stdin": subprocess.DEVNULL,
+ "stdout": subprocess.PIPE,
+ "stderr": subprocess.STDOUT,
+ "text": True,
+ "cwd": str(AUTOCODER_ROOT),
+ "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+ }
+ if sys.platform == "win32":
+ popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+ proc = subprocess.Popen(cmd, **popen_kwargs)
except Exception as e:
debug_log.log("TESTING", f"FAILED to spawn testing agent: {e}")
return False, f"Failed to start testing agent: {e}"
- # Register process with feature ID (same pattern as coding agents)
- self.running_testing_agents[feature_id] = proc
+ # Register process by PID (not feature_id) to avoid overwrites
+ # when multiple agents test the same feature
+ self.running_testing_agents[proc.pid] = (feature_id, proc)
testing_count = len(self.running_testing_agents)
# Start output reader thread with feature ID (same as coding agents)
@@ -634,14 +656,20 @@ class ParallelOrchestrator:
print("Running initializer agent...", flush=True)
- proc = subprocess.Popen(
- cmd,
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- text=True,
- cwd=str(AUTOCODER_ROOT),
- env={**os.environ, "PYTHONUNBUFFERED": "1"},
- )
+ # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+ # stdin=DEVNULL prevents blocking on stdin reads
+ popen_kwargs = {
+ "stdin": subprocess.DEVNULL,
+ "stdout": subprocess.PIPE,
+ "stderr": subprocess.STDOUT,
+ "text": True,
+ "cwd": str(AUTOCODER_ROOT),
+ "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+ }
+ if sys.platform == "win32":
+ popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+ proc = subprocess.Popen(cmd, **popen_kwargs)
debug_log.log("INIT", "Initializer subprocess started", pid=proc.pid)
@@ -699,6 +727,12 @@ class ParallelOrchestrator:
print(f"[Feature #{feature_id}] {line}", flush=True)
proc.wait()
finally:
+ # CRITICAL: Kill the process tree to clean up any child processes (e.g., Claude CLI)
+ # This prevents zombie processes from accumulating
+ try:
+ kill_process_tree(proc, timeout=2.0)
+ except Exception as e:
+ debug_log.log("CLEANUP", f"Error killing process tree for {agent_type} agent", error=str(e))
self._on_agent_complete(feature_id, proc.returncode, agent_type, proc)
def _signal_agent_completed(self):
@@ -767,11 +801,8 @@ class ParallelOrchestrator:
"""
if agent_type == "testing":
with self._lock:
- # Remove from dict by finding the feature_id for this proc
- for fid, p in list(self.running_testing_agents.items()):
- if p is proc:
- del self.running_testing_agents[fid]
- break
+ # Remove by PID
+ self.running_testing_agents.pop(proc.pid, None)
status = "completed" if return_code == 0 else "failed"
print(f"Feature #{feature_id} testing {status}", flush=True)
@@ -870,12 +901,17 @@ class ParallelOrchestrator:
with self._lock:
testing_items = list(self.running_testing_agents.items())
- for feature_id, proc in testing_items:
+ for pid, (feature_id, proc) in testing_items:
result = kill_process_tree(proc, timeout=5.0)
- debug_log.log("STOP", f"Killed testing agent for feature #{feature_id} (PID {proc.pid})",
+ debug_log.log("STOP", f"Killed testing agent for feature #{feature_id} (PID {pid})",
status=result.status, children_found=result.children_found,
children_terminated=result.children_terminated, children_killed=result.children_killed)
+ # Clear dict so get_status() doesn't report stale agents while
+ # _on_agent_complete callbacks are still in flight.
+ with self._lock:
+ self.running_testing_agents.clear()
+
async def run_loop(self):
"""Main orchestration loop."""
self.is_running = True
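The repeated `Popen` changes in `parallel_orchestrator.py` all follow one pattern, which could be factored into a helper; a sketch (the `spawn_agent` name is illustrative, the patch inlines this at each call site):

```python
import subprocess
import sys


def spawn_agent(cmd: list[str]) -> subprocess.Popen:
    """Spawn a child with merged output, no stdin, and no console window on Windows."""
    kwargs: dict = {
        "stdin": subprocess.DEVNULL,   # prevents blocking on stdin reads
        "stdout": subprocess.PIPE,
        "stderr": subprocess.STDOUT,   # merge stderr into stdout for one reader thread
        "text": True,
    }
    if sys.platform == "win32":
        # Windows-only flag; prevents a console window popping up per agent
        kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
    return subprocess.Popen(cmd, **kwargs)
```

Building the kwargs dict and adding `creationflags` conditionally keeps the call portable: `CREATE_NO_WINDOW` only exists in the `subprocess` module on Windows.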
diff --git a/registry.py b/registry.py
index 2949bf6..2e67f3c 100644
--- a/registry.py
+++ b/registry.py
@@ -16,7 +16,7 @@ from datetime import datetime
from pathlib import Path
from typing import Any
-from sqlalchemy import Column, DateTime, String, create_engine
+from sqlalchemy import Column, DateTime, Integer, String, create_engine, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
@@ -86,6 +86,7 @@ class Project(Base):
name = Column(String(50), primary_key=True, index=True)
path = Column(String, nullable=False) # POSIX format for cross-platform
created_at = Column(DateTime, nullable=False)
+ default_concurrency = Column(Integer, nullable=False, default=3)
class Settings(Base):
@@ -147,12 +148,26 @@ def _get_engine():
}
)
Base.metadata.create_all(bind=_engine)
+ _migrate_add_default_concurrency(_engine)
_SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=_engine)
logger.debug("Initialized registry database at: %s", db_path)
return _engine, _SessionLocal
+def _migrate_add_default_concurrency(engine) -> None:
+ """Add default_concurrency column if missing (for existing databases)."""
+ with engine.connect() as conn:
+ result = conn.execute(text("PRAGMA table_info(projects)"))
+ columns = [row[1] for row in result.fetchall()]
+ if "default_concurrency" not in columns:
+ conn.execute(text(
+ "ALTER TABLE projects ADD COLUMN default_concurrency INTEGER DEFAULT 3"
+ ))
+ conn.commit()
+ logger.info("Migrated projects table: added default_concurrency column")
+
+
@contextmanager
def _get_session():
"""
@@ -308,7 +323,8 @@ def list_registered_projects() -> dict[str, dict[str, Any]]:
return {
p.name: {
"path": p.path,
- "created_at": p.created_at.isoformat() if p.created_at else None
+ "created_at": p.created_at.isoformat() if p.created_at else None,
+ "default_concurrency": getattr(p, 'default_concurrency', 3) or 3
}
for p in projects
}
@@ -334,7 +350,8 @@ def get_project_info(name: str) -> dict[str, Any] | None:
return None
return {
"path": project.path,
- "created_at": project.created_at.isoformat() if project.created_at else None
+ "created_at": project.created_at.isoformat() if project.created_at else None,
+ "default_concurrency": getattr(project, 'default_concurrency', 3) or 3
}
finally:
session.close()
@@ -363,6 +380,55 @@ def update_project_path(name: str, new_path: Path) -> bool:
return True
+def get_project_concurrency(name: str) -> int:
+ """
+ Get project's default concurrency (1-5).
+
+ Args:
+ name: The project name.
+
+ Returns:
+ The default concurrency value (defaults to 3 if not set or project not found).
+ """
+ _, SessionLocal = _get_engine()
+ session = SessionLocal()
+ try:
+ project = session.query(Project).filter(Project.name == name).first()
+ if project is None:
+ return 3
+ return getattr(project, 'default_concurrency', 3) or 3
+ finally:
+ session.close()
+
+
+def set_project_concurrency(name: str, concurrency: int) -> bool:
+ """
+ Set project's default concurrency (1-5).
+
+ Args:
+ name: The project name.
+ concurrency: The concurrency value (1-5).
+
+ Returns:
+ True if updated, False if project wasn't found.
+
+ Raises:
+ ValueError: If concurrency is not between 1 and 5.
+ """
+ if concurrency < 1 or concurrency > 5:
+ raise ValueError("concurrency must be between 1 and 5")
+
+ with _get_session() as session:
+ project = session.query(Project).filter(Project.name == name).first()
+ if not project:
+ return False
+
+ project.default_concurrency = concurrency
+
+ logger.info("Set project '%s' default_concurrency to %d", name, concurrency)
+ return True
+
+
# =============================================================================
# Validation Functions
# =============================================================================
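The `PRAGMA table_info` migration in `registry.py` can be shown with the stdlib `sqlite3` module instead of SQLAlchemy; the pattern is the same (column 1 of each `table_info` row is the column name):

```python
import sqlite3


def migrate_add_column(conn: sqlite3.Connection) -> bool:
    """Add default_concurrency if missing; returns True if a migration ran."""
    cols = [row[1] for row in conn.execute("PRAGMA table_info(projects)")]
    if "default_concurrency" in cols:
        return False  # already migrated: safe to call on every startup
    conn.execute(
        "ALTER TABLE projects ADD COLUMN default_concurrency INTEGER DEFAULT 3"
    )
    conn.commit()
    return True
```

Because the check runs before the `ALTER TABLE`, the migration is idempotent, which is why the patch can call it unconditionally from `_get_engine()`.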
diff --git a/security.py b/security.py
index c15f3dc..024ad04 100644
--- a/security.py
+++ b/security.py
@@ -6,13 +6,22 @@ Pre-tool-use hooks that validate bash commands for security.
Uses an allowlist approach - only explicitly permitted commands can run.
"""
+import logging
import os
+import re
import shlex
from pathlib import Path
from typing import Optional
import yaml
+# Logger for security-related events (fallback parsing, validation failures, etc.)
+logger = logging.getLogger(__name__)
+
+# Regex pattern for valid pkill process names (no regex metacharacters allowed)
+# Matches alphanumeric names with dots, underscores, and hyphens
+VALID_PROCESS_NAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")
+
# Allowed commands for development tasks
# Minimal set needed for the autonomous coding demo
ALLOWED_COMMANDS = {
@@ -135,6 +144,45 @@ def split_command_segments(command_string: str) -> list[str]:
return result
+def _extract_primary_command(segment: str) -> str | None:
+ """
+ Fallback command extraction when shlex fails.
+
+ Extracts the first word that looks like a command, handling cases
+ like complex docker exec commands with nested quotes.
+
+ Args:
+ segment: The command segment to parse
+
+ Returns:
+ The primary command name, or None if extraction fails
+ """
+ # Remove leading whitespace
+ segment = segment.lstrip()
+
+ if not segment:
+ return None
+
+ # Skip env var assignments at start (VAR=value cmd)
+ words = segment.split()
+ while words and "=" in words[0] and not words[0].startswith("="):
+ words = words[1:]
+
+ if not words:
+ return None
+
+ # Extract first token (the command)
+ first_word = words[0]
+
+ # Match valid command characters (alphanumeric, dots, underscores, hyphens, slashes)
+ match = re.match(r"^([a-zA-Z0-9_./-]+)", first_word)
+ if match:
+ cmd = match.group(1)
+ return os.path.basename(cmd)
+
+ return None
+
+
def extract_commands(command_string: str) -> list[str]:
"""
Extract command names from a shell command string.
@@ -151,7 +199,6 @@ def extract_commands(command_string: str) -> list[str]:
commands = []
# shlex doesn't treat ; as a separator, so we need to pre-process
- import re
# Split on semicolons that aren't inside quotes (simple heuristic)
# This handles common cases like "echo hello; ls"
@@ -166,8 +213,21 @@ def extract_commands(command_string: str) -> list[str]:
tokens = shlex.split(segment)
except ValueError:
# Malformed command (unclosed quotes, etc.)
- # Return empty to trigger block (fail-safe)
- return []
+ # Try fallback extraction instead of blocking entirely
+ fallback_cmd = _extract_primary_command(segment)
+ if fallback_cmd:
+ logger.debug(
+ "shlex fallback used: segment=%r -> command=%r",
+ segment,
+ fallback_cmd,
+ )
+ commands.append(fallback_cmd)
+ else:
+ logger.debug(
+ "shlex fallback failed: segment=%r (no command extracted)",
+ segment,
+ )
+ continue
if not tokens:
continue
@@ -219,23 +279,37 @@ def extract_commands(command_string: str) -> list[str]:
return commands
-def validate_pkill_command(command_string: str) -> tuple[bool, str]:
+# Default pkill process names (hardcoded baseline, always available)
+DEFAULT_PKILL_PROCESSES = {
+ "node",
+ "npm",
+ "npx",
+ "vite",
+ "next",
+}
+
+
+def validate_pkill_command(
+ command_string: str,
+ extra_processes: Optional[set[str]] = None
+) -> tuple[bool, str]:
"""
Validate pkill commands - only allow killing dev-related processes.
Uses shlex to parse the command, avoiding regex bypass vulnerabilities.
+ Args:
+ command_string: The pkill command to validate
+ extra_processes: Optional set of additional process names to allow
+ (from org/project config pkill_processes)
+
Returns:
Tuple of (is_allowed, reason_if_blocked)
"""
- # Allowed process names for pkill
- allowed_process_names = {
- "node",
- "npm",
- "npx",
- "vite",
- "next",
- }
+ # Merge default processes with any extra configured processes
+ allowed_process_names = DEFAULT_PKILL_PROCESSES.copy()
+ if extra_processes:
+ allowed_process_names |= extra_processes
try:
tokens = shlex.split(command_string)
@@ -254,17 +328,19 @@ def validate_pkill_command(command_string: str) -> tuple[bool, str]:
if not args:
return False, "pkill requires a process name"
- # The target is typically the last non-flag argument
- target = args[-1]
+ # Validate every non-flag argument (pkill accepts multiple patterns on BSD)
+ # This defensively ensures no disallowed process can be targeted
+ targets = []
+ for arg in args:
+ # For -f flag (full command line match), take the first word as process name
+ # e.g., "pkill -f 'node server.js'" -> target is "node server.js", process is "node"
+ t = arg.split()[0] if " " in arg else arg
+ targets.append(t)
- # For -f flag (full command line match), extract the first word as process name
- # e.g., "pkill -f 'node server.js'" -> target is "node server.js", process is "node"
- if " " in target:
- target = target.split()[0]
-
- if target in allowed_process_names:
+ disallowed = [t for t in targets if t not in allowed_process_names]
+ if not disallowed:
return True, ""
- return False, f"pkill only allowed for dev processes: {allowed_process_names}"
+ return False, f"pkill only allowed for processes: {sorted(allowed_process_names)}"
def validate_chmod_command(command_string: str) -> tuple[bool, str]:
@@ -423,41 +499,74 @@ def load_org_config() -> Optional[dict]:
config = yaml.safe_load(f)
if not config:
+ logger.warning(f"Org config at {config_path} is empty")
return None
# Validate structure
if not isinstance(config, dict):
+ logger.warning(f"Org config at {config_path} must be a YAML dictionary")
return None
if "version" not in config:
+ logger.warning(f"Org config at {config_path} missing required 'version' field")
return None
# Validate allowed_commands if present
if "allowed_commands" in config:
allowed = config["allowed_commands"]
if not isinstance(allowed, list):
+ logger.warning(f"Org config at {config_path}: 'allowed_commands' must be a list")
return None
- for cmd in allowed:
+ for i, cmd in enumerate(allowed):
if not isinstance(cmd, dict):
+ logger.warning(f"Org config at {config_path}: allowed_commands[{i}] must be a dict")
return None
if "name" not in cmd:
+ logger.warning(f"Org config at {config_path}: allowed_commands[{i}] missing 'name'")
return None
# Validate that name is a non-empty string
if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
+ logger.warning(f"Org config at {config_path}: allowed_commands[{i}] has invalid 'name'")
return None
# Validate blocked_commands if present
if "blocked_commands" in config:
blocked = config["blocked_commands"]
if not isinstance(blocked, list):
+ logger.warning(f"Org config at {config_path}: 'blocked_commands' must be a list")
return None
- for cmd in blocked:
+ for i, cmd in enumerate(blocked):
if not isinstance(cmd, str):
+ logger.warning(f"Org config at {config_path}: blocked_commands[{i}] must be a string")
return None
+ # Validate pkill_processes if present
+ if "pkill_processes" in config:
+ processes = config["pkill_processes"]
+ if not isinstance(processes, list):
+ logger.warning(f"Org config at {config_path}: 'pkill_processes' must be a list")
+ return None
+ # Normalize and validate each process name against safe pattern
+ normalized = []
+ for i, proc in enumerate(processes):
+ if not isinstance(proc, str):
+ logger.warning(f"Org config at {config_path}: pkill_processes[{i}] must be a string")
+ return None
+ proc = proc.strip()
+ # Block empty strings and regex metacharacters
+ if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
+ logger.warning(f"Org config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
+ return None
+ normalized.append(proc)
+ config["pkill_processes"] = normalized
+
return config
- except (yaml.YAMLError, IOError, OSError):
+ except yaml.YAMLError as e:
+ logger.warning(f"Failed to parse org config at {config_path}: {e}")
+ return None
+ except (IOError, OSError) as e:
+ logger.warning(f"Failed to read org config at {config_path}: {e}")
return None
@@ -471,7 +580,7 @@ def load_project_commands(project_dir: Path) -> Optional[dict]:
Returns:
Dict with parsed YAML config, or None if file doesn't exist or is invalid
"""
- config_path = project_dir / ".autocoder" / "allowed_commands.yaml"
+ config_path = project_dir.resolve() / ".autocoder" / "allowed_commands.yaml"
if not config_path.exists():
return None
@@ -481,36 +590,68 @@ def load_project_commands(project_dir: Path) -> Optional[dict]:
config = yaml.safe_load(f)
if not config:
+ logger.warning(f"Project config at {config_path} is empty")
return None
# Validate structure
if not isinstance(config, dict):
+ logger.warning(f"Project config at {config_path} must be a YAML dictionary")
return None
if "version" not in config:
+ logger.warning(f"Project config at {config_path} missing required 'version' field")
return None
commands = config.get("commands", [])
if not isinstance(commands, list):
+ logger.warning(f"Project config at {config_path}: 'commands' must be a list")
return None
# Enforce 100 command limit
if len(commands) > 100:
+ logger.warning(f"Project config at {config_path} exceeds 100 command limit ({len(commands)} commands)")
return None
# Validate each command entry
- for cmd in commands:
+ for i, cmd in enumerate(commands):
if not isinstance(cmd, dict):
+ logger.warning(f"Project config at {config_path}: commands[{i}] must be a dict")
return None
if "name" not in cmd:
+ logger.warning(f"Project config at {config_path}: commands[{i}] missing 'name'")
return None
- # Validate name is a string
- if not isinstance(cmd["name"], str):
+ # Validate name is a non-empty string
+ if not isinstance(cmd["name"], str) or cmd["name"].strip() == "":
+ logger.warning(f"Project config at {config_path}: commands[{i}] has invalid 'name'")
return None
+ # Validate pkill_processes if present
+ if "pkill_processes" in config:
+ processes = config["pkill_processes"]
+ if not isinstance(processes, list):
+ logger.warning(f"Project config at {config_path}: 'pkill_processes' must be a list")
+ return None
+ # Normalize and validate each process name against safe pattern
+ normalized = []
+ for i, proc in enumerate(processes):
+ if not isinstance(proc, str):
+ logger.warning(f"Project config at {config_path}: pkill_processes[{i}] must be a string")
+ return None
+ proc = proc.strip()
+ # Block empty strings and regex metacharacters
+ if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
+ logger.warning(f"Project config at {config_path}: pkill_processes[{i}] has invalid value '{proc}'")
+ return None
+ normalized.append(proc)
+ config["pkill_processes"] = normalized
+
return config
- except (yaml.YAMLError, IOError, OSError):
+ except yaml.YAMLError as e:
+ logger.warning(f"Failed to parse project config at {config_path}: {e}")
+ return None
+ except (IOError, OSError) as e:
+ logger.warning(f"Failed to read project config at {config_path}: {e}")
return None
@@ -628,6 +769,42 @@ def get_project_allowed_commands(project_dir: Optional[Path]) -> set[str]:
return allowed
+def get_effective_pkill_processes(project_dir: Optional[Path]) -> set[str]:
+ """
+ Get effective pkill process names after hierarchy resolution.
+
+ Merges processes from:
+ 1. DEFAULT_PKILL_PROCESSES (hardcoded baseline)
+ 2. Org config pkill_processes
+ 3. Project config pkill_processes
+
+ Args:
+ project_dir: Path to the project directory, or None
+
+ Returns:
+ Set of allowed process names for pkill
+ """
+ # Start with default processes
+ processes = DEFAULT_PKILL_PROCESSES.copy()
+
+ # Add org-level pkill_processes
+ org_config = load_org_config()
+ if org_config:
+ org_processes = org_config.get("pkill_processes", [])
+ if isinstance(org_processes, list):
+ processes |= {p for p in org_processes if isinstance(p, str) and p.strip()}
+
+ # Add project-level pkill_processes
+ if project_dir:
+ project_config = load_project_commands(project_dir)
+ if project_config:
+ proj_processes = project_config.get("pkill_processes", [])
+ if isinstance(proj_processes, list):
+ processes |= {p for p in proj_processes if isinstance(p, str) and p.strip()}
+
+ return processes
+
+
def is_command_allowed(command: str, allowed_commands: set[str]) -> bool:
"""
Check if a command is allowed (supports patterns).
@@ -692,6 +869,9 @@ async def bash_security_hook(input_data, tool_use_id=None, context=None):
# Get effective commands using hierarchy resolution
allowed_commands, blocked_commands = get_effective_commands(project_dir)
+ # Get effective pkill processes (includes org/project config)
+ pkill_processes = get_effective_pkill_processes(project_dir)
+
# Split into segments for per-command validation
segments = split_command_segments(command)
@@ -725,7 +905,9 @@ async def bash_security_hook(input_data, tool_use_id=None, context=None):
cmd_segment = command # Fallback to full command
if cmd == "pkill":
- allowed, reason = validate_pkill_command(cmd_segment)
+ # Pass configured extra processes (beyond defaults)
+ extra_procs = pkill_processes - DEFAULT_PKILL_PROCESSES
+ allowed, reason = validate_pkill_command(cmd_segment, extra_procs if extra_procs else None)
if not allowed:
return {"decision": "block", "reason": reason}
elif cmd == "chmod":
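For readers skimming the hunk above: the `VALID_PROCESS_NAME_PATTERN.fullmatch` check is what keeps a configured `pkill_processes` entry from smuggling regex metacharacters into a later `pkill` match. A minimal sketch of the idea — the character class below is an assumption, since the real pattern is defined outside this hunk:

```python
import re

# Assumed definition: the real VALID_PROCESS_NAME_PATTERN lives elsewhere in
# the module; a conservative allowlist like this is the usual shape.
VALID_PROCESS_NAME_PATTERN = re.compile(r"[A-Za-z0-9._-]+")

def is_safe_process_name(proc: str) -> bool:
    """Mirror the config check: strip, reject empties, and require a full
    match so regex metacharacters can't widen what pkill targets."""
    proc = proc.strip()
    return bool(proc) and VALID_PROCESS_NAME_PATTERN.fullmatch(proc) is not None
```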
diff --git a/server/main.py b/server/main.py
index f3d3504..1b01f79 100644
--- a/server/main.py
+++ b/server/main.py
@@ -88,35 +88,49 @@ app = FastAPI(
lifespan=lifespan,
)
-# CORS - allow only localhost origins for security
-app.add_middleware(
- CORSMiddleware,
- allow_origins=[
- "http://localhost:5173", # Vite dev server
- "http://127.0.0.1:5173",
- "http://localhost:8888", # Production
- "http://127.0.0.1:8888",
- ],
- allow_credentials=True,
- allow_methods=["*"],
- allow_headers=["*"],
-)
+# Check if remote access is enabled via environment variable
+# Set by start_ui.py when --host is not 127.0.0.1
+ALLOW_REMOTE = os.environ.get("AUTOCODER_ALLOW_REMOTE", "").lower() in ("1", "true", "yes")
+
+# CORS - allow all origins when remote access is enabled, otherwise localhost only
+if ALLOW_REMOTE:
+ app.add_middleware(
+ CORSMiddleware,
+ allow_origins=["*"], # Allow all origins for remote access
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+ )
+else:
+ app.add_middleware(
+ CORSMiddleware,
+ allow_origins=[
+ "http://localhost:5173", # Vite dev server
+ "http://127.0.0.1:5173",
+ "http://localhost:8888", # Production
+ "http://127.0.0.1:8888",
+ ],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+ )
# ============================================================================
# Security Middleware
# ============================================================================
-@app.middleware("http")
-async def require_localhost(request: Request, call_next):
- """Only allow requests from localhost."""
- client_host = request.client.host if request.client else None
+if not ALLOW_REMOTE:
+ @app.middleware("http")
+ async def require_localhost(request: Request, call_next):
+ """Only allow requests from localhost (disabled when AUTOCODER_ALLOW_REMOTE=1)."""
+ client_host = request.client.host if request.client else None
- # Allow localhost connections
- if client_host not in ("127.0.0.1", "::1", "localhost", None):
- raise HTTPException(status_code=403, detail="Localhost access only")
+ # Allow localhost connections
+ if client_host not in ("127.0.0.1", "::1", "localhost", None):
+ raise HTTPException(status_code=403, detail="Localhost access only")
- return await call_next(request)
+ return await call_next(request)
# ============================================================================
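The `AUTOCODER_ALLOW_REMOTE` parsing above is strict string matching, not generic truthiness — `"0"`, `"on"`, or an unset variable all leave the localhost-only path active. The same check, isolated for testing (function name is illustrative):

```python
def remote_access_enabled(env: dict[str, str]) -> bool:
    """Same check as the module-level ALLOW_REMOTE flag: only "1", "true",
    or "yes" (any case) opt in to the permissive CORS configuration."""
    return env.get("AUTOCODER_ALLOW_REMOTE", "").lower() in ("1", "true", "yes")
```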
diff --git a/server/routers/agent.py b/server/routers/agent.py
index 309024d..422f86b 100644
--- a/server/routers/agent.py
+++ b/server/routers/agent.py
@@ -93,7 +93,7 @@ async def get_agent_status(project_name: str):
return AgentStatus(
status=manager.status,
pid=manager.pid,
- started_at=manager.started_at,
+ started_at=manager.started_at.isoformat() if manager.started_at else None,
yolo_mode=manager.yolo_mode,
model=manager.model,
parallel_mode=manager.parallel_mode,
diff --git a/server/routers/devserver.py b/server/routers/devserver.py
index 673bc3e..18f91ec 100644
--- a/server/routers/devserver.py
+++ b/server/routers/devserver.py
@@ -129,7 +129,7 @@ async def get_devserver_status(project_name: str) -> DevServerStatus:
pid=manager.pid,
url=manager.detected_url,
command=manager._command,
- started_at=manager.started_at,
+ started_at=manager.started_at.isoformat() if manager.started_at else None,
)
diff --git a/server/routers/features.py b/server/routers/features.py
index a830001..c4c9c27 100644
--- a/server/routers/features.py
+++ b/server/routers/features.py
@@ -551,9 +551,9 @@ async def skip_feature(project_name: str, feature_id: int):
if not feature:
raise HTTPException(status_code=404, detail=f"Feature {feature_id} not found")
- # Set priority to max + 1000 to push to end
+ # Set priority to max + 1 to push to end (consistent with MCP server)
max_priority = session.query(Feature).order_by(Feature.priority.desc()).first()
- feature.priority = (max_priority.priority if max_priority else 0) + 1000
+ feature.priority = (max_priority.priority + 1) if max_priority else 1
session.commit()
diff --git a/server/routers/projects.py b/server/routers/projects.py
index 68cf526..0f76ff9 100644
--- a/server/routers/projects.py
+++ b/server/routers/projects.py
@@ -18,6 +18,7 @@ from ..schemas import (
ProjectDetail,
ProjectPrompts,
ProjectPromptsUpdate,
+ ProjectSettingsUpdate,
ProjectStats,
ProjectSummary,
)
@@ -63,13 +64,23 @@ def _get_registry_functions():
sys.path.insert(0, str(root))
from registry import (
+ get_project_concurrency,
get_project_path,
list_registered_projects,
register_project,
+ set_project_concurrency,
unregister_project,
validate_project_path,
)
- return register_project, unregister_project, get_project_path, list_registered_projects, validate_project_path
+ return (
+ register_project,
+ unregister_project,
+ get_project_path,
+ list_registered_projects,
+ validate_project_path,
+ get_project_concurrency,
+ set_project_concurrency,
+ )
router = APIRouter(prefix="/api/projects", tags=["projects"])
@@ -102,7 +113,8 @@ def get_project_stats(project_dir: Path) -> ProjectStats:
async def list_projects():
"""List all registered projects."""
_init_imports()
- _, _, _, list_registered_projects, validate_project_path = _get_registry_functions()
+ (_, _, _, list_registered_projects, validate_project_path,
+ get_project_concurrency, _) = _get_registry_functions()
projects = list_registered_projects()
result = []
@@ -123,6 +135,7 @@ async def list_projects():
path=info["path"],
has_spec=has_spec,
stats=stats,
+ default_concurrency=info.get("default_concurrency", 3),
))
return result
@@ -132,7 +145,8 @@ async def list_projects():
async def create_project(project: ProjectCreate):
"""Create a new project at the specified path."""
_init_imports()
- register_project, _, get_project_path, list_registered_projects, _ = _get_registry_functions()
+ (register_project, _, get_project_path, list_registered_projects,
+ _, _, _) = _get_registry_functions()
name = validate_project_name(project.name)
project_path = Path(project.path).resolve()
@@ -203,6 +217,7 @@ async def create_project(project: ProjectCreate):
path=project_path.as_posix(),
has_spec=False, # Just created, no spec yet
stats=ProjectStats(passing=0, total=0, percentage=0.0),
+ default_concurrency=3,
)
@@ -210,7 +225,7 @@ async def create_project(project: ProjectCreate):
async def get_project(name: str):
"""Get detailed information about a project."""
_init_imports()
- _, _, get_project_path, _, _ = _get_registry_functions()
+ (_, _, get_project_path, _, _, get_project_concurrency, _) = _get_registry_functions()
name = validate_project_name(name)
project_dir = get_project_path(name)
@@ -231,6 +246,7 @@ async def get_project(name: str):
has_spec=has_spec,
stats=stats,
prompts_dir=str(prompts_dir),
+ default_concurrency=get_project_concurrency(name),
)
@@ -244,7 +260,7 @@ async def delete_project(name: str, delete_files: bool = False):
delete_files: If True, also delete the project directory and files
"""
_init_imports()
- _, unregister_project, get_project_path, _, _ = _get_registry_functions()
+ (_, unregister_project, get_project_path, _, _, _, _) = _get_registry_functions()
name = validate_project_name(name)
project_dir = get_project_path(name)
@@ -280,7 +296,7 @@ async def delete_project(name: str, delete_files: bool = False):
async def get_project_prompts(name: str):
"""Get the content of project prompt files."""
_init_imports()
- _, _, get_project_path, _, _ = _get_registry_functions()
+ (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
name = validate_project_name(name)
project_dir = get_project_path(name)
@@ -313,7 +329,7 @@ async def get_project_prompts(name: str):
async def update_project_prompts(name: str, prompts: ProjectPromptsUpdate):
"""Update project prompt files."""
_init_imports()
- _, _, get_project_path, _, _ = _get_registry_functions()
+ (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
name = validate_project_name(name)
project_dir = get_project_path(name)
@@ -343,7 +359,7 @@ async def update_project_prompts(name: str, prompts: ProjectPromptsUpdate):
async def get_project_stats_endpoint(name: str):
"""Get current progress statistics for a project."""
_init_imports()
- _, _, get_project_path, _, _ = _get_registry_functions()
+ (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
name = validate_project_name(name)
project_dir = get_project_path(name)
@@ -355,3 +371,121 @@ async def get_project_stats_endpoint(name: str):
raise HTTPException(status_code=404, detail="Project directory not found")
return get_project_stats(project_dir)
+
+
+@router.post("/{name}/reset")
+async def reset_project(name: str, full_reset: bool = False):
+ """
+ Reset a project to its initial state.
+
+ Args:
+ name: Project name to reset
+ full_reset: If True, also delete prompts/ directory (triggers setup wizard)
+
+ Returns:
+ Dictionary with list of deleted files and reset type
+ """
+ _init_imports()
+ (_, _, get_project_path, _, _, _, _) = _get_registry_functions()
+
+ name = validate_project_name(name)
+ project_dir = get_project_path(name)
+
+ if not project_dir:
+ raise HTTPException(status_code=404, detail=f"Project '{name}' not found")
+
+ if not project_dir.exists():
+ raise HTTPException(status_code=404, detail="Project directory not found")
+
+ # Check if agent is running
+ lock_file = project_dir / ".agent.lock"
+ if lock_file.exists():
+ raise HTTPException(
+ status_code=409,
+ detail="Cannot reset project while agent is running. Stop the agent first."
+ )
+
+ # Dispose of database engines to release file locks (required on Windows)
+ # Import here to avoid circular imports
+ from api.database import dispose_engine as dispose_features_engine
+ from server.services.assistant_database import dispose_engine as dispose_assistant_engine
+
+ dispose_features_engine(project_dir)
+ dispose_assistant_engine(project_dir)
+
+ deleted_files: list[str] = []
+
+ # Files to delete in quick reset
+ quick_reset_files = [
+ "features.db",
+ "features.db-wal", # WAL mode journal file
+ "features.db-shm", # WAL mode shared memory file
+ "assistant.db",
+ "assistant.db-wal",
+ "assistant.db-shm",
+ ".claude_settings.json",
+ ".claude_assistant_settings.json",
+ ]
+
+ for filename in quick_reset_files:
+ file_path = project_dir / filename
+ if file_path.exists():
+ try:
+ file_path.unlink()
+ deleted_files.append(filename)
+ except Exception as e:
+ raise HTTPException(status_code=500, detail=f"Failed to delete {filename}: {e}")
+
+ # Full reset: also delete prompts directory
+ if full_reset:
+ prompts_dir = project_dir / "prompts"
+ if prompts_dir.exists():
+ try:
+ shutil.rmtree(prompts_dir)
+ deleted_files.append("prompts/")
+ except Exception as e:
+ raise HTTPException(status_code=500, detail=f"Failed to delete prompts/: {e}")
+
+ return {
+ "success": True,
+ "reset_type": "full" if full_reset else "quick",
+ "deleted_files": deleted_files,
+ "message": f"Project '{name}' has been reset" + (" (full reset)" if full_reset else " (quick reset)")
+ }
+
+
+@router.patch("/{name}/settings", response_model=ProjectDetail)
+async def update_project_settings(name: str, settings: ProjectSettingsUpdate):
+ """Update project-level settings (concurrency, etc.)."""
+ _init_imports()
+ (_, _, get_project_path, _, _, get_project_concurrency,
+ set_project_concurrency) = _get_registry_functions()
+
+ name = validate_project_name(name)
+ project_dir = get_project_path(name)
+
+ if not project_dir:
+ raise HTTPException(status_code=404, detail=f"Project '{name}' not found")
+
+ if not project_dir.exists():
+ raise HTTPException(status_code=404, detail="Project directory not found")
+
+ # Update concurrency if provided
+ if settings.default_concurrency is not None:
+ success = set_project_concurrency(name, settings.default_concurrency)
+ if not success:
+ raise HTTPException(status_code=500, detail="Failed to update concurrency")
+
+ # Return updated project details
+ has_spec = _check_spec_exists(project_dir)
+ stats = get_project_stats(project_dir)
+ prompts_dir = _get_project_prompts_dir(project_dir)
+
+ return ProjectDetail(
+ name=name,
+ path=project_dir.as_posix(),
+ has_spec=has_spec,
+ stats=stats,
+ prompts_dir=str(prompts_dir),
+ default_concurrency=get_project_concurrency(name),
+ )
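The quick-reset file list above deletes the `-wal` and `-shm` sidecars alongside each `.db` file; SQLite in WAL mode keeps those two files next to the database, and a leftover WAL can replay stale pages into a database later recreated at the same path. A hedged sketch of the same pattern (file names here are illustrative):

```python
import tempfile
from pathlib import Path

# A database plus its WAL-mode sidecars must be removed together.
SIDECAR_SUFFIXES = ("", "-wal", "-shm")

def delete_sqlite_db(db_path: Path) -> list[str]:
    """Delete a SQLite database and its WAL/SHM sidecars; return what was removed."""
    deleted = []
    for suffix in SIDECAR_SUFFIXES:
        candidate = db_path.with_name(db_path.name + suffix)
        if candidate.exists():
            candidate.unlink()
            deleted.append(candidate.name)
    return deleted

with tempfile.TemporaryDirectory() as d:
    db = Path(d) / "features.db"
    db.write_text("")                           # only the db and its WAL exist here
    db.with_name("features.db-wal").write_text("")
    deleted = delete_sqlite_db(db)
```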
diff --git a/server/routers/schedules.py b/server/routers/schedules.py
index 7c6c4ed..2a11ba3 100644
--- a/server/routers/schedules.py
+++ b/server/routers/schedules.py
@@ -256,8 +256,8 @@ async def get_next_scheduled_run(project_name: str):
return NextRunResponse(
has_schedules=True,
- next_start=next_start if active_count == 0 else None,
- next_end=latest_end,
+ next_start=next_start.isoformat() if (active_count == 0 and next_start) else None,
+ next_end=latest_end.isoformat() if latest_end else None,
is_currently_running=active_count > 0,
active_schedule_count=active_count,
)
diff --git a/server/routers/settings.py b/server/routers/settings.py
index cf16045..8f3f906 100644
--- a/server/routers/settings.py
+++ b/server/routers/settings.py
@@ -40,7 +40,15 @@ def _parse_yolo_mode(value: str | None) -> bool:
def _is_glm_mode() -> bool:
"""Check if GLM API is configured via environment variables."""
- return bool(os.getenv("ANTHROPIC_BASE_URL"))
+ base_url = os.getenv("ANTHROPIC_BASE_URL", "")
+ # GLM mode is when ANTHROPIC_BASE_URL is set but NOT pointing to Ollama
+ return bool(base_url) and not _is_ollama_mode()
+
+
+def _is_ollama_mode() -> bool:
+ """Check if Ollama API is configured via environment variables."""
+ base_url = os.getenv("ANTHROPIC_BASE_URL", "")
+ return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
@router.get("/models", response_model=ModelsResponse)
@@ -82,6 +90,7 @@ async def get_settings():
yolo_mode=_parse_yolo_mode(all_settings.get("yolo_mode")),
model=all_settings.get("model", DEFAULT_MODEL),
glm_mode=_is_glm_mode(),
+ ollama_mode=_is_ollama_mode(),
testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
)
@@ -104,5 +113,6 @@ async def update_settings(update: SettingsUpdate):
yolo_mode=_parse_yolo_mode(all_settings.get("yolo_mode")),
model=all_settings.get("model", DEFAULT_MODEL),
glm_mode=_is_glm_mode(),
+ ollama_mode=_is_ollama_mode(),
testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
)
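The two mode probes above are plain substring checks on `ANTHROPIC_BASE_URL`. Extracted as pure functions (same logic, hypothetical names) they are easy to unit-test:

```python
OLLAMA_HOSTS = ("localhost:11434", "127.0.0.1:11434")

def is_ollama_url(base_url: str) -> bool:
    """True when the Anthropic-compatible endpoint is a local Ollama server."""
    return any(host in base_url for host in OLLAMA_HOSTS)

def is_glm_url(base_url: str) -> bool:
    """GLM mode: some base URL is configured and it is not Ollama."""
    return bool(base_url) and not is_ollama_url(base_url)
```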
diff --git a/server/schemas.py b/server/schemas.py
index 844aaa1..03e73ef 100644
--- a/server/schemas.py
+++ b/server/schemas.py
@@ -45,6 +45,7 @@ class ProjectSummary(BaseModel):
path: str
has_spec: bool
stats: ProjectStats
+ default_concurrency: int = 3
class ProjectDetail(BaseModel):
@@ -54,6 +55,7 @@ class ProjectDetail(BaseModel):
has_spec: bool
stats: ProjectStats
prompts_dir: str
+ default_concurrency: int = 3
class ProjectPrompts(BaseModel):
@@ -70,6 +72,18 @@ class ProjectPromptsUpdate(BaseModel):
coding_prompt: str | None = None
+class ProjectSettingsUpdate(BaseModel):
+ """Request schema for updating project-level settings."""
+ default_concurrency: int | None = None
+
+ @field_validator('default_concurrency')
+ @classmethod
+ def validate_concurrency(cls, v: int | None) -> int | None:
+ if v is not None and (v < 1 or v > 5):
+ raise ValueError("default_concurrency must be between 1 and 5")
+ return v
+
+
# ============================================================================
# Feature Schemas
# ============================================================================
@@ -382,6 +396,7 @@ class SettingsResponse(BaseModel):
yolo_mode: bool = False
model: str = DEFAULT_MODEL
glm_mode: bool = False # True if GLM API is configured via .env
+ ollama_mode: bool = False # True if Ollama API is configured via .env
testing_agent_ratio: int = 1 # Regression testing agents (0-3)
diff --git a/server/services/assistant_database.py b/server/services/assistant_database.py
index 1545310..f2ade75 100644
--- a/server/services/assistant_database.py
+++ b/server/services/assistant_database.py
@@ -79,6 +79,26 @@ def get_engine(project_dir: Path):
return _engine_cache[cache_key]
+def dispose_engine(project_dir: Path) -> bool:
+ """Dispose of and remove the cached engine for a project.
+
+ This closes all database connections, releasing file locks on Windows.
+ Should be called before deleting the database file.
+
+ Returns:
+ True if an engine was disposed, False if no engine was cached.
+ """
+ cache_key = project_dir.as_posix()
+
+ if cache_key in _engine_cache:
+ engine = _engine_cache.pop(cache_key)
+ engine.dispose()
+ logger.debug(f"Disposed database engine for {cache_key}")
+ return True
+
+ return False
+
+
def get_session(project_dir: Path):
"""Get a new database session for a project."""
engine = get_engine(project_dir)
diff --git a/server/services/dev_server_manager.py b/server/services/dev_server_manager.py
index 063e076..5acfbc8 100644
--- a/server/services/dev_server_manager.py
+++ b/server/services/dev_server_manager.py
@@ -428,7 +428,9 @@ class DevServerProcessManager:
# Global registry of dev server managers per project with thread safety
-_managers: dict[str, DevServerProcessManager] = {}
+# Key is (project_name, resolved_project_dir) to prevent cross-project contamination
+# when different projects share the same name but have different paths
+_managers: dict[tuple[str, str], DevServerProcessManager] = {}
_managers_lock = threading.Lock()
@@ -444,18 +446,11 @@ def get_devserver_manager(project_name: str, project_dir: Path) -> DevServerProc
DevServerProcessManager instance for the project
"""
with _managers_lock:
- if project_name in _managers:
- manager = _managers[project_name]
- # Update project_dir in case project was moved
- if manager.project_dir.resolve() != project_dir.resolve():
- logger.info(
- f"Project {project_name} path updated: {manager.project_dir} -> {project_dir}"
- )
- manager.project_dir = project_dir
- manager.lock_file = project_dir / ".devserver.lock"
- return manager
- _managers[project_name] = DevServerProcessManager(project_name, project_dir)
- return _managers[project_name]
+ # Use composite key to prevent cross-project UI contamination (#71)
+ key = (project_name, str(project_dir.resolve()))
+ if key not in _managers:
+ _managers[key] = DevServerProcessManager(project_name, project_dir)
+ return _managers[key]
async def cleanup_all_devservers() -> None:
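Both manager registries in this patch now key on `(project_name, resolved_path)` rather than name alone, which is what closes the cross-project contamination in #71. The pattern in isolation — the manager type is stubbed here:

```python
import threading
from pathlib import Path

class StubManager:
    """Placeholder for AgentProcessManager / DevServerProcessManager."""
    def __init__(self, name: str, project_dir: Path) -> None:
        self.name, self.project_dir = name, project_dir

_managers: dict[tuple[str, str], StubManager] = {}
_managers_lock = threading.Lock()

def get_manager(project_name: str, project_dir: Path) -> StubManager:
    """Composite key: same name + same resolved path reuses a manager;
    same name at a different path gets its own instance."""
    with _managers_lock:
        key = (project_name, str(project_dir.resolve()))
        if key not in _managers:
            _managers[key] = StubManager(project_name, project_dir)
        return _managers[key]
```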
diff --git a/server/services/expand_chat_session.py b/server/services/expand_chat_session.py
index f582e7b..58dd50d 100644
--- a/server/services/expand_chat_session.py
+++ b/server/services/expand_chat_session.py
@@ -10,8 +10,8 @@ import asyncio
import json
import logging
import os
-import re
import shutil
+import sys
import threading
import uuid
from datetime import datetime
@@ -38,6 +38,13 @@ API_ENV_VARS = [
"ANTHROPIC_DEFAULT_HAIKU_MODEL",
]
+# Feature MCP tools needed for expand session
+EXPAND_FEATURE_TOOLS = [
+ "mcp__features__feature_create",
+ "mcp__features__feature_create_bulk",
+ "mcp__features__feature_get_stats",
+]
+
async def _make_multimodal_message(content_blocks: list[dict]) -> AsyncGenerator[dict, None]:
"""
@@ -61,9 +68,8 @@ class ExpandChatSession:
Unlike SpecChatSession which writes spec files, this session:
1. Reads existing app_spec.txt for context
- 2. Parses feature definitions from Claude's output
- 3. Creates features via REST API
- 4. Tracks which features were created during the session
+ 2. Chats with the user to define new features
+ 3. Claude creates features via the feature_create_bulk MCP tool
"""
def __init__(self, project_name: str, project_dir: Path):
@@ -145,10 +151,14 @@ class ExpandChatSession:
return
# Create temporary security settings file (unique per session to avoid conflicts)
+ # Note: permission_mode="bypassPermissions" is safe here because:
+ # 1. Only Read/Glob file tools are allowed (no Write/Edit)
+ # 2. MCP tools are restricted to feature creation only
+ # 3. No Bash access - cannot execute arbitrary commands
security_settings = {
"sandbox": {"enabled": True},
"permissions": {
- "defaultMode": "acceptEdits",
+ "defaultMode": "bypassPermissions",
"allow": [
"Read(./**)",
"Glob(./**)",
@@ -171,6 +181,18 @@ class ExpandChatSession:
# This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names
model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", "claude-opus-4-5-20251101")
+ # Build MCP servers config for feature creation
+ mcp_servers = {
+ "features": {
+ "command": sys.executable,
+ "args": ["-m", "mcp_server.feature_mcp"],
+ "env": {
+ "PROJECT_DIR": str(self.project_dir.resolve()),
+ "PYTHONPATH": str(ROOT_DIR.resolve()),
+ },
+ },
+ }
+
# Create Claude SDK client
try:
self.client = ClaudeSDKClient(
@@ -181,8 +203,10 @@ class ExpandChatSession:
allowed_tools=[
"Read",
"Glob",
+ *EXPAND_FEATURE_TOOLS,
],
- permission_mode="acceptEdits",
+ mcp_servers=mcp_servers,
+ permission_mode="bypassPermissions",
max_turns=100,
cwd=str(self.project_dir.resolve()),
settings=str(settings_file.resolve()),
@@ -267,7 +291,8 @@ class ExpandChatSession:
"""
Internal method to query Claude and stream responses.
- Handles text responses and detects feature creation blocks.
+ Feature creation is handled by Claude calling the feature_create_bulk
+ MCP tool directly -- no text parsing needed.
"""
if not self.client:
return
@@ -291,9 +316,6 @@ class ExpandChatSession:
else:
await self.client.query(message)
- # Accumulate full response to detect feature blocks
- full_response = ""
-
# Stream the response
async for msg in self.client.receive_response():
msg_type = type(msg).__name__
@@ -305,7 +327,6 @@ class ExpandChatSession:
if block_type == "TextBlock" and hasattr(block, "text"):
text = block.text
if text:
- full_response += text
yield {"type": "text", "content": text}
self.messages.append({
@@ -314,123 +335,6 @@ class ExpandChatSession:
"timestamp": datetime.now().isoformat()
})
- # Check for feature creation blocks in full response (handle multiple blocks)
- features_matches = re.findall(
- r'\s*(\[[\s\S]*?\])\s*',
- full_response
- )
-
- if features_matches:
- # Collect all features from all blocks, deduplicating by name
- all_features: list[dict] = []
- seen_names: set[str] = set()
-
- for features_json in features_matches:
- try:
- features_data = json.loads(features_json)
-
- if features_data and isinstance(features_data, list):
- for feature in features_data:
- name = feature.get("name", "")
- if name and name not in seen_names:
- seen_names.add(name)
- all_features.append(feature)
- except json.JSONDecodeError as e:
- logger.error(f"Failed to parse features JSON block: {e}")
- # Continue processing other blocks
-
- if all_features:
- try:
- # Create all deduplicated features
- created = await self._create_features_bulk(all_features)
-
- if created:
- self.features_created += len(created)
- self.created_feature_ids.extend([f["id"] for f in created])
-
- yield {
- "type": "features_created",
- "count": len(created),
- "features": created
- }
-
- logger.info(f"Created {len(created)} features for {self.project_name}")
- except Exception:
- logger.exception("Failed to create features")
- yield {
- "type": "error",
- "content": "Failed to create features"
- }
-
- async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
- """
- Create features directly in the database.
-
- Args:
- features: List of feature dictionaries with category, name, description, steps
-
- Returns:
- List of created feature dictionaries with IDs
-
- Note:
- Uses flush() to get IDs immediately without re-querying by priority range,
- which could pick up rows from concurrent writers.
- """
- # Import database classes
- import sys
- root = Path(__file__).parent.parent.parent
- if str(root) not in sys.path:
- sys.path.insert(0, str(root))
-
- from api.database import Feature, create_database
-
- # Get database session
- _, SessionLocal = create_database(self.project_dir)
- session = SessionLocal()
-
- try:
- # Determine starting priority
- max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
- current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1
-
- created_rows: list = []
-
- for f in features:
- db_feature = Feature(
- priority=current_priority,
- category=f.get("category", "functional"),
- name=f.get("name", "Unnamed feature"),
- description=f.get("description", ""),
- steps=f.get("steps", []),
- passes=False,
- in_progress=False,
- )
- session.add(db_feature)
- created_rows.append(db_feature)
- current_priority += 1
-
- # Flush to get IDs without relying on priority range query
- session.flush()
-
- # Build result from the flushed objects (IDs are now populated)
- created_features = [
- {
- "id": db_feature.id,
- "name": db_feature.name,
- "category": db_feature.category,
- }
- for db_feature in created_rows
- ]
-
- session.commit()
- return created_features
-
- except Exception:
- session.rollback()
- raise
- finally:
- session.close()
-
def get_features_created(self) -> int:
"""Get the total number of features created in this session."""
return self.features_created
diff --git a/server/services/process_manager.py b/server/services/process_manager.py
index 350905f..fd1a192 100644
--- a/server/services/process_manager.py
+++ b/server/services/process_manager.py
@@ -349,14 +349,20 @@ class AgentProcessManager:
try:
# Start subprocess with piped stdout/stderr
# Use project_dir as cwd so Claude SDK sandbox allows access to project files
- # IMPORTANT: Set PYTHONUNBUFFERED to ensure output isn't delayed
- self.process = subprocess.Popen(
- cmd,
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- cwd=str(self.project_dir),
- env={**os.environ, "PYTHONUNBUFFERED": "1"},
- )
+ # stdin=DEVNULL prevents blocking if Claude CLI or child process tries to read stdin
+ # CREATE_NO_WINDOW on Windows prevents console window pop-ups
+ # PYTHONUNBUFFERED ensures output isn't delayed
+ popen_kwargs = {
+ "stdin": subprocess.DEVNULL,
+ "stdout": subprocess.PIPE,
+ "stderr": subprocess.STDOUT,
+ "cwd": str(self.project_dir),
+ "env": {**os.environ, "PYTHONUNBUFFERED": "1"},
+ }
+ if sys.platform == "win32":
+ popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
+
+ self.process = subprocess.Popen(cmd, **popen_kwargs)
# Atomic lock creation - if it fails, another process beat us
if not self._create_lock():
@@ -510,7 +516,9 @@ class AgentProcessManager:
# Global registry of process managers per project with thread safety
-_managers: dict[str, AgentProcessManager] = {}
+# Key is (project_name, resolved_project_dir) to prevent cross-project contamination
+# when different projects share the same name but have different paths
+_managers: dict[tuple[str, str], AgentProcessManager] = {}
_managers_lock = threading.Lock()
@@ -523,9 +531,11 @@ def get_manager(project_name: str, project_dir: Path, root_dir: Path) -> AgentPr
root_dir: Root directory of the autonomous-coding-ui project
"""
with _managers_lock:
- if project_name not in _managers:
- _managers[project_name] = AgentProcessManager(project_name, project_dir, root_dir)
- return _managers[project_name]
+ # Use composite key to prevent cross-project UI contamination (#71)
+ key = (project_name, str(project_dir.resolve()))
+ if key not in _managers:
+ _managers[key] = AgentProcessManager(project_name, project_dir, root_dir)
+ return _managers[key]
async def cleanup_all_managers() -> None:
diff --git a/start_ui.bat b/start_ui.bat
index 2c59753..c8ad646 100644
--- a/start_ui.bat
+++ b/start_ui.bat
@@ -39,5 +39,3 @@ pip install -r requirements.txt --quiet
REM Run the Python launcher
python "%~dp0start_ui.py" %*
-
-pause
diff --git a/start_ui.py b/start_ui.py
index 7270d27..3e619c1 100644
--- a/start_ui.py
+++ b/start_ui.py
@@ -13,12 +13,16 @@ Automated launcher that handles all setup:
7. Opens browser to the UI
Usage:
- python start_ui.py [--dev]
+ python start_ui.py [--dev] [--host HOST] [--port PORT]
Options:
- --dev Run in development mode with Vite hot reload
+ --dev Run in development mode with Vite hot reload
+ --host HOST Host to bind to (default: 127.0.0.1)
+ Use 0.0.0.0 for remote access (security warning will be shown)
+ --port PORT Port to bind to (default: 8888)
"""
+import argparse
import asyncio
import os
import shutil
@@ -133,10 +137,25 @@ def check_node() -> bool:
def install_npm_deps() -> bool:
- """Install npm dependencies if node_modules doesn't exist."""
+ """Install npm dependencies if node_modules doesn't exist or is stale."""
node_modules = UI_DIR / "node_modules"
+ package_json = UI_DIR / "package.json"
+ package_lock = UI_DIR / "package-lock.json"
- if node_modules.exists():
+ # Check if npm install is needed
+ needs_install = False
+
+ if not node_modules.exists():
+ needs_install = True
+ elif package_json.exists():
+ # If package.json or package-lock.json is newer than node_modules, reinstall
+ node_modules_mtime = node_modules.stat().st_mtime
+ if package_json.stat().st_mtime > node_modules_mtime:
+ needs_install = True
+ elif package_lock.exists() and package_lock.stat().st_mtime > node_modules_mtime:
+ needs_install = True
+
+ if not needs_install:
print(" npm dependencies already installed")
return True
@@ -235,26 +254,31 @@ def build_frontend() -> bool:
return run_command([npm_cmd, "run", "build"], cwd=UI_DIR)
-def start_dev_server(port: int) -> tuple:
+def start_dev_server(port: int, host: str = "127.0.0.1") -> tuple:
"""Start both Vite and FastAPI in development mode."""
venv_python = get_venv_python()
print("\n Starting development servers...")
- print(f" - FastAPI backend: http://127.0.0.1:{port}")
+ print(f" - FastAPI backend: http://{host}:{port}")
print(" - Vite frontend: http://127.0.0.1:5173")
+ # Set environment for remote access if needed
+ env = os.environ.copy()
+ if host != "127.0.0.1":
+ env["AUTOCODER_ALLOW_REMOTE"] = "1"
+
# Start FastAPI
backend = subprocess.Popen([
str(venv_python), "-m", "uvicorn",
"server.main:app",
- "--host", "127.0.0.1",
+ "--host", host,
"--port", str(port),
"--reload"
- ], cwd=str(ROOT))
+ ], cwd=str(ROOT), env=env)
# Start Vite with API port env var for proxy configuration
npm_cmd = "npm.cmd" if sys.platform == "win32" else "npm"
- vite_env = os.environ.copy()
+ vite_env = env.copy()
vite_env["VITE_API_PORT"] = str(port)
frontend = subprocess.Popen([
npm_cmd, "run", "dev"
@@ -263,15 +287,18 @@ def start_dev_server(port: int) -> tuple:
return backend, frontend
-def start_production_server(port: int):
- """Start FastAPI server in production mode with hot reload."""
+def start_production_server(port: int, host: str = "127.0.0.1"):
+ """Start FastAPI server in production mode."""
venv_python = get_venv_python()
- print(f"\n Starting server at http://127.0.0.1:{port} (with hot reload)")
+ print(f"\n Starting server at http://{host}:{port}")
- # Set PYTHONASYNCIODEBUG to help with Windows subprocess issues
env = os.environ.copy()
+ # Enable remote access in server if not localhost
+ if host != "127.0.0.1":
+ env["AUTOCODER_ALLOW_REMOTE"] = "1"
+
# NOTE: --reload is NOT used because on Windows it breaks asyncio subprocess
# support (uvicorn's reload worker doesn't inherit the ProactorEventLoop policy).
# This affects Claude SDK which uses asyncio.create_subprocess_exec.
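The NOTE above is the crux of the Windows behavior: `asyncio.create_subprocess_exec` requires the Proactor event loop, and uvicorn's `--reload` worker does not inherit the policy set in the parent process. A minimal sketch of the policy setup a server entry point could perform itself (the helper name here is hypothetical):

```python
import asyncio
import sys

def configure_event_loop_policy() -> None:
    """Ensure asyncio subprocess support on Windows.

    ProactorEventLoop is required for asyncio.create_subprocess_exec;
    the WindowsSelectorEventLoopPolicy does not support subprocesses.
    On non-Windows platforms this is a no-op.
    """
    if sys.platform == "win32":
        asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
```

Calling this inside the server process, rather than relying on a reload worker to inherit the parent's policy, is consistent with why the launcher drops `--reload` in production mode.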
@@ -279,14 +306,34 @@ def start_production_server(port: int):
return subprocess.Popen([
str(venv_python), "-m", "uvicorn",
"server.main:app",
- "--host", "127.0.0.1",
+ "--host", host,
"--port", str(port),
], cwd=str(ROOT), env=env)
def main() -> None:
"""Main entry point."""
- dev_mode = "--dev" in sys.argv
+ parser = argparse.ArgumentParser(description="AutoCoder UI Launcher")
+ parser.add_argument("--dev", action="store_true", help="Run in development mode with Vite hot reload")
+ parser.add_argument("--host", default="127.0.0.1", help="Host to bind to (default: 127.0.0.1)")
+ parser.add_argument("--port", type=int, default=None, help="Port to bind to (default: auto-detect from 8888)")
+ args = parser.parse_args()
+
+ dev_mode = args.dev
+ host = args.host
+
+ # Security warning for remote access
+ if host != "127.0.0.1":
+ print("\n" + "!" * 50)
+ print(" SECURITY WARNING")
+ print("!" * 50)
+ print(f" Remote access enabled on host: {host}")
+ print(" The AutoCoder UI will be accessible from other machines.")
+ print(" Ensure you understand the security implications:")
+ print(" - The agent has file system access to project directories")
+ print(" - The API can start/stop agents and modify files")
+ print(" - Consider using a firewall or VPN for protection")
+ print("!" * 50 + "\n")
print("=" * 50)
print(" AutoCoder UI Setup")
@@ -335,18 +382,20 @@ def main() -> None:
step = 5 if dev_mode else 6
print_step(step, total_steps, "Starting server")
- port = find_available_port()
+ port = args.port if args.port else find_available_port()
try:
if dev_mode:
- backend, frontend = start_dev_server(port)
+ backend, frontend = start_dev_server(port, host)
- # Open browser to Vite dev server
+ # Open browser to Vite dev server (always localhost for Vite)
time.sleep(3)
webbrowser.open("http://127.0.0.1:5173")
print("\n" + "=" * 50)
print(" Development mode active")
+ if host != "127.0.0.1":
+ print(f" Backend accessible at: http://{host}:{port}")
print(" Press Ctrl+C to stop")
print("=" * 50)
@@ -362,14 +411,15 @@ def main() -> None:
backend.wait()
frontend.wait()
else:
- server = start_production_server(port)
+ server = start_production_server(port, host)
- # Open browser
+ # Open browser (only if localhost)
time.sleep(2)
- webbrowser.open(f"http://127.0.0.1:{port}")
+ if host == "127.0.0.1":
+ webbrowser.open(f"http://127.0.0.1:{port}")
print("\n" + "=" * 50)
- print(f" Server running at http://127.0.0.1:{port}")
+ print(f" Server running at http://{host}:{port}")
print(" Press Ctrl+C to stop")
print("=" * 50)
diff --git a/test_client.py b/test_client.py
new file mode 100644
index 0000000..48f52c4
--- /dev/null
+++ b/test_client.py
@@ -0,0 +1,105 @@
+#!/usr/bin/env python3
+"""
+Client Utility Tests
+====================
+
+Tests for the client module utility functions.
+Run with: python test_client.py
+"""
+
+import os
+import unittest
+
+from client import convert_model_for_vertex
+
+
+class TestConvertModelForVertex(unittest.TestCase):
+ """Tests for convert_model_for_vertex function."""
+
+ def setUp(self):
+ """Save original env state."""
+ self._orig_vertex = os.environ.get("CLAUDE_CODE_USE_VERTEX")
+
+ def tearDown(self):
+ """Restore original env state."""
+ if self._orig_vertex is None:
+ os.environ.pop("CLAUDE_CODE_USE_VERTEX", None)
+ else:
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = self._orig_vertex
+
+ # --- Vertex AI disabled (default) ---
+
+ def test_returns_model_unchanged_when_vertex_disabled(self):
+ os.environ.pop("CLAUDE_CODE_USE_VERTEX", None)
+ self.assertEqual(
+ convert_model_for_vertex("claude-opus-4-5-20251101"),
+ "claude-opus-4-5-20251101",
+ )
+
+ def test_returns_model_unchanged_when_vertex_set_to_zero(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "0"
+ self.assertEqual(
+ convert_model_for_vertex("claude-opus-4-5-20251101"),
+ "claude-opus-4-5-20251101",
+ )
+
+ def test_returns_model_unchanged_when_vertex_set_to_empty(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = ""
+ self.assertEqual(
+ convert_model_for_vertex("claude-sonnet-4-5-20250929"),
+ "claude-sonnet-4-5-20250929",
+ )
+
+ # --- Vertex AI enabled: standard conversions ---
+
+ def test_converts_opus_model(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(
+ convert_model_for_vertex("claude-opus-4-5-20251101"),
+ "claude-opus-4-5@20251101",
+ )
+
+ def test_converts_sonnet_model(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(
+ convert_model_for_vertex("claude-sonnet-4-5-20250929"),
+ "claude-sonnet-4-5@20250929",
+ )
+
+ def test_converts_haiku_model(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(
+ convert_model_for_vertex("claude-3-5-haiku-20241022"),
+ "claude-3-5-haiku@20241022",
+ )
+
+ # --- Vertex AI enabled: already converted or non-matching ---
+
+ def test_already_vertex_format_unchanged(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(
+ convert_model_for_vertex("claude-opus-4-5@20251101"),
+ "claude-opus-4-5@20251101",
+ )
+
+ def test_non_claude_model_unchanged(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(
+ convert_model_for_vertex("gpt-4o"),
+ "gpt-4o",
+ )
+
+ def test_model_without_date_suffix_unchanged(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(
+ convert_model_for_vertex("claude-opus-4-5"),
+ "claude-opus-4-5",
+ )
+
+ def test_empty_string_unchanged(self):
+ os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+ self.assertEqual(convert_model_for_vertex(""), "")
+
+
+if __name__ == "__main__":
+ unittest.main()
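The test cases above pin down the conversion completely: when Vertex AI mode is on, a trailing `-YYYYMMDD` date suffix becomes `@YYYYMMDD`, and everything else passes through untouched. A hypothetical implementation consistent with these tests (the real `client.convert_model_for_vertex` may differ, e.g. in how it parses the env flag):

```python
import os
import re

def convert_model_for_vertex(model: str) -> str:
    # Sketch: Vertex AI model IDs use claude-<name>@<date> rather than
    # claude-<name>-<date>. Only rewrite when the flag is exactly "1",
    # matching the "0"/empty-string tests above.
    if os.environ.get("CLAUDE_CODE_USE_VERTEX") != "1":
        return model
    # Rewrite a trailing -YYYYMMDD suffix to @YYYYMMDD. Non-Claude models,
    # already-converted names (containing '@'), and names without a date
    # suffix do not match the pattern and are returned unchanged.
    return re.sub(r"^(claude-[\w.-]+?)-(\d{8})$", r"\1@\2", model)
```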
diff --git a/test_dependency_resolver.py b/test_dependency_resolver.py
new file mode 100644
index 0000000..2155023
--- /dev/null
+++ b/test_dependency_resolver.py
@@ -0,0 +1,426 @@
+#!/usr/bin/env python3
+"""
+Dependency Resolver Tests
+=========================
+
+Tests for the dependency resolver functions including cycle detection.
+Run with: python test_dependency_resolver.py
+"""
+
+import sys
+import time
+from concurrent.futures import ThreadPoolExecutor
+from concurrent.futures import TimeoutError as FuturesTimeoutError
+
+from api.dependency_resolver import (
+ are_dependencies_satisfied,
+ compute_scheduling_scores,
+ get_blocked_features,
+ get_blocking_dependencies,
+ get_ready_features,
+ resolve_dependencies,
+ would_create_circular_dependency,
+)
+
+
+def test_compute_scheduling_scores_simple_chain():
+ """Test scheduling scores for a simple linear dependency chain."""
+ print("\nTesting compute_scheduling_scores with simple chain:")
+
+ features = [
+ {"id": 1, "priority": 1, "dependencies": []},
+ {"id": 2, "priority": 2, "dependencies": [1]},
+ {"id": 3, "priority": 3, "dependencies": [2]},
+ ]
+
+ scores = compute_scheduling_scores(features)
+
+ # All features should have scores
+ passed = True
+ for f in features:
+ if f["id"] not in scores:
+ print(f" FAIL: Feature {f['id']} missing from scores")
+ passed = False
+
+ if passed:
+ # Root feature (1) should have highest score (unblocks most)
+ if scores[1] > scores[2] > scores[3]:
+ print(" PASS: Root feature has highest score, leaf has lowest")
+ else:
+ print(f" FAIL: Expected scores[1] > scores[2] > scores[3], got {scores}")
+ passed = False
+
+ return passed
+
+
+def test_compute_scheduling_scores_with_cycle():
+ """Test that compute_scheduling_scores handles circular dependencies without hanging."""
+ print("\nTesting compute_scheduling_scores with circular dependencies:")
+
+ # Create a cycle: 1 -> 2 -> 3 -> 1
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [3]},
+ {"id": 2, "priority": 2, "dependencies": [1]},
+ {"id": 3, "priority": 3, "dependencies": [2]},
+ ]
+
+ # Use timeout to detect infinite loop
+ def compute_with_timeout():
+ return compute_scheduling_scores(features)
+
+ start = time.time()
+ try:
+ with ThreadPoolExecutor(max_workers=1) as executor:
+ future = executor.submit(compute_with_timeout)
+ scores = future.result(timeout=5.0) # 5 second timeout
+
+ elapsed = time.time() - start
+
+ # Should complete quickly (< 1 second for 3 features)
+ if elapsed > 1.0:
+ print(f" FAIL: Took {elapsed:.2f}s (expected < 1s)")
+ return False
+
+ # All features should have scores (even cyclic ones)
+ if len(scores) == 3:
+ print(f" PASS: Completed in {elapsed:.3f}s with {len(scores)} scores")
+ return True
+ else:
+ print(f" FAIL: Expected 3 scores, got {len(scores)}")
+ return False
+
+ except FuturesTimeoutError:
+ print(" FAIL: Infinite loop detected (timed out after 5s)")
+ return False
+
+
+def test_compute_scheduling_scores_self_reference():
+ """Test scheduling scores with self-referencing dependency."""
+ print("\nTesting compute_scheduling_scores with self-reference:")
+
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [1]}, # Self-reference
+ {"id": 2, "priority": 2, "dependencies": []},
+ ]
+
+ start = time.time()
+ try:
+ with ThreadPoolExecutor(max_workers=1) as executor:
+ future = executor.submit(lambda: compute_scheduling_scores(features))
+ scores = future.result(timeout=5.0)
+
+ elapsed = time.time() - start
+
+ if elapsed > 1.0:
+ print(f" FAIL: Took {elapsed:.2f}s (expected < 1s)")
+ return False
+
+ if len(scores) == 2:
+ print(f" PASS: Completed in {elapsed:.3f}s with {len(scores)} scores")
+ return True
+ else:
+ print(f" FAIL: Expected 2 scores, got {len(scores)}")
+ return False
+
+ except FuturesTimeoutError:
+ print(" FAIL: Infinite loop detected (timed out after 5s)")
+ return False
+
+
+def test_compute_scheduling_scores_complex_cycle():
+ """Test scheduling scores with complex circular dependencies."""
+ print("\nTesting compute_scheduling_scores with complex cycle:")
+
+ # Features 1-3 form a cycle, feature 4 depends on 1
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [3]},
+ {"id": 2, "priority": 2, "dependencies": [1]},
+ {"id": 3, "priority": 3, "dependencies": [2]},
+ {"id": 4, "priority": 4, "dependencies": [1]}, # Outside cycle
+ ]
+
+ start = time.time()
+ try:
+ with ThreadPoolExecutor(max_workers=1) as executor:
+ future = executor.submit(lambda: compute_scheduling_scores(features))
+ scores = future.result(timeout=5.0)
+
+ elapsed = time.time() - start
+
+ if elapsed > 1.0:
+ print(f" FAIL: Took {elapsed:.2f}s (expected < 1s)")
+ return False
+
+ if len(scores) == 4:
+ print(f" PASS: Completed in {elapsed:.3f}s with {len(scores)} scores")
+ return True
+ else:
+ print(f" FAIL: Expected 4 scores, got {len(scores)}")
+ return False
+
+ except FuturesTimeoutError:
+ print(" FAIL: Infinite loop detected (timed out after 5s)")
+ return False
+
+
+def test_compute_scheduling_scores_diamond():
+ """Test scheduling scores with diamond dependency pattern."""
+ print("\nTesting compute_scheduling_scores with diamond pattern:")
+
+ # 1
+ # / \
+ # 2 3
+ # \ /
+ # 4
+ features = [
+ {"id": 1, "priority": 1, "dependencies": []},
+ {"id": 2, "priority": 2, "dependencies": [1]},
+ {"id": 3, "priority": 3, "dependencies": [1]},
+ {"id": 4, "priority": 4, "dependencies": [2, 3]},
+ ]
+
+ scores = compute_scheduling_scores(features)
+
+ # Feature 1 should have highest score (unblocks 2, 3, and transitively 4)
+ if scores[1] > scores[2] and scores[1] > scores[3] and scores[1] > scores[4]:
+ # Feature 4 should have lowest score (leaf, unblocks nothing)
+ if scores[4] < scores[2] and scores[4] < scores[3]:
+ print(" PASS: Root has highest score, leaf has lowest")
+ return True
+ else:
+ print(f" FAIL: Leaf should have lowest score. Scores: {scores}")
+ return False
+ else:
+ print(f" FAIL: Root should have highest score. Scores: {scores}")
+ return False
+
+
+def test_compute_scheduling_scores_empty():
+ """Test scheduling scores with empty feature list."""
+ print("\nTesting compute_scheduling_scores with empty list:")
+
+ scores = compute_scheduling_scores([])
+
+ if scores == {}:
+ print(" PASS: Returns empty dict for empty input")
+ return True
+ else:
+ print(f" FAIL: Expected empty dict, got {scores}")
+ return False
+
+
+def test_would_create_circular_dependency():
+ """Test cycle detection for new dependencies."""
+ print("\nTesting would_create_circular_dependency:")
+
+ # Current dependencies: 2 depends on 1, 3 depends on 2
+ # Dependency chain: 3 -> 2 -> 1 (arrows mean "depends on")
+ features = [
+ {"id": 1, "priority": 1, "dependencies": []},
+ {"id": 2, "priority": 2, "dependencies": [1]},
+ {"id": 3, "priority": 3, "dependencies": [2]},
+ ]
+
+ passed = True
+
+ # source_id gains dependency on target_id
+ # Adding "1 depends on 3" would create cycle: 1 -> 3 -> 2 -> 1
+ if would_create_circular_dependency(features, 1, 3):
+ print(" PASS: Detected cycle when adding 1 depends on 3")
+ else:
+ print(" FAIL: Should detect cycle when adding 1 depends on 3")
+ passed = False
+
+ # Adding "3 depends on 1" would NOT create cycle (redundant but not circular)
+ if not would_create_circular_dependency(features, 3, 1):
+ print(" PASS: No false positive for 3 depends on 1")
+ else:
+ print(" FAIL: False positive for 3 depends on 1")
+ passed = False
+
+ # Self-reference should be detected
+ if would_create_circular_dependency(features, 1, 1):
+ print(" PASS: Detected self-reference")
+ else:
+ print(" FAIL: Should detect self-reference")
+ passed = False
+
+ return passed
+
+
+def test_resolve_dependencies_with_cycle():
+ """Test resolve_dependencies detects and reports cycles."""
+ print("\nTesting resolve_dependencies with cycle:")
+
+ # Create a cycle: 1 -> 2 -> 3 -> 1
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [3]},
+ {"id": 2, "priority": 2, "dependencies": [1]},
+ {"id": 3, "priority": 3, "dependencies": [2]},
+ ]
+
+ result = resolve_dependencies(features)
+
+ # Should report circular dependencies
+ if result["circular_dependencies"]:
+ print(f" PASS: Detected cycle: {result['circular_dependencies']}")
+ return True
+ else:
+ print(" FAIL: Should report circular dependencies")
+ return False
+
+
+def test_are_dependencies_satisfied():
+ """Test dependency satisfaction checking."""
+ print("\nTesting are_dependencies_satisfied:")
+
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [], "passes": True},
+ {"id": 2, "priority": 2, "dependencies": [1], "passes": False},
+ {"id": 3, "priority": 3, "dependencies": [2], "passes": False},
+ ]
+
+ passed = True
+
+ # Feature 1 has no deps, should be satisfied
+ if are_dependencies_satisfied(features[0], features):
+ print(" PASS: Feature 1 (no deps) is satisfied")
+ else:
+ print(" FAIL: Feature 1 should be satisfied")
+ passed = False
+
+ # Feature 2 depends on 1 which passes, should be satisfied
+ if are_dependencies_satisfied(features[1], features):
+ print(" PASS: Feature 2 (dep on passing) is satisfied")
+ else:
+ print(" FAIL: Feature 2 should be satisfied")
+ passed = False
+
+ # Feature 3 depends on 2 which doesn't pass, should NOT be satisfied
+ if not are_dependencies_satisfied(features[2], features):
+ print(" PASS: Feature 3 (dep on non-passing) is not satisfied")
+ else:
+ print(" FAIL: Feature 3 should not be satisfied")
+ passed = False
+
+ return passed
+
+
+def test_get_blocking_dependencies():
+ """Test getting blocking dependency IDs."""
+ print("\nTesting get_blocking_dependencies:")
+
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [], "passes": True},
+ {"id": 2, "priority": 2, "dependencies": [], "passes": False},
+ {"id": 3, "priority": 3, "dependencies": [1, 2], "passes": False},
+ ]
+
+ blocking = get_blocking_dependencies(features[2], features)
+
+ # Only feature 2 should be blocking (1 passes)
+ if blocking == [2]:
+ print(" PASS: Correctly identified blocking dependency")
+ return True
+ else:
+ print(f" FAIL: Expected [2], got {blocking}")
+ return False
+
+
+def test_get_ready_features():
+ """Test getting ready features."""
+ print("\nTesting get_ready_features:")
+
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [], "passes": True},
+ {"id": 2, "priority": 2, "dependencies": [], "passes": False, "in_progress": False},
+ {"id": 3, "priority": 3, "dependencies": [1], "passes": False, "in_progress": False},
+ {"id": 4, "priority": 4, "dependencies": [2], "passes": False, "in_progress": False},
+ ]
+
+ ready = get_ready_features(features)
+
+ # Features 2 and 3 should be ready
+ # Feature 1 passes, feature 4 blocked by 2
+ ready_ids = [f["id"] for f in ready]
+
+ if 2 in ready_ids and 3 in ready_ids:
+ if 1 not in ready_ids and 4 not in ready_ids:
+ print(f" PASS: Ready features: {ready_ids}")
+ return True
+ else:
+ print(f" FAIL: Should not include passing/blocked. Got: {ready_ids}")
+ return False
+ else:
+ print(f" FAIL: Should include 2 and 3. Got: {ready_ids}")
+ return False
+
+
+def test_get_blocked_features():
+ """Test getting blocked features."""
+ print("\nTesting get_blocked_features:")
+
+ features = [
+ {"id": 1, "priority": 1, "dependencies": [], "passes": False},
+ {"id": 2, "priority": 2, "dependencies": [1], "passes": False},
+ ]
+
+ blocked = get_blocked_features(features)
+
+ # Feature 2 should be blocked by 1
+ if len(blocked) == 1 and blocked[0]["id"] == 2:
+ if blocked[0]["blocked_by"] == [1]:
+ print(" PASS: Correctly identified blocked feature")
+ return True
+ else:
+ print(f" FAIL: Wrong blocked_by: {blocked[0]['blocked_by']}")
+ return False
+ else:
+ print(f" FAIL: Expected feature 2 blocked, got: {blocked}")
+ return False
+
+
+def run_all_tests():
+ """Run all tests and report results."""
+ print("=" * 60)
+ print("Dependency Resolver Tests")
+ print("=" * 60)
+
+ tests = [
+ test_compute_scheduling_scores_simple_chain,
+ test_compute_scheduling_scores_with_cycle,
+ test_compute_scheduling_scores_self_reference,
+ test_compute_scheduling_scores_complex_cycle,
+ test_compute_scheduling_scores_diamond,
+ test_compute_scheduling_scores_empty,
+ test_would_create_circular_dependency,
+ test_resolve_dependencies_with_cycle,
+ test_are_dependencies_satisfied,
+ test_get_blocking_dependencies,
+ test_get_ready_features,
+ test_get_blocked_features,
+ ]
+
+ passed = 0
+ failed = 0
+
+ for test in tests:
+ try:
+ if test():
+ passed += 1
+ else:
+ failed += 1
+ except Exception as e:
+ print(f" ERROR: {e}")
+ failed += 1
+
+ print("\n" + "=" * 60)
+ print(f"Results: {passed} passed, {failed} failed")
+ print("=" * 60)
+
+ return failed == 0
+
+
+if __name__ == "__main__":
+ success = run_all_tests()
+ sys.exit(0 if success else 1)
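The cycle check exercised by `test_would_create_circular_dependency` reduces to a reachability query: adding "source depends on target" creates a cycle exactly when source is already reachable from target through the existing dependency edges. A self-contained sketch consistent with those tests (an assumed implementation; `api.dependency_resolver` may differ):

```python
def would_create_circular_dependency(features, source_id, target_id):
    """Return True if adding 'source_id depends on target_id' creates a cycle.

    Iterative DFS from target_id over existing dependency edges; a cycle
    appears iff we can reach source_id. A self-reference is always a cycle.
    """
    if source_id == target_id:
        return True
    deps = {f["id"]: set(f.get("dependencies", [])) for f in features}
    seen, stack = set(), [target_id]
    while stack:
        node = stack.pop()
        if node == source_id:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(deps.get(node, ()))
    return False
```

Note that the redundant-but-legal case in the tests (3 already transitively depends on 1, then adding a direct 3-on-1 edge) falls out naturally: 3 cannot be reached from 1, so no cycle is reported.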
diff --git a/test_security.py b/test_security.py
index 5b46cfe..d8cb256 100644
--- a/test_security.py
+++ b/test_security.py
@@ -18,11 +18,13 @@ from security import (
bash_security_hook,
extract_commands,
get_effective_commands,
+ get_effective_pkill_processes,
load_org_config,
load_project_commands,
matches_pattern,
validate_chmod_command,
validate_init_script,
+ validate_pkill_command,
validate_project_command,
)
@@ -105,6 +107,8 @@ def test_extract_commands():
("/usr/bin/node script.js", ["node"]),
("VAR=value ls", ["ls"]),
("git status || git init", ["git", "git"]),
+ # Fallback parser test: complex nested quotes that break shlex
+ ('docker exec container php -r "echo \\"test\\";"', ["docker"]),
]
for cmd, expected in test_cases:
@@ -451,6 +455,21 @@ commands:
print(" FAIL: Non-allowed command 'rustc' should be blocked")
failed += 1
+ # Test 4: Empty command name is rejected
+ config_path.write_text("""version: 1
+commands:
+ - name: ""
+ description: Empty name should be rejected
+""")
+ result = load_project_commands(project_dir)
+ if result is None:
+ print(" PASS: Empty command name rejected in project config")
+ passed += 1
+ else:
+ print(" FAIL: Empty command name should be rejected in project config")
+ print(f" Got: {result}")
+ failed += 1
+
return passed, failed
@@ -670,6 +689,240 @@ blocked_commands:
return passed, failed
+def test_pkill_extensibility():
+ """Test that pkill processes can be extended via config."""
+ print("\nTesting pkill process extensibility:\n")
+ passed = 0
+ failed = 0
+
+ # Test 1: Default processes work without config
+ allowed, reason = validate_pkill_command("pkill node")
+ if allowed:
+ print(" PASS: Default process 'node' allowed")
+ passed += 1
+ else:
+ print(f" FAIL: Default process 'node' should be allowed: {reason}")
+ failed += 1
+
+ # Test 2: Non-default process blocked without config
+ allowed, reason = validate_pkill_command("pkill python")
+ if not allowed:
+ print(" PASS: Non-default process 'python' blocked without config")
+ passed += 1
+ else:
+ print(" FAIL: Non-default process 'python' should be blocked without config")
+ failed += 1
+
+ # Test 3: Extra processes allowed when passed
+ allowed, reason = validate_pkill_command("pkill python", extra_processes={"python"})
+ if allowed:
+ print(" PASS: Extra process 'python' allowed when configured")
+ passed += 1
+ else:
+ print(f" FAIL: Extra process 'python' should be allowed when configured: {reason}")
+ failed += 1
+
+ # Test 4: Default processes still work with extra processes
+ allowed, reason = validate_pkill_command("pkill npm", extra_processes={"python"})
+ if allowed:
+ print(" PASS: Default process 'npm' still works with extra processes")
+ passed += 1
+ else:
+ print(f" FAIL: Default process should still work: {reason}")
+ failed += 1
+
+ # Test 5: Test get_effective_pkill_processes with org config
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ org_dir = Path(tmphome) / ".autocoder"
+ org_dir.mkdir()
+ org_config_path = org_dir / "config.yaml"
+
+ # Create org config with extra pkill processes
+ org_config_path.write_text("""version: 1
+pkill_processes:
+ - python
+ - uvicorn
+""")
+
+ project_dir = Path(tmpproject)
+ processes = get_effective_pkill_processes(project_dir)
+
+ # Should include defaults + org processes
+ if "node" in processes and "python" in processes and "uvicorn" in processes:
+ print(" PASS: Org pkill_processes merged with defaults")
+ passed += 1
+ else:
+ print(f" FAIL: Expected node, python, uvicorn in {processes}")
+ failed += 1
+
+ # Test 6: Test get_effective_pkill_processes with project config
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ project_dir = Path(tmpproject)
+ project_autocoder = project_dir / ".autocoder"
+ project_autocoder.mkdir()
+ project_config = project_autocoder / "allowed_commands.yaml"
+
+ # Create project config with extra pkill processes
+ project_config.write_text("""version: 1
+commands: []
+pkill_processes:
+ - gunicorn
+ - flask
+""")
+
+ processes = get_effective_pkill_processes(project_dir)
+
+ # Should include defaults + project processes
+ if "node" in processes and "gunicorn" in processes and "flask" in processes:
+ print(" PASS: Project pkill_processes merged with defaults")
+ passed += 1
+ else:
+ print(f" FAIL: Expected node, gunicorn, flask in {processes}")
+ failed += 1
+
+ # Test 7: Integration test - pkill python blocked by default
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ project_dir = Path(tmpproject)
+ input_data = {"tool_name": "Bash", "tool_input": {"command": "pkill python"}}
+ context = {"project_dir": str(project_dir)}
+ result = asyncio.run(bash_security_hook(input_data, context=context))
+
+ if result.get("decision") == "block":
+ print(" PASS: pkill python blocked without config")
+ passed += 1
+ else:
+ print(" FAIL: pkill python should be blocked without config")
+ failed += 1
+
+ # Test 8: Integration test - pkill python allowed with org config
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ org_dir = Path(tmphome) / ".autocoder"
+ org_dir.mkdir()
+ org_config_path = org_dir / "config.yaml"
+
+ org_config_path.write_text("""version: 1
+pkill_processes:
+ - python
+""")
+
+ project_dir = Path(tmpproject)
+ input_data = {"tool_name": "Bash", "tool_input": {"command": "pkill python"}}
+ context = {"project_dir": str(project_dir)}
+ result = asyncio.run(bash_security_hook(input_data, context=context))
+
+ if result.get("decision") != "block":
+ print(" PASS: pkill python allowed with org config")
+ passed += 1
+ else:
+ print(f" FAIL: pkill python should be allowed with org config: {result}")
+ failed += 1
+
+ # Test 9: Regex metacharacters should be rejected in pkill_processes
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ org_dir = Path(tmphome) / ".autocoder"
+ org_dir.mkdir()
+ org_config_path = org_dir / "config.yaml"
+
+ # Try to register a regex pattern (should be rejected)
+ org_config_path.write_text("""version: 1
+pkill_processes:
+ - ".*"
+""")
+
+ config = load_org_config()
+ if config is None:
+ print(" PASS: Regex pattern '.*' rejected in pkill_processes")
+ passed += 1
+ else:
+ print(" FAIL: Regex pattern '.*' should be rejected")
+ failed += 1
+
+ # Test 10: Valid process names with dots/underscores/hyphens should be accepted
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ org_dir = Path(tmphome) / ".autocoder"
+ org_dir.mkdir()
+ org_config_path = org_dir / "config.yaml"
+
+ # Valid names with special chars
+ org_config_path.write_text("""version: 1
+pkill_processes:
+ - my-app
+ - app_server
+ - node.js
+""")
+
+ config = load_org_config()
+ if config is not None and config.get("pkill_processes") == ["my-app", "app_server", "node.js"]:
+ print(" PASS: Valid process names with dots/underscores/hyphens accepted")
+ passed += 1
+ else:
+ print(f" FAIL: Valid process names should be accepted: {config}")
+ failed += 1
+
+ # Test 11: Names with spaces should be rejected
+ with tempfile.TemporaryDirectory() as tmphome:
+ with tempfile.TemporaryDirectory() as tmpproject:
+ with temporary_home(tmphome):
+ org_dir = Path(tmphome) / ".autocoder"
+ org_dir.mkdir()
+ org_config_path = org_dir / "config.yaml"
+
+ org_config_path.write_text("""version: 1
+pkill_processes:
+ - "my app"
+""")
+
+ config = load_org_config()
+ if config is None:
+ print(" PASS: Process name with space rejected")
+ passed += 1
+ else:
+ print(" FAIL: Process name with space should be rejected")
+ failed += 1
+
+ # Test 12: Multiple patterns - all must be allowed (BSD behavior)
+ # On BSD, "pkill node sshd" would kill both, so we must validate all patterns
+ allowed, reason = validate_pkill_command("pkill node npm")
+ if allowed:
+ print(" PASS: Multiple allowed patterns accepted")
+ passed += 1
+ else:
+ print(f" FAIL: Multiple allowed patterns should be accepted: {reason}")
+ failed += 1
+
+ # Test 13: Multiple patterns - block if any is disallowed
+ allowed, reason = validate_pkill_command("pkill node sshd")
+ if not allowed:
+ print(" PASS: Multiple patterns blocked when one is disallowed")
+ passed += 1
+ else:
+ print(" FAIL: Should block when any pattern is disallowed")
+ failed += 1
+
+ # Test 14: Multiple patterns - only first allowed, second disallowed
+ allowed, reason = validate_pkill_command("pkill npm python")
+ if not allowed:
+ print(" PASS: Multiple patterns blocked (first allowed, second not)")
+ passed += 1
+ else:
+ print(" FAIL: Should block when second pattern is disallowed")
+ failed += 1
+
+ return passed, failed
+
+
def main():
print("=" * 70)
print(" SECURITY HOOK TESTS")
@@ -733,6 +986,11 @@ def main():
passed += org_block_passed
failed += org_block_failed
+ # Test pkill process extensibility
+ pkill_passed, pkill_failed = test_pkill_extensibility()
+ passed += pkill_passed
+ failed += pkill_failed
+
# Commands that SHOULD be blocked
print("\nCommands that should be BLOCKED:\n")
dangerous = [
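Tests 12-14 above encode the BSD pkill semantics: every positional pattern must be independently allowed, since `pkill a b` kills processes matching any of the patterns. A sketch of a validator with that shape (hypothetical; the real `security.validate_pkill_command` presumably also handles pkill flags such as `-f` and signal options, and its default allow-list lives in `security.py`):

```python
import shlex

# Assumed default allow-list, for illustration only
DEFAULT_PKILL_PROCESSES = {"node", "npm"}

def validate_pkill_command(command, extra_processes=None):
    """Return (allowed, reason) for a pkill command string.

    Every non-flag argument is treated as a target pattern, and ALL of them
    must be in the allow-list, because BSD pkill kills matches for any pattern.
    """
    allowed = DEFAULT_PKILL_PROCESSES | set(extra_processes or ())
    args = shlex.split(command)[1:]  # drop the leading 'pkill'
    patterns = [a for a in args if not a.startswith("-")]
    for pattern in patterns:
        if pattern not in allowed:
            return False, f"pkill target '{pattern}' is not in the allowed process list"
    return True, ""
```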
diff --git a/ui/components.json b/ui/components.json
new file mode 100644
index 0000000..0b6231b
--- /dev/null
+++ b/ui/components.json
@@ -0,0 +1,22 @@
+{
+ "$schema": "https://ui.shadcn.com/schema.json",
+ "style": "new-york",
+ "rsc": false,
+ "tsx": true,
+ "tailwind": {
+ "config": "",
+ "css": "src/styles/globals.css",
+ "baseColor": "neutral",
+ "cssVariables": true,
+ "prefix": ""
+ },
+ "iconLibrary": "lucide",
+ "aliases": {
+ "components": "@/components",
+ "utils": "@/lib/utils",
+ "ui": "@/components/ui",
+ "lib": "@/lib",
+ "hooks": "@/hooks"
+ },
+ "registries": {}
+}
diff --git a/ui/index.html b/ui/index.html
index afbdba2..9564da9 100644
--- a/ui/index.html
+++ b/ui/index.html
@@ -7,7 +7,7 @@
AutoCoder
-
+
diff --git a/ui/package-lock.json b/ui/package-lock.json
index 6784b9f..2c33986 100644
--- a/ui/package-lock.json
+++ b/ui/package-lock.json
@@ -8,38 +8,53 @@
"name": "autocoder",
"version": "1.0.0",
"dependencies": {
- "@radix-ui/react-dialog": "^1.1.2",
- "@radix-ui/react-dropdown-menu": "^2.1.2",
- "@radix-ui/react-tooltip": "^1.1.3",
- "@tanstack/react-query": "^5.60.0",
+ "@radix-ui/react-checkbox": "^1.3.3",
+ "@radix-ui/react-dialog": "^1.1.15",
+ "@radix-ui/react-dropdown-menu": "^2.1.16",
+ "@radix-ui/react-label": "^2.1.8",
+ "@radix-ui/react-popover": "^1.1.15",
+ "@radix-ui/react-radio-group": "^1.3.8",
+ "@radix-ui/react-scroll-area": "^1.2.10",
+ "@radix-ui/react-select": "^2.2.6",
+ "@radix-ui/react-separator": "^1.1.8",
+ "@radix-ui/react-slot": "^1.2.4",
+ "@radix-ui/react-switch": "^1.2.6",
+ "@radix-ui/react-tabs": "^1.1.13",
+ "@radix-ui/react-toggle": "^1.1.10",
+ "@radix-ui/react-tooltip": "^1.2.8",
+ "@tanstack/react-query": "^5.72.0",
"@xterm/addon-fit": "^0.11.0",
"@xterm/addon-web-links": "^0.12.0",
"@xterm/xterm": "^6.0.0",
"@xyflow/react": "^12.10.0",
"canvas-confetti": "^1.9.4",
+ "class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"dagre": "^0.8.5",
- "lucide-react": "^0.460.0",
- "react": "^18.3.1",
- "react-dom": "^18.3.1"
+ "lucide-react": "^0.475.0",
+ "react": "^19.0.0",
+ "react-dom": "^19.0.0",
+ "tailwind-merge": "^3.4.0"
},
"devDependencies": {
- "@eslint/js": "^9.13.0",
+ "@eslint/js": "^9.19.0",
"@playwright/test": "^1.57.0",
- "@tailwindcss/vite": "^4.0.0-beta.4",
+ "@tailwindcss/vite": "^4.1.0",
"@types/canvas-confetti": "^1.9.0",
"@types/dagre": "^0.7.53",
- "@types/react": "^18.3.12",
- "@types/react-dom": "^18.3.1",
- "@vitejs/plugin-react": "^4.3.3",
- "eslint": "^9.13.0",
- "eslint-plugin-react-hooks": "^5.0.0",
- "eslint-plugin-react-refresh": "^0.4.14",
- "globals": "^15.11.0",
- "tailwindcss": "^4.0.0-beta.4",
- "typescript": "~5.6.2",
- "typescript-eslint": "^8.11.0",
- "vite": "^5.4.10"
+ "@types/node": "^22.12.0",
+ "@types/react": "^19.0.0",
+ "@types/react-dom": "^19.0.0",
+ "@vitejs/plugin-react": "^4.4.0",
+ "eslint": "^9.19.0",
+ "eslint-plugin-react-hooks": "^5.1.0",
+ "eslint-plugin-react-refresh": "^0.4.19",
+ "globals": "^15.14.0",
+ "tailwindcss": "^4.1.0",
+ "tw-animate-css": "^1.4.0",
+ "typescript": "~5.7.3",
+ "typescript-eslint": "^8.23.0",
+ "vite": "^7.3.0"
}
},
"node_modules/@babel/code-frame": {
@@ -325,9 +340,9 @@
}
},
"node_modules/@esbuild/aix-ppc64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.21.5.tgz",
- "integrity": "sha512-1SDgH6ZSPTlggy1yI6+Dbkiz8xzpHJEVAlF/AM1tHPLsf5STom9rwtjE4hKAF20FfXXNTFqEYXyJNWh1GiZedQ==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.27.2.tgz",
+ "integrity": "sha512-GZMB+a0mOMZs4MpDbj8RJp4cw+w1WV5NYD6xzgvzUJ5Ek2jerwfO2eADyI6ExDSUED+1X8aMbegahsJi+8mgpw==",
"cpu": [
"ppc64"
],
@@ -338,13 +353,13 @@
"aix"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/android-arm": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.21.5.tgz",
- "integrity": "sha512-vCPvzSjpPHEi1siZdlvAlsPxXl7WbOVUBBAowWug4rJHb68Ox8KualB+1ocNvT5fjv6wpkX6o/iEpbDrf68zcg==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.27.2.tgz",
+ "integrity": "sha512-DVNI8jlPa7Ujbr1yjU2PfUSRtAUZPG9I1RwW4F4xFB1Imiu2on0ADiI/c3td+KmDtVKNbi+nffGDQMfcIMkwIA==",
"cpu": [
"arm"
],
@@ -355,13 +370,13 @@
"android"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/android-arm64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.21.5.tgz",
- "integrity": "sha512-c0uX9VAUBQ7dTDCjq+wdyGLowMdtR/GoC2U5IYk/7D1H1JYC0qseD7+11iMP2mRLN9RcCMRcjC4YMclCzGwS/A==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.27.2.tgz",
+ "integrity": "sha512-pvz8ZZ7ot/RBphf8fv60ljmaoydPU12VuXHImtAs0XhLLw+EXBi2BLe3OYSBslR4rryHvweW5gmkKFwTiFy6KA==",
"cpu": [
"arm64"
],
@@ -372,13 +387,13 @@
"android"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/android-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.21.5.tgz",
- "integrity": "sha512-D7aPRUUNHRBwHxzxRvp856rjUHRFW1SdQATKXH2hqA0kAZb1hKmi02OpYRacl0TxIGz/ZmXWlbZgjwWYaCakTA==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.27.2.tgz",
+ "integrity": "sha512-z8Ank4Byh4TJJOh4wpz8g2vDy75zFL0TlZlkUkEwYXuPSgX8yzep596n6mT7905kA9uHZsf/o2OJZubl2l3M7A==",
"cpu": [
"x64"
],
@@ -389,13 +404,13 @@
"android"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/darwin-arm64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.21.5.tgz",
- "integrity": "sha512-DwqXqZyuk5AiWWf3UfLiRDJ5EDd49zg6O9wclZ7kUMv2WRFr4HKjXp/5t8JZ11QbQfUS6/cRCKGwYhtNAY88kQ==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.27.2.tgz",
+ "integrity": "sha512-davCD2Zc80nzDVRwXTcQP/28fiJbcOwvdolL0sOiOsbwBa72kegmVU0Wrh1MYrbuCL98Omp5dVhQFWRKR2ZAlg==",
"cpu": [
"arm64"
],
@@ -406,13 +421,13 @@
"darwin"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/darwin-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.21.5.tgz",
- "integrity": "sha512-se/JjF8NlmKVG4kNIuyWMV/22ZaerB+qaSi5MdrXtd6R08kvs2qCN4C09miupktDitvh8jRFflwGFBQcxZRjbw==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.27.2.tgz",
+ "integrity": "sha512-ZxtijOmlQCBWGwbVmwOF/UCzuGIbUkqB1faQRf5akQmxRJ1ujusWsb3CVfk/9iZKr2L5SMU5wPBi1UWbvL+VQA==",
"cpu": [
"x64"
],
@@ -423,13 +438,13 @@
"darwin"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/freebsd-arm64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.21.5.tgz",
- "integrity": "sha512-5JcRxxRDUJLX8JXp/wcBCy3pENnCgBR9bN6JsY4OmhfUtIHe3ZW0mawA7+RDAcMLrMIZaf03NlQiX9DGyB8h4g==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.27.2.tgz",
+ "integrity": "sha512-lS/9CN+rgqQ9czogxlMcBMGd+l8Q3Nj1MFQwBZJyoEKI50XGxwuzznYdwcav6lpOGv5BqaZXqvBSiB/kJ5op+g==",
"cpu": [
"arm64"
],
@@ -440,13 +455,13 @@
"freebsd"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/freebsd-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.21.5.tgz",
- "integrity": "sha512-J95kNBj1zkbMXtHVH29bBriQygMXqoVQOQYA+ISs0/2l3T9/kj42ow2mpqerRBxDJnmkUDCaQT/dfNXWX/ZZCQ==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.27.2.tgz",
+ "integrity": "sha512-tAfqtNYb4YgPnJlEFu4c212HYjQWSO/w/h/lQaBK7RbwGIkBOuNKQI9tqWzx7Wtp7bTPaGC6MJvWI608P3wXYA==",
"cpu": [
"x64"
],
@@ -457,13 +472,13 @@
"freebsd"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-arm": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.21.5.tgz",
- "integrity": "sha512-bPb5AHZtbeNGjCKVZ9UGqGwo8EUu4cLq68E95A53KlxAPRmUyYv2D6F0uUI65XisGOL1hBP5mTronbgo+0bFcA==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.27.2.tgz",
+ "integrity": "sha512-vWfq4GaIMP9AIe4yj1ZUW18RDhx6EPQKjwe7n8BbIecFtCQG4CfHGaHuh7fdfq+y3LIA2vGS/o9ZBGVxIDi9hw==",
"cpu": [
"arm"
],
@@ -474,13 +489,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-arm64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.21.5.tgz",
- "integrity": "sha512-ibKvmyYzKsBeX8d8I7MH/TMfWDXBF3db4qM6sy+7re0YXya+K1cem3on9XgdT2EQGMu4hQyZhan7TeQ8XkGp4Q==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.27.2.tgz",
+ "integrity": "sha512-hYxN8pr66NsCCiRFkHUAsxylNOcAQaxSSkHMMjcpx0si13t1LHFphxJZUiGwojB1a/Hd5OiPIqDdXONia6bhTw==",
"cpu": [
"arm64"
],
@@ -491,13 +506,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-ia32": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.21.5.tgz",
- "integrity": "sha512-YvjXDqLRqPDl2dvRODYmmhz4rPeVKYvppfGYKSNGdyZkA01046pLWyRKKI3ax8fbJoK5QbxblURkwK/MWY18Tg==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.27.2.tgz",
+ "integrity": "sha512-MJt5BRRSScPDwG2hLelYhAAKh9imjHK5+NE/tvnRLbIqUWa+0E9N4WNMjmp/kXXPHZGqPLxggwVhz7QP8CTR8w==",
"cpu": [
"ia32"
],
@@ -508,13 +523,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-loong64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.21.5.tgz",
- "integrity": "sha512-uHf1BmMG8qEvzdrzAqg2SIG/02+4/DHB6a9Kbya0XDvwDEKCoC8ZRWI5JJvNdUjtciBGFQ5PuBlpEOXQj+JQSg==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.27.2.tgz",
+ "integrity": "sha512-lugyF1atnAT463aO6KPshVCJK5NgRnU4yb3FUumyVz+cGvZbontBgzeGFO1nF+dPueHD367a2ZXe1NtUkAjOtg==",
"cpu": [
"loong64"
],
@@ -525,13 +540,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-mips64el": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.21.5.tgz",
- "integrity": "sha512-IajOmO+KJK23bj52dFSNCMsz1QP1DqM6cwLUv3W1QwyxkyIWecfafnI555fvSGqEKwjMXVLokcV5ygHW5b3Jbg==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.27.2.tgz",
+ "integrity": "sha512-nlP2I6ArEBewvJ2gjrrkESEZkB5mIoaTswuqNFRv/WYd+ATtUpe9Y09RnJvgvdag7he0OWgEZWhviS1OTOKixw==",
"cpu": [
"mips64el"
],
@@ -542,13 +557,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-ppc64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.21.5.tgz",
- "integrity": "sha512-1hHV/Z4OEfMwpLO8rp7CvlhBDnjsC3CttJXIhBi+5Aj5r+MBvy4egg7wCbe//hSsT+RvDAG7s81tAvpL2XAE4w==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.27.2.tgz",
+ "integrity": "sha512-C92gnpey7tUQONqg1n6dKVbx3vphKtTHJaNG2Ok9lGwbZil6DrfyecMsp9CrmXGQJmZ7iiVXvvZH6Ml5hL6XdQ==",
"cpu": [
"ppc64"
],
@@ -559,13 +574,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-riscv64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.21.5.tgz",
- "integrity": "sha512-2HdXDMd9GMgTGrPWnJzP2ALSokE/0O5HhTUvWIbD3YdjME8JwvSCnNGBnTThKGEB91OZhzrJ4qIIxk/SBmyDDA==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.27.2.tgz",
+ "integrity": "sha512-B5BOmojNtUyN8AXlK0QJyvjEZkWwy/FKvakkTDCziX95AowLZKR6aCDhG7LeF7uMCXEJqwa8Bejz5LTPYm8AvA==",
"cpu": [
"riscv64"
],
@@ -576,13 +591,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-s390x": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.21.5.tgz",
- "integrity": "sha512-zus5sxzqBJD3eXxwvjN1yQkRepANgxE9lgOW2qLnmr8ikMTphkjgXu1HR01K4FJg8h1kEEDAqDcZQtbrRnB41A==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.27.2.tgz",
+ "integrity": "sha512-p4bm9+wsPwup5Z8f4EpfN63qNagQ47Ua2znaqGH6bqLlmJ4bx97Y9JdqxgGZ6Y8xVTixUnEkoKSHcpRlDnNr5w==",
"cpu": [
"s390x"
],
@@ -593,13 +608,13 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/linux-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.21.5.tgz",
- "integrity": "sha512-1rYdTpyv03iycF1+BhzrzQJCdOuAOtaqHTWJZCWvijKD2N5Xu0TtVC8/+1faWqcP9iBCWOmjmhoH94dH82BxPQ==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.27.2.tgz",
+ "integrity": "sha512-uwp2Tip5aPmH+NRUwTcfLb+W32WXjpFejTIOWZFw/v7/KnpCDKG66u4DLcurQpiYTiYwQ9B7KOeMJvLCu/OvbA==",
"cpu": [
"x64"
],
@@ -610,13 +625,30 @@
"linux"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/netbsd-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/netbsd-arm64/-/netbsd-arm64-0.27.2.tgz",
+ "integrity": "sha512-Kj6DiBlwXrPsCRDeRvGAUb/LNrBASrfqAIok+xB0LxK8CHqxZ037viF13ugfsIpePH93mX7xfJp97cyDuTZ3cw==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "netbsd"
+ ],
+ "engines": {
+ "node": ">=18"
}
},
"node_modules/@esbuild/netbsd-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.21.5.tgz",
- "integrity": "sha512-Woi2MXzXjMULccIwMnLciyZH4nCIMpWQAs049KEeMvOcNADVxo0UBIQPfSmxB3CWKedngg7sWZdLvLczpe0tLg==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.27.2.tgz",
+ "integrity": "sha512-HwGDZ0VLVBY3Y+Nw0JexZy9o/nUAWq9MlV7cahpaXKW6TOzfVno3y3/M8Ga8u8Yr7GldLOov27xiCnqRZf0tCA==",
"cpu": [
"x64"
],
@@ -627,13 +659,30 @@
"netbsd"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/openbsd-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/openbsd-arm64/-/openbsd-arm64-0.27.2.tgz",
+ "integrity": "sha512-DNIHH2BPQ5551A7oSHD0CKbwIA/Ox7+78/AWkbS5QoRzaqlev2uFayfSxq68EkonB+IKjiuxBFoV8ESJy8bOHA==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openbsd"
+ ],
+ "engines": {
+ "node": ">=18"
}
},
"node_modules/@esbuild/openbsd-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.21.5.tgz",
- "integrity": "sha512-HLNNw99xsvx12lFBUwoT8EVCsSvRNDVxNpjZ7bPn947b8gJPzeHWyNVhFsaerc0n3TsbOINvRP2byTZ5LKezow==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.27.2.tgz",
+ "integrity": "sha512-/it7w9Nb7+0KFIzjalNJVR5bOzA9Vay+yIPLVHfIQYG/j+j9VTH84aNB8ExGKPU4AzfaEvN9/V4HV+F+vo8OEg==",
"cpu": [
"x64"
],
@@ -644,13 +693,30 @@
"openbsd"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
+ }
+ },
+ "node_modules/@esbuild/openharmony-arm64": {
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/openharmony-arm64/-/openharmony-arm64-0.27.2.tgz",
+ "integrity": "sha512-LRBbCmiU51IXfeXk59csuX/aSaToeG7w48nMwA6049Y4J4+VbWALAuXcs+qcD04rHDuSCSRKdmY63sruDS5qag==",
+ "cpu": [
+ "arm64"
+ ],
+ "dev": true,
+ "license": "MIT",
+ "optional": true,
+ "os": [
+ "openharmony"
+ ],
+ "engines": {
+ "node": ">=18"
}
},
"node_modules/@esbuild/sunos-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.21.5.tgz",
- "integrity": "sha512-6+gjmFpfy0BHU5Tpptkuh8+uw3mnrvgs+dSPQXQOv3ekbordwnzTVEb4qnIvQcYXq6gzkyTnoZ9dZG+D4garKg==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.27.2.tgz",
+ "integrity": "sha512-kMtx1yqJHTmqaqHPAzKCAkDaKsffmXkPHThSfRwZGyuqyIeBvf08KSsYXl+abf5HDAPMJIPnbBfXvP2ZC2TfHg==",
"cpu": [
"x64"
],
@@ -661,13 +727,13 @@
"sunos"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/win32-arm64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.21.5.tgz",
- "integrity": "sha512-Z0gOTd75VvXqyq7nsl93zwahcTROgqvuAcYDUr+vOv8uHhNSKROyU961kgtCD1e95IqPKSQKH7tBTslnS3tA8A==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.27.2.tgz",
+ "integrity": "sha512-Yaf78O/B3Kkh+nKABUF++bvJv5Ijoy9AN1ww904rOXZFLWVc5OLOfL56W+C8F9xn5JQZa3UX6m+IktJnIb1Jjg==",
"cpu": [
"arm64"
],
@@ -678,13 +744,13 @@
"win32"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/win32-ia32": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.21.5.tgz",
- "integrity": "sha512-SWXFF1CL2RVNMaVs+BBClwtfZSvDgtL//G/smwAc5oVK/UPu2Gu9tIaRgFmYFFKrmg3SyAjSrElf0TiJ1v8fYA==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.27.2.tgz",
+ "integrity": "sha512-Iuws0kxo4yusk7sw70Xa2E2imZU5HoixzxfGCdxwBdhiDgt9vX9VUCBhqcwY7/uh//78A1hMkkROMJq9l27oLQ==",
"cpu": [
"ia32"
],
@@ -695,13 +761,13 @@
"win32"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@esbuild/win32-x64": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.21.5.tgz",
- "integrity": "sha512-tQd/1efJuzPC6rCFwEvLtci/xNFcTZknmXs98FYDfGE4wP9ClFV98nyKrzJKVPMhdDnjzLhdUyMX4PsQAPjwIw==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.27.2.tgz",
+ "integrity": "sha512-sRdU18mcKf7F+YgheI/zGf5alZatMUTKj/jNS6l744f9u3WFu4v7twcUI9vu4mknF4Y9aDlblIie0IM+5xxaqQ==",
"cpu": [
"x64"
],
@@ -712,7 +778,7 @@
"win32"
],
"engines": {
- "node": ">=12"
+ "node": ">=18"
}
},
"node_modules/@eslint-community/eslint-utils": {
@@ -1027,6 +1093,12 @@
"node": ">=18"
}
},
+ "node_modules/@radix-ui/number": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/@radix-ui/number/-/number-1.1.1.tgz",
+ "integrity": "sha512-MkKCwxlXTgz6CFoJx3pCwn07GKp36+aZyu/u2Ln2VrA5DcdyCZkASEDBTd8x5whTQQL5CiYf4prXKLcgQdv29g==",
+ "license": "MIT"
+ },
"node_modules/@radix-ui/primitive": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/@radix-ui/primitive/-/primitive-1.1.3.tgz",
@@ -1056,6 +1128,36 @@
}
}
},
+ "node_modules/@radix-ui/react-checkbox": {
+ "version": "1.3.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-checkbox/-/react-checkbox-1.3.3.tgz",
+ "integrity": "sha512-wBbpv+NQftHDdG86Qc0pIyXk5IR3tM8Vd0nWLKDcX8nNn4nXFOFwsKuqw2okA/1D/mpaAkmuyndrPJTYDNZtFw==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-compose-refs": "1.1.2",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-presence": "1.1.5",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-use-controllable-state": "1.2.2",
+ "@radix-ui/react-use-previous": "1.1.1",
+ "@radix-ui/react-use-size": "1.1.1"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-collection": {
"version": "1.1.7",
"resolved": "https://registry.npmjs.org/@radix-ui/react-collection/-/react-collection-1.1.7.tgz",
@@ -1082,6 +1184,24 @@
}
}
},
+ "node_modules/@radix-ui/react-collection/node_modules/@radix-ui/react-slot": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
+ "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-compose-refs": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@radix-ui/react-compose-refs/-/react-compose-refs-1.1.2.tgz",
@@ -1148,6 +1268,24 @@
}
}
},
+ "node_modules/@radix-ui/react-dialog/node_modules/@radix-ui/react-slot": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
+ "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-direction": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/react-direction/-/react-direction-1.1.1.tgz",
@@ -1277,6 +1415,52 @@
}
}
},
+ "node_modules/@radix-ui/react-label": {
+ "version": "2.1.8",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-label/-/react-label-2.1.8.tgz",
+ "integrity": "sha512-FmXs37I6hSBVDlO4y764TNz1rLgKwjJMQ0EGte6F3Cb3f4bIuHB/iLa/8I9VKkmOy+gNHq8rql3j686ACVV21A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-primitive": "2.1.4"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-label/node_modules/@radix-ui/react-primitive": {
+ "version": "2.1.4",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.4.tgz",
+ "integrity": "sha512-9hQc4+GNVtJAIEPEqlYqW5RiYdrr8ea5XQ0ZOnD6fgru+83kqT15mq2OCcbe8KnjRZl5vF3ks69AKz3kh1jrhg==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-slot": "1.2.4"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-menu": {
"version": "2.1.16",
"resolved": "https://registry.npmjs.org/@radix-ui/react-menu/-/react-menu-2.1.16.tgz",
@@ -1317,6 +1501,79 @@
}
}
},
+ "node_modules/@radix-ui/react-menu/node_modules/@radix-ui/react-slot": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
+ "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-popover": {
+ "version": "1.1.15",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-popover/-/react-popover-1.1.15.tgz",
+ "integrity": "sha512-kr0X2+6Yy/vJzLYJUPCZEc8SfQcf+1COFoAqauJm74umQhta9M7lNJHP7QQS3vkvcGLQUbWpMzwrXYwrYztHKA==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-compose-refs": "1.1.2",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-dismissable-layer": "1.1.11",
+ "@radix-ui/react-focus-guards": "1.1.3",
+ "@radix-ui/react-focus-scope": "1.1.7",
+ "@radix-ui/react-id": "1.1.1",
+ "@radix-ui/react-popper": "1.2.8",
+ "@radix-ui/react-portal": "1.1.9",
+ "@radix-ui/react-presence": "1.1.5",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-slot": "1.2.3",
+ "@radix-ui/react-use-controllable-state": "1.2.2",
+ "aria-hidden": "^1.2.4",
+ "react-remove-scroll": "^2.6.3"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-popover/node_modules/@radix-ui/react-slot": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
+ "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-popper": {
"version": "1.2.8",
"resolved": "https://registry.npmjs.org/@radix-ui/react-popper/-/react-popper-1.2.8.tgz",
@@ -1420,6 +1677,56 @@
}
}
},
+ "node_modules/@radix-ui/react-primitive/node_modules/@radix-ui/react-slot": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
+ "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-radio-group": {
+ "version": "1.3.8",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-radio-group/-/react-radio-group-1.3.8.tgz",
+ "integrity": "sha512-VBKYIYImA5zsxACdisNQ3BjCBfmbGH3kQlnFVqlWU4tXwjy7cGX8ta80BcrO+WJXIn5iBylEH3K6ZTlee//lgQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-compose-refs": "1.1.2",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-direction": "1.1.1",
+ "@radix-ui/react-presence": "1.1.5",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-roving-focus": "1.1.11",
+ "@radix-ui/react-use-controllable-state": "1.2.2",
+ "@radix-ui/react-use-previous": "1.1.1",
+ "@radix-ui/react-use-size": "1.1.1"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-roving-focus": {
"version": "1.1.11",
"resolved": "https://registry.npmjs.org/@radix-ui/react-roving-focus/-/react-roving-focus-1.1.11.tgz",
@@ -1451,7 +1758,81 @@
}
}
},
- "node_modules/@radix-ui/react-slot": {
+ "node_modules/@radix-ui/react-scroll-area": {
+ "version": "1.2.10",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-scroll-area/-/react-scroll-area-1.2.10.tgz",
+ "integrity": "sha512-tAXIa1g3sM5CGpVT0uIbUx/U3Gs5N8T52IICuCtObaos1S8fzsrPXG5WObkQN3S6NVl6wKgPhAIiBGbWnvc97A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/number": "1.1.1",
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-compose-refs": "1.1.2",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-direction": "1.1.1",
+ "@radix-ui/react-presence": "1.1.5",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-use-callback-ref": "1.1.1",
+ "@radix-ui/react-use-layout-effect": "1.1.1"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-select": {
+ "version": "2.2.6",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-select/-/react-select-2.2.6.tgz",
+ "integrity": "sha512-I30RydO+bnn2PQztvo25tswPH+wFBjehVGtmagkU78yMdwTwVf12wnAOF+AeP8S2N8xD+5UPbGhkUfPyvT+mwQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/number": "1.1.1",
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-collection": "1.1.7",
+ "@radix-ui/react-compose-refs": "1.1.2",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-direction": "1.1.1",
+ "@radix-ui/react-dismissable-layer": "1.1.11",
+ "@radix-ui/react-focus-guards": "1.1.3",
+ "@radix-ui/react-focus-scope": "1.1.7",
+ "@radix-ui/react-id": "1.1.1",
+ "@radix-ui/react-popper": "1.2.8",
+ "@radix-ui/react-portal": "1.1.9",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-slot": "1.2.3",
+ "@radix-ui/react-use-callback-ref": "1.1.1",
+ "@radix-ui/react-use-controllable-state": "1.2.2",
+ "@radix-ui/react-use-layout-effect": "1.1.1",
+ "@radix-ui/react-use-previous": "1.1.1",
+ "@radix-ui/react-visually-hidden": "1.2.3",
+ "aria-hidden": "^1.2.4",
+ "react-remove-scroll": "^2.6.3"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-select/node_modules/@radix-ui/react-slot": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
"integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
@@ -1469,6 +1850,154 @@
}
}
},
+ "node_modules/@radix-ui/react-separator": {
+ "version": "1.1.8",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-separator/-/react-separator-1.1.8.tgz",
+ "integrity": "sha512-sDvqVY4itsKwwSMEe0jtKgfTh+72Sy3gPmQpjqcQneqQ4PFmr/1I0YA+2/puilhggCe2gJcx5EBAYFkWkdpa5g==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-primitive": "2.1.4"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-separator/node_modules/@radix-ui/react-primitive": {
+ "version": "2.1.4",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.4.tgz",
+ "integrity": "sha512-9hQc4+GNVtJAIEPEqlYqW5RiYdrr8ea5XQ0ZOnD6fgru+83kqT15mq2OCcbe8KnjRZl5vF3ks69AKz3kh1jrhg==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-slot": "1.2.4"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-slot": {
+ "version": "1.2.4",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.4.tgz",
+ "integrity": "sha512-Jl+bCv8HxKnlTLVrcDE8zTMJ09R9/ukw4qBs/oZClOfoQk/cOTbDn+NceXfV7j09YPVQUryJPHurafcSg6EVKA==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-switch": {
+ "version": "1.2.6",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-switch/-/react-switch-1.2.6.tgz",
+ "integrity": "sha512-bByzr1+ep1zk4VubeEVViV592vu2lHE2BZY5OnzehZqOOgogN80+mNtCqPkhn2gklJqOpxWgPoYTSnhBCqpOXQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-compose-refs": "1.1.2",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-use-controllable-state": "1.2.2",
+ "@radix-ui/react-use-previous": "1.1.1",
+ "@radix-ui/react-use-size": "1.1.1"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-tabs": {
+ "version": "1.1.13",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-tabs/-/react-tabs-1.1.13.tgz",
+ "integrity": "sha512-7xdcatg7/U+7+Udyoj2zodtI9H/IIopqo+YOIcZOq1nJwXWBZ9p8xiu5llXlekDbZkca79a/fozEYQXIA4sW6A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-context": "1.1.2",
+ "@radix-ui/react-direction": "1.1.1",
+ "@radix-ui/react-id": "1.1.1",
+ "@radix-ui/react-presence": "1.1.5",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-roving-focus": "1.1.11",
+ "@radix-ui/react-use-controllable-state": "1.2.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
+ "node_modules/@radix-ui/react-toggle": {
+ "version": "1.1.10",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-toggle/-/react-toggle-1.1.10.tgz",
+ "integrity": "sha512-lS1odchhFTeZv3xwHH31YPObmJn8gOg7Lq12inrr0+BH/l3Tsq32VfjqH1oh80ARM3mlkfMic15n0kg4sD1poQ==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/primitive": "1.1.3",
+ "@radix-ui/react-primitive": "2.1.3",
+ "@radix-ui/react-use-controllable-state": "1.2.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "@types/react-dom": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc",
+ "react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ },
+ "@types/react-dom": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-tooltip": {
"version": "1.2.8",
"resolved": "https://registry.npmjs.org/@radix-ui/react-tooltip/-/react-tooltip-1.2.8.tgz",
@@ -1503,6 +2032,24 @@
}
}
},
+ "node_modules/@radix-ui/react-tooltip/node_modules/@radix-ui/react-slot": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
+ "integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
+ "license": "MIT",
+ "dependencies": {
+ "@radix-ui/react-compose-refs": "1.1.2"
+ },
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-use-callback-ref": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-callback-ref/-/react-use-callback-ref-1.1.1.tgz",
@@ -1588,6 +2135,21 @@
}
}
},
+ "node_modules/@radix-ui/react-use-previous": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/@radix-ui/react-use-previous/-/react-use-previous-1.1.1.tgz",
+ "integrity": "sha512-2dHfToCj/pzca2Ck724OZ5L0EVrr3eHRNsG/b3xQJLA2hZpVCS99bLAX+hm1IHXDEnzU6by5z/5MIY794/a8NQ==",
+ "license": "MIT",
+ "peerDependencies": {
+ "@types/react": "*",
+ "react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
+ },
+ "peerDependenciesMeta": {
+ "@types/react": {
+ "optional": true
+ }
+ }
+ },
"node_modules/@radix-ui/react-use-rect": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-rect/-/react-use-rect-1.1.1.tgz",
@@ -2448,32 +3010,34 @@
"dev": true,
"license": "MIT"
},
- "node_modules/@types/prop-types": {
- "version": "15.7.15",
- "resolved": "https://registry.npmjs.org/@types/prop-types/-/prop-types-15.7.15.tgz",
- "integrity": "sha512-F6bEyamV9jKGAFBEmlQnesRPGOQqS2+Uwi0Em15xenOxHaf2hv6L8YCVn3rPdPJOiJfPiCnLIRyvwVaqMY3MIw==",
- "devOptional": true,
- "license": "MIT"
+ "node_modules/@types/node": {
+ "version": "22.19.7",
+ "resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.7.tgz",
+ "integrity": "sha512-MciR4AKGHWl7xwxkBa6xUGxQJ4VBOmPTF7sL+iGzuahOFaO0jHCsuEfS80pan1ef4gWId1oWOweIhrDEYLuaOw==",
+ "dev": true,
+ "license": "MIT",
+ "dependencies": {
+ "undici-types": "~6.21.0"
+ }
},
"node_modules/@types/react": {
- "version": "18.3.27",
- "resolved": "https://registry.npmjs.org/@types/react/-/react-18.3.27.tgz",
- "integrity": "sha512-cisd7gxkzjBKU2GgdYrTdtQx1SORymWyaAFhaxQPK9bYO9ot3Y5OikQRvY0VYQtvwjeQnizCINJAenh/V7MK2w==",
+ "version": "19.2.9",
+ "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.9.tgz",
+ "integrity": "sha512-Lpo8kgb/igvMIPeNV2rsYKTgaORYdO1XGVZ4Qz3akwOj0ySGYMPlQWa8BaLn0G63D1aSaAQ5ldR06wCpChQCjA==",
"devOptional": true,
"license": "MIT",
"dependencies": {
- "@types/prop-types": "*",
"csstype": "^3.2.2"
}
},
"node_modules/@types/react-dom": {
- "version": "18.3.7",
- "resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-18.3.7.tgz",
- "integrity": "sha512-MEe3UeoENYVFXzoXEWsvcpg6ZvlrFNlOQ7EOsvhI3CfAXwzPfO8Qwuxd40nepsYKqyyVQnTdEfv68q91yLcKrQ==",
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-19.2.3.tgz",
+ "integrity": "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==",
"devOptional": true,
"license": "MIT",
"peerDependencies": {
- "@types/react": "^18.0.0"
+ "@types/react": "^19.2.0"
}
},
"node_modules/@typescript-eslint/eslint-plugin": {
@@ -3014,6 +3578,18 @@
"url": "https://github.com/chalk/chalk?sponsor=1"
}
},
+ "node_modules/class-variance-authority": {
+ "version": "0.7.1",
+ "resolved": "https://registry.npmjs.org/class-variance-authority/-/class-variance-authority-0.7.1.tgz",
+ "integrity": "sha512-Ka+9Trutv7G8M6WT6SeiRWz792K5qEqIGEGzXKhAE6xOWAY6pPH8U+9IY3oCMv6kqTmLsv7Xh/2w2RigkePMsg==",
+ "license": "Apache-2.0",
+ "dependencies": {
+ "clsx": "^2.1.1"
+ },
+ "funding": {
+ "url": "https://polar.sh/cva"
+ }
+ },
"node_modules/classcat": {
"version": "5.0.5",
"resolved": "https://registry.npmjs.org/classcat/-/classcat-5.0.5.tgz",
@@ -3263,9 +3839,9 @@
}
},
"node_modules/esbuild": {
- "version": "0.21.5",
- "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.21.5.tgz",
- "integrity": "sha512-mg3OPMV4hXywwpoDxu3Qda5xCKQi+vCTZq8S9J/EpkhB2HzKXq4SNFZE3+NK93JYxc8VMSep+lOUSC/RVKaBqw==",
+ "version": "0.27.2",
+ "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.27.2.tgz",
+ "integrity": "sha512-HyNQImnsOC7X9PMNaCIeAm4ISCQXs5a5YasTXVliKv4uuBo1dKrG0A+uQS8M5eXjVMnLg3WgXaKvprHlFJQffw==",
"dev": true,
"hasInstallScript": true,
"license": "MIT",
@@ -3273,32 +3849,35 @@
"esbuild": "bin/esbuild"
},
"engines": {
- "node": ">=12"
+ "node": ">=18"
},
"optionalDependencies": {
- "@esbuild/aix-ppc64": "0.21.5",
- "@esbuild/android-arm": "0.21.5",
- "@esbuild/android-arm64": "0.21.5",
- "@esbuild/android-x64": "0.21.5",
- "@esbuild/darwin-arm64": "0.21.5",
- "@esbuild/darwin-x64": "0.21.5",
- "@esbuild/freebsd-arm64": "0.21.5",
- "@esbuild/freebsd-x64": "0.21.5",
- "@esbuild/linux-arm": "0.21.5",
- "@esbuild/linux-arm64": "0.21.5",
- "@esbuild/linux-ia32": "0.21.5",
- "@esbuild/linux-loong64": "0.21.5",
- "@esbuild/linux-mips64el": "0.21.5",
- "@esbuild/linux-ppc64": "0.21.5",
- "@esbuild/linux-riscv64": "0.21.5",
- "@esbuild/linux-s390x": "0.21.5",
- "@esbuild/linux-x64": "0.21.5",
- "@esbuild/netbsd-x64": "0.21.5",
- "@esbuild/openbsd-x64": "0.21.5",
- "@esbuild/sunos-x64": "0.21.5",
- "@esbuild/win32-arm64": "0.21.5",
- "@esbuild/win32-ia32": "0.21.5",
- "@esbuild/win32-x64": "0.21.5"
+ "@esbuild/aix-ppc64": "0.27.2",
+ "@esbuild/android-arm": "0.27.2",
+ "@esbuild/android-arm64": "0.27.2",
+ "@esbuild/android-x64": "0.27.2",
+ "@esbuild/darwin-arm64": "0.27.2",
+ "@esbuild/darwin-x64": "0.27.2",
+ "@esbuild/freebsd-arm64": "0.27.2",
+ "@esbuild/freebsd-x64": "0.27.2",
+ "@esbuild/linux-arm": "0.27.2",
+ "@esbuild/linux-arm64": "0.27.2",
+ "@esbuild/linux-ia32": "0.27.2",
+ "@esbuild/linux-loong64": "0.27.2",
+ "@esbuild/linux-mips64el": "0.27.2",
+ "@esbuild/linux-ppc64": "0.27.2",
+ "@esbuild/linux-riscv64": "0.27.2",
+ "@esbuild/linux-s390x": "0.27.2",
+ "@esbuild/linux-x64": "0.27.2",
+ "@esbuild/netbsd-arm64": "0.27.2",
+ "@esbuild/netbsd-x64": "0.27.2",
+ "@esbuild/openbsd-arm64": "0.27.2",
+ "@esbuild/openbsd-x64": "0.27.2",
+ "@esbuild/openharmony-arm64": "0.27.2",
+ "@esbuild/sunos-x64": "0.27.2",
+ "@esbuild/win32-arm64": "0.27.2",
+ "@esbuild/win32-ia32": "0.27.2",
+ "@esbuild/win32-x64": "0.27.2"
}
},
"node_modules/escalade": {
@@ -3758,6 +4337,7 @@
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz",
"integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==",
+ "dev": true,
"license": "MIT"
},
"node_modules/js-yaml": {
@@ -4134,18 +4714,6 @@
"dev": true,
"license": "MIT"
},
- "node_modules/loose-envify": {
- "version": "1.4.0",
- "resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz",
- "integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==",
- "license": "MIT",
- "dependencies": {
- "js-tokens": "^3.0.0 || ^4.0.0"
- },
- "bin": {
- "loose-envify": "cli.js"
- }
- },
"node_modules/lru-cache": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz",
@@ -4157,12 +4725,12 @@
}
},
"node_modules/lucide-react": {
- "version": "0.460.0",
- "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.460.0.tgz",
- "integrity": "sha512-BVtq/DykVeIvRTJvRAgCsOwaGL8Un3Bxh8MbDxMhEWlZay3T4IpEKDEpwt5KZ0KJMHzgm6jrltxlT5eXOWXDHg==",
+ "version": "0.475.0",
+ "resolved": "https://registry.npmjs.org/lucide-react/-/lucide-react-0.475.0.tgz",
+ "integrity": "sha512-NJzvVu1HwFVeZ+Gwq2q00KygM1aBhy/ZrhY9FsAgJtpB+E4R7uxRk9M2iKvHa6/vNxZydIB59htha4c2vvwvVg==",
"license": "ISC",
"peerDependencies": {
- "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0-rc"
+ "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/magic-string": {
@@ -4425,28 +4993,24 @@
}
},
"node_modules/react": {
- "version": "18.3.1",
- "resolved": "https://registry.npmjs.org/react/-/react-18.3.1.tgz",
- "integrity": "sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ==",
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/react/-/react-19.2.3.tgz",
+ "integrity": "sha512-Ku/hhYbVjOQnXDZFv2+RibmLFGwFdeeKHFcOTlrt7xplBnya5OGn/hIRDsqDiSUcfORsDC7MPxwork8jBwsIWA==",
"license": "MIT",
- "dependencies": {
- "loose-envify": "^1.1.0"
- },
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/react-dom": {
- "version": "18.3.1",
- "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-18.3.1.tgz",
- "integrity": "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw==",
+ "version": "19.2.3",
+ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.3.tgz",
+ "integrity": "sha512-yELu4WmLPw5Mr/lmeEpox5rw3RETacE++JgHqQzd2dg+YbJuat3jH4ingc+WPZhxaoFzdv9y33G+F7Nl5O0GBg==",
"license": "MIT",
"dependencies": {
- "loose-envify": "^1.1.0",
- "scheduler": "^0.23.2"
+ "scheduler": "^0.27.0"
},
"peerDependencies": {
- "react": "^18.3.1"
+ "react": "^19.2.3"
}
},
"node_modules/react-refresh": {
@@ -4581,13 +5145,10 @@
}
},
"node_modules/scheduler": {
- "version": "0.23.2",
- "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.23.2.tgz",
- "integrity": "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ==",
- "license": "MIT",
- "dependencies": {
- "loose-envify": "^1.1.0"
- }
+ "version": "0.27.0",
+ "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.27.0.tgz",
+ "integrity": "sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5ti/Q==",
+ "license": "MIT"
},
"node_modules/semver": {
"version": "6.3.1",
@@ -4658,6 +5219,16 @@
"node": ">=8"
}
},
+ "node_modules/tailwind-merge": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/tailwind-merge/-/tailwind-merge-3.4.0.tgz",
+ "integrity": "sha512-uSaO4gnW+b3Y2aWoWfFpX62vn2sR3skfhbjsEnaBI81WD1wBLlHZe5sWf0AqjksNdYTbGBEd0UasQMT3SNV15g==",
+ "license": "MIT",
+ "funding": {
+ "type": "github",
+ "url": "https://github.com/sponsors/dcastil"
+ }
+ },
"node_modules/tailwindcss": {
"version": "4.1.18",
"resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.18.tgz",
@@ -4715,6 +5286,16 @@
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
"license": "0BSD"
},
+ "node_modules/tw-animate-css": {
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/tw-animate-css/-/tw-animate-css-1.4.0.tgz",
+ "integrity": "sha512-7bziOlRqH0hJx80h/3mbicLW7o8qLsH5+RaLR2t+OHM3D0JlWGODQKQ4cxbK7WlvmUxpcj6Kgu6EKqjrGFe3QQ==",
+ "dev": true,
+ "license": "MIT",
+ "funding": {
+ "url": "https://github.com/sponsors/Wombosvideo"
+ }
+ },
"node_modules/type-check": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz",
@@ -4729,9 +5310,9 @@
}
},
"node_modules/typescript": {
- "version": "5.6.3",
- "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.6.3.tgz",
- "integrity": "sha512-hjcS1mhfuyi4WW8IWtjP7brDrG2cuDZukyrYrSauoXGNgx0S7zceP07adYkJycEr56BOUTNPzbInooiN3fn1qw==",
+ "version": "5.7.3",
+ "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.7.3.tgz",
+ "integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==",
"dev": true,
"license": "Apache-2.0",
"bin": {
@@ -4766,6 +5347,13 @@
"typescript": ">=4.8.4 <6.0.0"
}
},
+ "node_modules/undici-types": {
+ "version": "6.21.0",
+ "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
+ "integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
+ "dev": true,
+ "license": "MIT"
+ },
"node_modules/update-browserslist-db": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.2.3.tgz",
@@ -4860,21 +5448,24 @@
}
},
"node_modules/vite": {
- "version": "5.4.21",
- "resolved": "https://registry.npmjs.org/vite/-/vite-5.4.21.tgz",
- "integrity": "sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw==",
+ "version": "7.3.1",
+ "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.1.tgz",
+ "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==",
"dev": true,
"license": "MIT",
"dependencies": {
- "esbuild": "^0.21.3",
- "postcss": "^8.4.43",
- "rollup": "^4.20.0"
+ "esbuild": "^0.27.0",
+ "fdir": "^6.5.0",
+ "picomatch": "^4.0.3",
+ "postcss": "^8.5.6",
+ "rollup": "^4.43.0",
+ "tinyglobby": "^0.2.15"
},
"bin": {
"vite": "bin/vite.js"
},
"engines": {
- "node": "^18.0.0 || >=20.0.0"
+ "node": "^20.19.0 || >=22.12.0"
},
"funding": {
"url": "https://github.com/vitejs/vite?sponsor=1"
@@ -4883,19 +5474,25 @@
"fsevents": "~2.3.3"
},
"peerDependencies": {
- "@types/node": "^18.0.0 || >=20.0.0",
- "less": "*",
+ "@types/node": "^20.19.0 || >=22.12.0",
+ "jiti": ">=1.21.0",
+ "less": "^4.0.0",
"lightningcss": "^1.21.0",
- "sass": "*",
- "sass-embedded": "*",
- "stylus": "*",
- "sugarss": "*",
- "terser": "^5.4.0"
+ "sass": "^1.70.0",
+ "sass-embedded": "^1.70.0",
+ "stylus": ">=0.54.8",
+ "sugarss": "^5.0.0",
+ "terser": "^5.16.0",
+ "tsx": "^4.8.1",
+ "yaml": "^2.4.2"
},
"peerDependenciesMeta": {
"@types/node": {
"optional": true
},
+ "jiti": {
+ "optional": true
+ },
"less": {
"optional": true
},
@@ -4916,6 +5513,12 @@
},
"terser": {
"optional": true
+ },
+ "tsx": {
+ "optional": true
+ },
+ "yaml": {
+ "optional": true
}
}
},
diff --git a/ui/package.json b/ui/package.json
index 6226472..f70b9ca 100644
--- a/ui/package.json
+++ b/ui/package.json
@@ -12,37 +12,52 @@
"test:e2e:ui": "playwright test --ui"
},
"dependencies": {
- "@radix-ui/react-dialog": "^1.1.2",
- "@radix-ui/react-dropdown-menu": "^2.1.2",
- "@radix-ui/react-tooltip": "^1.1.3",
- "@tanstack/react-query": "^5.60.0",
+ "@radix-ui/react-checkbox": "^1.3.3",
+ "@radix-ui/react-dialog": "^1.1.15",
+ "@radix-ui/react-dropdown-menu": "^2.1.16",
+ "@radix-ui/react-label": "^2.1.8",
+ "@radix-ui/react-popover": "^1.1.15",
+ "@radix-ui/react-radio-group": "^1.3.8",
+ "@radix-ui/react-scroll-area": "^1.2.10",
+ "@radix-ui/react-select": "^2.2.6",
+ "@radix-ui/react-separator": "^1.1.8",
+ "@radix-ui/react-slot": "^1.2.4",
+ "@radix-ui/react-switch": "^1.2.6",
+ "@radix-ui/react-tabs": "^1.1.13",
+ "@radix-ui/react-toggle": "^1.1.10",
+ "@radix-ui/react-tooltip": "^1.2.8",
+ "@tanstack/react-query": "^5.72.0",
"@xterm/addon-fit": "^0.11.0",
"@xterm/addon-web-links": "^0.12.0",
"@xterm/xterm": "^6.0.0",
"@xyflow/react": "^12.10.0",
"canvas-confetti": "^1.9.4",
+ "class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"dagre": "^0.8.5",
- "lucide-react": "^0.460.0",
- "react": "^18.3.1",
- "react-dom": "^18.3.1"
+ "lucide-react": "^0.475.0",
+ "react": "^19.0.0",
+ "react-dom": "^19.0.0",
+ "tailwind-merge": "^3.4.0"
},
"devDependencies": {
- "@eslint/js": "^9.13.0",
+ "@eslint/js": "^9.19.0",
"@playwright/test": "^1.57.0",
- "@tailwindcss/vite": "^4.0.0-beta.4",
+ "@tailwindcss/vite": "^4.1.0",
"@types/canvas-confetti": "^1.9.0",
"@types/dagre": "^0.7.53",
- "@types/react": "^18.3.12",
- "@types/react-dom": "^18.3.1",
- "@vitejs/plugin-react": "^4.3.3",
- "eslint": "^9.13.0",
- "eslint-plugin-react-hooks": "^5.0.0",
- "eslint-plugin-react-refresh": "^0.4.14",
- "globals": "^15.11.0",
- "tailwindcss": "^4.0.0-beta.4",
- "typescript": "~5.6.2",
- "typescript-eslint": "^8.11.0",
- "vite": "^5.4.10"
+ "@types/node": "^22.12.0",
+ "@types/react": "^19.0.0",
+ "@types/react-dom": "^19.0.0",
+ "@vitejs/plugin-react": "^4.4.0",
+ "eslint": "^9.19.0",
+ "eslint-plugin-react-hooks": "^5.1.0",
+ "eslint-plugin-react-refresh": "^0.4.19",
+ "globals": "^15.14.0",
+ "tailwindcss": "^4.1.0",
+ "tw-animate-css": "^1.4.0",
+ "typescript": "~5.7.3",
+ "typescript-eslint": "^8.23.0",
+ "vite": "^7.3.0"
}
}
diff --git a/ui/public/ollama.png b/ui/public/ollama.png
new file mode 100644
index 0000000..9f559ae
Binary files /dev/null and b/ui/public/ollama.png differ
diff --git a/ui/src/App.tsx b/ui/src/App.tsx
index 59ed0ab..6c8fa00 100644
--- a/ui/src/App.tsx
+++ b/ui/src/App.tsx
@@ -4,6 +4,7 @@ import { useProjects, useFeatures, useAgentStatus, useSettings } from './hooks/u
import { useProjectWebSocket } from './hooks/useWebSocket'
import { useFeatureSound } from './hooks/useFeatureSound'
import { useCelebration } from './hooks/useCelebration'
+import { useTheme } from './hooks/useTheme'
import { ProjectSelector } from './components/ProjectSelector'
import { KanbanBoard } from './components/KanbanBoard'
import { AgentControl } from './components/AgentControl'
@@ -24,14 +25,22 @@ import { DevServerControl } from './components/DevServerControl'
import { ViewToggle, type ViewMode } from './components/ViewToggle'
import { DependencyGraph } from './components/DependencyGraph'
import { KeyboardShortcutsHelp } from './components/KeyboardShortcutsHelp'
+import { ThemeSelector } from './components/ThemeSelector'
+import { ResetProjectModal } from './components/ResetProjectModal'
+import { ProjectSetupRequired } from './components/ProjectSetupRequired'
import { getDependencyGraph } from './lib/api'
-import { Loader2, Settings, Moon, Sun } from 'lucide-react'
+import { Loader2, Settings, Moon, Sun, RotateCcw } from 'lucide-react'
import type { Feature } from './lib/types'
+import { Button } from '@/components/ui/button'
+import { Card, CardContent } from '@/components/ui/card'
+import { Badge } from '@/components/ui/badge'
const STORAGE_KEY = 'autocoder-selected-project'
-const DARK_MODE_KEY = 'autocoder-dark-mode'
const VIEW_MODE_KEY = 'autocoder-view-mode'
+// Bottom padding for main content when debug panel is collapsed (40px header + 8px margin)
+const COLLAPSED_DEBUG_PANEL_CLEARANCE = 48
+
function App() {
// Initialize selected project from localStorage
const [selectedProject, setSelectedProject] = useState(() => {
@@ -52,14 +61,8 @@ function App() {
const [showSettings, setShowSettings] = useState(false)
const [showKeyboardHelp, setShowKeyboardHelp] = useState(false)
const [isSpecCreating, setIsSpecCreating] = useState(false)
+ const [showResetModal, setShowResetModal] = useState(false)
const [showSpecChat, setShowSpecChat] = useState(false) // For "Create Spec" button in empty kanban
- const [darkMode, setDarkMode] = useState(() => {
- try {
- return localStorage.getItem(DARK_MODE_KEY) === 'true'
- } catch {
- return false
- }
- })
const [viewMode, setViewMode] = useState(() => {
try {
const stored = localStorage.getItem(VIEW_MODE_KEY)
@@ -75,6 +78,7 @@ function App() {
const { data: settings } = useSettings()
useAgentStatus(selectedProject) // Keep polling for status updates
const wsState = useProjectWebSocket(selectedProject)
+ const { theme, setTheme, darkMode, toggleDarkMode, themes } = useTheme()
// Get has_spec from the selected project
const selectedProjectData = projects?.find(p => p.name === selectedProject)
@@ -88,20 +92,6 @@ function App() {
refetchInterval: 5000, // Refresh every 5 seconds
})
- // Apply dark mode class to document
- useEffect(() => {
- if (darkMode) {
- document.documentElement.classList.add('dark')
- } else {
- document.documentElement.classList.remove('dark')
- }
- try {
- localStorage.setItem(DARK_MODE_KEY, String(darkMode))
- } catch {
- // localStorage not available
- }
- }, [darkMode])
-
// Persist view mode to localStorage
useEffect(() => {
try {
@@ -216,10 +206,18 @@ function App() {
setShowKeyboardHelp(true)
}
+ // R : Open reset modal (when project selected and agent not running)
+ if ((e.key === 'r' || e.key === 'R') && selectedProject && wsState.agentStatus !== 'running') {
+ e.preventDefault()
+ setShowResetModal(true)
+ }
+
// Escape : Close modals
if (e.key === 'Escape') {
if (showKeyboardHelp) {
setShowKeyboardHelp(false)
+ } else if (showResetModal) {
+ setShowResetModal(false)
} else if (showExpandProject) {
setShowExpandProject(false)
} else if (showSettings) {
@@ -238,7 +236,7 @@ function App() {
window.addEventListener('keydown', handleKeyDown)
return () => window.removeEventListener('keydown', handleKeyDown)
- }, [selectedProject, showAddFeature, showExpandProject, selectedFeature, debugOpen, debugActiveTab, assistantOpen, features, showSettings, showKeyboardHelp, isSpecCreating, viewMode])
+ }, [selectedProject, showAddFeature, showExpandProject, selectedFeature, debugOpen, debugActiveTab, assistantOpen, features, showSettings, showKeyboardHelp, isSpecCreating, viewMode, showResetModal, wsState.agentStatus])
// Combine WebSocket progress with feature data
const progress = wsState.progress.total > 0 ? wsState.progress : {
@@ -256,9 +254,9 @@ function App() {
}
return (
-
+
{/* Header */}
-
+
{/* Logo and Title */}
@@ -281,6 +279,7 @@ function App() {
-
+
+
+
+
+ {/* Ollama Mode Indicator */}
+ {settings?.ollama_mode && (
+
@@ -327,17 +357,27 @@ function App() {
{/* Main Content */}
{!selectedProject ? (
-
+
Welcome to AutoCoder
-
+
Select a project from the dropdown above or create a new one to get started.
+ ) : !hasSpec ? (
+ setShowSpecChat(true)}
+ onEditManually={() => {
+ // Open debug panel for the user to see the project path
+ setDebugOpen(true)
+ }}
+ />
) : (