18 Commits

Author SHA1 Message Date
Auto
f24c7cbf62 patch npm version 2026-02-06 09:44:20 +02:00
Auto
f664378775 0.1.5 2026-02-06 09:43:31 +02:00
Auto
a52f191a54 refactor: make Settings UI the single source of truth for API provider
Remove legacy env-var-based provider/mode detection that caused misleading
UI badges (e.g., GLM badge showing when Settings was set to Claude).

Key changes:
- Remove _is_glm_mode() and _is_ollama_mode() env-var sniffing functions
  from server/routers/settings.py; derive glm_mode/ollama_mode purely from
  the api_provider setting
- Remove `import os` from settings router (no longer needed)
- Update schema comments to reflect settings-based derivation
- Remove "(configured via .env)" from badge tooltips in App.tsx
- Remove Kimi/GLM/Ollama/Playwright-headless sections from .env.example;
  add note pointing to Settings UI
- Update CLAUDE.md and README.md documentation to reference Settings UI
  for alternative provider configuration
- Update model IDs from claude-opus-4-5-20251101 to claude-opus-4-6
  across registry, client, chat sessions, tests, and UI defaults
- Add LEGACY_MODEL_MAP with auto-migration in get_all_settings()
- Show model ID subtitle in SettingsModal model selector
- Add Vertex passthrough test for claude-opus-4-6 (no date suffix)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 09:23:06 +02:00
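The settings-based derivation this commit describes can be sketched as follows. This is a hedged illustration, not the router's actual code: `derive_provider_flags` is a hypothetical helper, and the real implementation reads the stored settings dict via `get_all_settings()`.

```python
# Hypothetical sketch of deriving UI badge flags purely from the stored
# api_provider setting, replacing the removed env-var sniffing.
def derive_provider_flags(settings: dict[str, str]) -> tuple[bool, bool]:
    """Return (glm_mode, ollama_mode) from the api_provider setting alone."""
    provider = settings.get("api_provider", "claude")
    return provider == "glm", provider == "ollama"
```

Because no environment variables are consulted, the badge always matches what the Settings UI stores.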
Auto
c0aaac241c npm version patch 2026-02-06 08:10:59 +02:00
Auto
547f1e7d9b 0.1.4 2026-02-06 08:10:39 +02:00
Auto
73d6cfcd36 fix: address PR #163 review findings
- Fix model selection regression: _get_settings_defaults() now checks
  api_model (set by new provider UI) before falling back to legacy
  model setting, ensuring Claude model selection works end-to-end
- Add input validation for provider settings: api_base_url must start
  with http:// or https:// (max 500 chars), api_auth_token max 500
  chars, api_model max 200 chars
- Fix terminal.py misleading import alias: replace
  is_valid_project_name aliased as validate_project_name with direct
  is_valid_project_name import across all 5 call sites

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:10:18 +02:00
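The validation rules listed in this commit can be sketched roughly as below. The function name and error strings are assumptions for illustration; only the limits (500/500/200 chars, http/https prefix) come from the commit message.

```python
# Hedged sketch of the provider-setting validation described above; the real
# implementation lives server-side and may report errors differently.
MAX_BASE_URL = 500
MAX_AUTH_TOKEN = 500
MAX_MODEL = 200

def validate_provider_settings(base_url: str, auth_token: str, model: str) -> list[str]:
    """Return a list of validation errors; an empty list means the input is valid."""
    errors: list[str] = []
    if not base_url.startswith(("http://", "https://")):
        errors.append("api_base_url must start with http:// or https://")
    if len(base_url) > MAX_BASE_URL:
        errors.append("api_base_url exceeds 500 characters")
    if len(auth_token) > MAX_AUTH_TOKEN:
        errors.append("api_auth_token exceeds 500 characters")
    if len(model) > MAX_MODEL:
        errors.append("api_model exceeds 200 characters")
    return errors
```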
Leon van Zyl
d15fd37e33 Merge pull request #163 from nioasoft/feat/api-provider-ui
feat: add API provider selection UI (Claude, Kimi, GLM, Ollama, Custom)
2026-02-06 08:06:37 +02:00
Auto
97a3250a37 update README 2026-02-06 07:49:28 +02:00
nioasoft
a752ece70c fix: wrong import alias overwrote project_name with bool
assistant_chat.py and spec_creation.py imported is_valid_project_name
(returns bool) aliased as validate_project_name. When used as
`project_name = validate_project_name(project_name)`, the project name
was replaced with True, causing "Project not found in registry" errors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 06:20:03 +02:00
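A minimal reproduction of this bug, with `is_valid_project_name` as a simplified stand-in for the real validator in `server/utils/validation.py`:

```python
# The validator returns a bool, not the sanitized name.
def is_valid_project_name(name: str) -> bool:
    return bool(name) and name.replace("-", "_").isidentifier()

# The buggy import aliased the bool-returning checker as validate_project_name:
validate_project_name = is_valid_project_name

project_name = "my-app"
# Intended to validate-and-return the name, but actually replaces it with True:
project_name = validate_project_name(project_name)
print(project_name)  # True -- the project name is gone, so registry lookup fails
```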
nioasoft
3c61496021 fix: clean up stuck features on agent start
Ensures features stuck from a previous crash are reset before
launching a new agent, not just on stop/crash going forward.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 06:02:30 +02:00
nioasoft
6d4a198380 fix: remove unused API_ENV_VARS imports from chat sessions
The provider refactor moved env building to get_effective_sdk_env(),
making these imports unused. Fixes ruff F401 lint errors in CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 05:57:47 +02:00
nioasoft
13785325d7 feat: add API provider selection UI and fix stuck features on agent crash
API Provider Selection:
- Add provider switcher in Settings modal (Claude, Kimi, GLM, Ollama, Custom)
- Auth tokens stored locally only (registry.db), never returned by API
- get_effective_sdk_env() builds provider-specific env vars for agent subprocess
- All chat sessions (spec, expand, assistant) use provider settings
- Backward compatible: defaults to Claude, env vars still work as override

Fix Stuck Features:
- Add _cleanup_stale_features() to process_manager.py
- Reset in_progress features when agent stops, crashes, or fails healthcheck
- Prevents features from being permanently stuck after rate limit crashes
- Uses separate SQLAlchemy engine to avoid session conflicts with subprocess

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 05:55:51 +02:00
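The stuck-feature reset can be sketched as below. The real code uses a separate SQLAlchemy engine; this sketch uses stdlib `sqlite3` to stay dependency-free, and the table/column names (`features.status`, `in_progress`, `pending`) are assumptions, not the project's actual schema.

```python
import sqlite3

def cleanup_stale_features(db_path: str) -> int:
    """Reset features stuck in_progress from a previous crash back to pending."""
    # A fresh connection per call mirrors the separate-engine idea: no shared
    # sessions with the agent subprocess.
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "UPDATE features SET status = 'pending' WHERE status = 'in_progress'"
        )
        conn.commit()
        return cur.rowcount  # number of features that were reset
    finally:
        conn.close()
```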
nioasoft
70131f2271 fix: accept WebSocket before validation to prevent opaque 403 errors
All WebSocket endpoints now call websocket.accept() before any
validation checks. Previously, closing the connection before accepting
caused Starlette to return an opaque HTTP 403 instead of a meaningful
error message.

Changes:
- Server: Accept WebSocket first, then send JSON error + close with
  4xxx code if validation fails (expand, spec, assistant, terminal,
  main project WS)
- Server: ConnectionManager.connect() no longer calls accept() to
  avoid double-accept
- UI: Gate expand button and keyboard shortcut on hasSpec
- UI: Skip WebSocket reconnection on application error codes (4000-4999)
- UI: Update keyboard shortcuts help text

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 05:46:24 +02:00
nioasoft
035e8fdfca fix: accept WebSocket before validation to prevent opaque 403 errors
All 5 WebSocket endpoints (expand, spec, assistant, terminal, project)
were closing the connection before calling accept() when validation
failed. Starlette converts pre-accept close into an HTTP 403, giving
clients no meaningful error information.

Server changes:
- Move websocket.accept() before all validation checks in every WS handler
- Send JSON error message before closing so clients get actionable errors
- Fix validate_project_name usage (raises HTTPException, not returns bool)
- ConnectionManager.connect() no longer calls accept() (caller's job)

Client changes:
- All 3 WS hooks (useWebSocket, useExpandChat, useSpecChat) skip
  reconnection on 4xxx close codes (application errors won't self-resolve)
- Gate expand button, keyboard shortcut, and modal on hasSpec
- Add hasSpec to useEffect dependency array to prevent stale closure
- Update keyboard shortcuts help text for E key context

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-05 21:08:46 +02:00
Auto
f4facb3200 update lock 2026-02-05 09:55:39 +02:00
Auto
2f8a6a6274 0.1.3 2026-02-05 09:54:57 +02:00
Auto
76246bad69 fix: add temp_cleanup.py to npm package files whitelist
PR #158 added temp_cleanup.py and its import in autonomous_agent_demo.py
but did not include the file in the package.json "files" array. This
caused ModuleNotFoundError for npm installations since the module was
missing from the published tarball.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 09:54:33 +02:00
Auto
b736fb7382 update packagelock 2026-02-05 08:53:26 +02:00
32 changed files with 720 additions and 238 deletions

View File

@@ -9,11 +9,6 @@
 # - webkit: Safari engine
 # - msedge: Microsoft Edge
 # PLAYWRIGHT_BROWSER=firefox
-#
-# PLAYWRIGHT_HEADLESS: Run browser without visible window
-# - true: Browser runs in background, saves CPU (default)
-# - false: Browser opens a visible window (useful for debugging)
-# PLAYWRIGHT_HEADLESS=true
 
 # Extra Read Paths (Optional)
 # Comma-separated list of absolute paths for read-only access to external directories.
@@ -25,40 +20,17 @@
 # Google Cloud Vertex AI Configuration (Optional)
 # To use Claude via Vertex AI on Google Cloud Platform, uncomment and set these variables.
 # Requires: gcloud CLI installed and authenticated (run: gcloud auth application-default login)
-# Note: Use @ instead of - in model names (e.g., claude-opus-4-5@20251101)
+# Note: Use @ instead of - in model names for date-suffixed models (e.g., claude-sonnet-4-5@20250929)
 #
 # CLAUDE_CODE_USE_VERTEX=1
 # CLOUD_ML_REGION=us-east5
 # ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
-# ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
+# ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-6
 # ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
 # ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022
 
-# GLM/Alternative API Configuration (Optional)
-# To use Zhipu AI's GLM models instead of Claude, uncomment and set these variables.
-# This only affects AutoForge - your global Claude Code settings remain unchanged.
-# Get an API key at: https://z.ai/subscribe
-#
-# ANTHROPIC_BASE_URL=https://api.z.ai/api/anthropic
-# ANTHROPIC_AUTH_TOKEN=your-zhipu-api-key
-# API_TIMEOUT_MS=3000000
-# ANTHROPIC_DEFAULT_SONNET_MODEL=glm-4.7
-# ANTHROPIC_DEFAULT_OPUS_MODEL=glm-4.7
-# ANTHROPIC_DEFAULT_HAIKU_MODEL=glm-4.5-air
-
-# Ollama Local Model Configuration (Optional)
-# To use local models via Ollama instead of Claude, uncomment and set these variables.
-# Requires Ollama v0.14.0+ with Anthropic API compatibility.
-# See: https://ollama.com/blog/claude
-#
-# ANTHROPIC_BASE_URL=http://localhost:11434
-# ANTHROPIC_AUTH_TOKEN=ollama
-# API_TIMEOUT_MS=3000000
-# ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
-# ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
-# ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
-#
-# Model recommendations:
-# - For best results, use a capable coding model like qwen3-coder or deepseek-coder-v2
-# - You can use the same model for all tiers, or different models per tier
-# - Larger models (70B+) work best for Opus tier, smaller (7B-20B) for Haiku
+# ===================
+# Alternative API Providers (GLM, Ollama, Kimi, Custom)
+# ===================
+# Configure alternative providers via the Settings UI (gear icon > API Provider).
+# The Settings UI is the recommended way to switch providers and models.

View File

@@ -408,44 +408,23 @@ Run coding agents via Google Cloud Vertex AI:
 CLAUDE_CODE_USE_VERTEX=1
 CLOUD_ML_REGION=us-east5
 ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
-ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
+ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-6
 ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
 ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022
 ```
 
 **Note:** Use `@` instead of `-` in model names for Vertex AI.
 
-### Ollama Local Models (Optional)
-
-Run coding agents using local models via Ollama v0.14.0+:
-
-1. Install Ollama: https://ollama.com
-2. Start Ollama: `ollama serve`
-3. Pull a coding model: `ollama pull qwen3-coder`
-4. Configure `.env`:
-```
-ANTHROPIC_BASE_URL=http://localhost:11434
-ANTHROPIC_AUTH_TOKEN=ollama
-API_TIMEOUT_MS=3000000
-ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
-ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
-ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
-```
-5. Run AutoForge normally - it will use your local Ollama models
-
-**Recommended coding models:**
-- `qwen3-coder` - Good balance of speed and capability
-- `deepseek-coder-v2` - Strong coding performance
-- `codellama` - Meta's code-focused model
-
-**Model tier mapping:**
-- Use the same model for all tiers, or map different models per capability level
-- Larger models (70B+) work best for Opus tier
-- Smaller models (7B-20B) work well for Haiku tier
-
-**Known limitations:**
-- Smaller context windows than Claude (model-dependent)
-- Extended context beta disabled (not supported by Ollama)
+### Alternative API Providers (GLM, Ollama, Kimi, Custom)
+
+Alternative providers are configured via the **Settings UI** (gear icon > API Provider section). Select a provider, set the base URL, auth token, and model — no `.env` changes needed.
+
+**Available providers:** Claude (default), GLM (Zhipu AI), Ollama (local models), Kimi (Moonshot), Custom
+
+**Ollama notes:**
+- Requires Ollama v0.14.0+ with Anthropic API compatibility
+- Install: https://ollama.com → `ollama serve` → `ollama pull qwen3-coder`
+- Recommended models: `qwen3-coder`, `deepseek-coder-v2`, `codellama`
 - Performance depends on local hardware (GPU recommended)
 
 ## Claude Code Integration

View File

@@ -6,9 +6,9 @@ A long-running autonomous coding agent powered by the Claude Agent SDK. This too
 ## Video Tutorial
 
-[![Watch the tutorial](https://img.youtube.com/vi/lGWFlpffWk4/hqdefault.jpg)](https://youtu.be/lGWFlpffWk4)
+[![Watch the tutorial](https://img.youtube.com/vi/nKiPOxDpcJY/hqdefault.jpg)](https://youtu.be/nKiPOxDpcJY)
 
-> **[Watch the setup and usage guide →](https://youtu.be/lGWFlpffWk4)**
+> **[Watch the setup and usage guide →](https://youtu.be/nKiPOxDpcJY)**
 
 ---
@@ -326,37 +326,13 @@ When test progress increases, the agent sends:
 }
 ```
 
-### Using GLM Models (Alternative to Claude)
+### Alternative API Providers (GLM, Ollama, Kimi, Custom)
 
-Add these variables to your `.env` file to use Zhipu AI's GLM models:
+Alternative providers are configured via the **Settings UI** (gear icon > API Provider). Select your provider, set the base URL, auth token, and model directly in the UI — no `.env` changes needed.
 
-```bash
-ANTHROPIC_BASE_URL=https://api.z.ai/api/anthropic
-ANTHROPIC_AUTH_TOKEN=your-zhipu-api-key
-API_TIMEOUT_MS=3000000
-ANTHROPIC_DEFAULT_SONNET_MODEL=glm-4.7
-ANTHROPIC_DEFAULT_OPUS_MODEL=glm-4.7
-ANTHROPIC_DEFAULT_HAIKU_MODEL=glm-4.5-air
-```
-
-This routes AutoForge's API requests through Zhipu's Claude-compatible API, allowing you to use GLM-4.7 and other models. **This only affects AutoForge** - your global Claude Code settings remain unchanged.
-
-Get an API key at: https://z.ai/subscribe
-
-### Using Ollama Local Models
-
-Add these variables to your `.env` file to run agents with local models via Ollama v0.14.0+:
-
-```bash
-ANTHROPIC_BASE_URL=http://localhost:11434
-ANTHROPIC_AUTH_TOKEN=ollama
-API_TIMEOUT_MS=3000000
-ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
-ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
-ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
-```
-
-See the [CLAUDE.md](CLAUDE.md) for recommended models and known limitations.
+Available providers: **Claude** (default), **GLM** (Zhipu AI), **Ollama** (local models), **Kimi** (Moonshot), **Custom**
+
+For Ollama, install [Ollama v0.14.0+](https://ollama.com), run `ollama serve`, and pull a coding model (e.g., `ollama pull qwen3-coder`). Then select "Ollama" in the Settings UI.
 
 ### Using Vertex AI
@@ -366,7 +342,7 @@ Add these variables to your `.env` file to run agents via Google Cloud Vertex AI
 CLAUDE_CODE_USE_VERTEX=1
 CLOUD_ML_REGION=us-east5
 ANTHROPIC_VERTEX_PROJECT_ID=your-gcp-project-id
-ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-5@20251101
+ANTHROPIC_DEFAULT_OPUS_MODEL=claude-opus-4-6
 ANTHROPIC_DEFAULT_SONNET_MODEL=claude-sonnet-4-5@20250929
 ANTHROPIC_DEFAULT_HAIKU_MODEL=claude-3-5-haiku@20241022
 ```

bin/autoforge.js Normal file → Executable file
View File

View File

@@ -46,8 +46,9 @@ def convert_model_for_vertex(model: str) -> str:
     """
     Convert model name format for Vertex AI compatibility.
 
-    Vertex AI uses @ to separate model name from version (e.g., claude-opus-4-5@20251101)
-    while the Anthropic API uses - (e.g., claude-opus-4-5-20251101).
+    Vertex AI uses @ to separate model name from version (e.g., claude-sonnet-4-5@20250929)
+    while the Anthropic API uses - (e.g., claude-sonnet-4-5-20250929).
+    Models without a date suffix (e.g., claude-opus-4-6) pass through unchanged.
 
     Args:
         model: Model name in Anthropic format (with hyphens)
@@ -61,7 +62,7 @@ def convert_model_for_vertex(model: str) -> str:
         return model
 
     # Pattern: claude-{name}-{version}-{date} -> claude-{name}-{version}@{date}
-    # Example: claude-opus-4-5-20251101 -> claude-opus-4-5@20251101
+    # Example: claude-sonnet-4-5-20250929 -> claude-sonnet-4-5@20250929
     # The date is always 8 digits at the end
     match = re.match(r'^(claude-.+)-(\d{8})$', model)
    if match:
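The date-suffix conversion shown in this diff can be exercised standalone. This re-implements only the regex branch; the full function also has an earlier early-return (visible in the hunk above) that this sketch omits.

```python
import re

# Re-implementation of the regex branch of convert_model_for_vertex from the
# diff above: date-suffixed Anthropic IDs use '-', Vertex uses '@'; undated
# IDs (like claude-opus-4-6) pass through unchanged.
def convert_model_for_vertex(model: str) -> str:
    match = re.match(r'^(claude-.+)-(\d{8})$', model)
    if match:
        return f"{match.group(1)}@{match.group(2)}"
    return model

print(convert_model_for_vertex("claude-sonnet-4-5-20250929"))  # claude-sonnet-4-5@20250929
print(convert_model_for_vertex("claude-opus-4-6"))             # claude-opus-4-6
```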

View File

@@ -15,6 +15,7 @@ API_ENV_VARS: list[str] = [
     # Core API configuration
     "ANTHROPIC_BASE_URL",   # Custom API endpoint (e.g., https://api.z.ai/api/anthropic)
     "ANTHROPIC_AUTH_TOKEN", # API authentication token
+    "ANTHROPIC_API_KEY",    # API key (used by Kimi and other providers)
     "API_TIMEOUT_MS",       # Request timeout in milliseconds
     # Model tier overrides
     "ANTHROPIC_DEFAULT_SONNET_MODEL", # Model override for Sonnet

View File

@@ -1,6 +1,6 @@
 {
   "name": "autoforge-ai",
-  "version": "0.1.2",
+  "version": "0.1.5",
   "description": "Autonomous coding agent with web UI - build complete apps with AI",
   "license": "AGPL-3.0",
   "bin": {
@@ -34,6 +34,7 @@
     "registry.py",
     "rate_limit_utils.py",
     "security.py",
+    "temp_cleanup.py",
     "requirements-prod.txt",
     "pyproject.toml",
     ".env.example",

View File

@@ -46,10 +46,16 @@ def _migrate_registry_dir() -> None:
 # Available models with display names
 # To add a new model: add an entry here with {"id": "model-id", "name": "Display Name"}
 AVAILABLE_MODELS = [
-    {"id": "claude-opus-4-5-20251101", "name": "Claude Opus 4.5"},
-    {"id": "claude-sonnet-4-5-20250929", "name": "Claude Sonnet 4.5"},
+    {"id": "claude-opus-4-6", "name": "Claude Opus"},
+    {"id": "claude-sonnet-4-5-20250929", "name": "Claude Sonnet"},
 ]
 
+# Map legacy model IDs to their current replacements.
+# Used by get_all_settings() to auto-migrate stale values on first read after upgrade.
+LEGACY_MODEL_MAP = {
+    "claude-opus-4-5-20251101": "claude-opus-4-6",
+}
+
 # List of valid model IDs (derived from AVAILABLE_MODELS)
 VALID_MODELS = [m["id"] for m in AVAILABLE_MODELS]
@@ -59,7 +65,7 @@ VALID_MODELS = [m["id"] for m in AVAILABLE_MODELS]
 _env_default_model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL")
 if _env_default_model is not None:
     _env_default_model = _env_default_model.strip()
-DEFAULT_MODEL = _env_default_model or "claude-opus-4-5-20251101"
+DEFAULT_MODEL = _env_default_model or "claude-opus-4-6"
 
 # Ensure env-provided DEFAULT_MODEL is in VALID_MODELS for validation consistency
 # (idempotent: only adds if missing, doesn't alter AVAILABLE_MODELS semantics)
@@ -598,6 +604,9 @@ def get_all_settings() -> dict[str, str]:
     """
     Get all settings as a dictionary.
 
+    Automatically migrates legacy model IDs (e.g. claude-opus-4-5-20251101 -> claude-opus-4-6)
+    on first read after upgrade. This is a one-time silent migration.
+
     Returns:
         Dictionary mapping setting keys to values.
     """
@@ -606,9 +615,145 @@ def get_all_settings() -> dict[str, str]:
         session = SessionLocal()
         try:
             settings = session.query(Settings).all()
-            return {s.key: s.value for s in settings}
+            result = {s.key: s.value for s in settings}
+
+            # Auto-migrate legacy model IDs
+            migrated = False
+            for key in ("model", "api_model"):
+                old_id = result.get(key)
+                if old_id and old_id in LEGACY_MODEL_MAP:
+                    new_id = LEGACY_MODEL_MAP[old_id]
+                    setting = session.query(Settings).filter(Settings.key == key).first()
+                    if setting:
+                        setting.value = new_id
+                        setting.updated_at = datetime.now()
+                        result[key] = new_id
+                        migrated = True
+                        logger.info("Migrated setting '%s': %s -> %s", key, old_id, new_id)
+            if migrated:
+                session.commit()
+            return result
         finally:
             session.close()
     except Exception as e:
         logger.warning("Failed to read settings: %s", e)
         return {}
+
+
+# =============================================================================
+# API Provider Definitions
+# =============================================================================
+
+API_PROVIDERS: dict[str, dict[str, Any]] = {
+    "claude": {
+        "name": "Claude (Anthropic)",
+        "base_url": None,
+        "requires_auth": False,
+        "models": [
+            {"id": "claude-opus-4-6", "name": "Claude Opus"},
+            {"id": "claude-sonnet-4-5-20250929", "name": "Claude Sonnet"},
+        ],
+        "default_model": "claude-opus-4-6",
+    },
+    "kimi": {
+        "name": "Kimi K2.5 (Moonshot)",
+        "base_url": "https://api.kimi.com/coding/",
+        "requires_auth": True,
+        "auth_env_var": "ANTHROPIC_API_KEY",
+        "models": [{"id": "kimi-k2.5", "name": "Kimi K2.5"}],
+        "default_model": "kimi-k2.5",
+    },
+    "glm": {
+        "name": "GLM (Zhipu AI)",
+        "base_url": "https://api.z.ai/api/anthropic",
+        "requires_auth": True,
+        "auth_env_var": "ANTHROPIC_AUTH_TOKEN",
+        "models": [
+            {"id": "glm-4.7", "name": "GLM 4.7"},
+            {"id": "glm-4.5-air", "name": "GLM 4.5 Air"},
+        ],
+        "default_model": "glm-4.7",
+    },
+    "ollama": {
+        "name": "Ollama (Local)",
+        "base_url": "http://localhost:11434",
+        "requires_auth": False,
+        "models": [
+            {"id": "qwen3-coder", "name": "Qwen3 Coder"},
+            {"id": "deepseek-coder-v2", "name": "DeepSeek Coder V2"},
+        ],
+        "default_model": "qwen3-coder",
+    },
+    "custom": {
+        "name": "Custom Provider",
+        "base_url": "",
+        "requires_auth": True,
+        "auth_env_var": "ANTHROPIC_AUTH_TOKEN",
+        "models": [],
+        "default_model": "",
+    },
+}
+
+
+def get_effective_sdk_env() -> dict[str, str]:
+    """Build environment variable dict for Claude SDK based on current API provider settings.
+
+    When api_provider is "claude" (or unset), falls back to existing env vars (current behavior).
+    For other providers, builds env dict from stored settings (api_base_url, api_auth_token, api_model).
+
+    Returns:
+        Dict ready to merge into subprocess env or pass to SDK.
+    """
+    all_settings = get_all_settings()
+    provider_id = all_settings.get("api_provider", "claude")
+
+    if provider_id == "claude":
+        # Default behavior: forward existing env vars
+        from env_constants import API_ENV_VARS
+        sdk_env: dict[str, str] = {}
+        for var in API_ENV_VARS:
+            value = os.getenv(var)
+            if value:
+                sdk_env[var] = value
+        return sdk_env
+
+    # Alternative provider: build env from settings
+    provider = API_PROVIDERS.get(provider_id)
+    if not provider:
+        logger.warning("Unknown API provider '%s', falling back to claude", provider_id)
+        from env_constants import API_ENV_VARS
+        sdk_env = {}
+        for var in API_ENV_VARS:
+            value = os.getenv(var)
+            if value:
+                sdk_env[var] = value
+        return sdk_env
+
+    sdk_env = {}
+
+    # Base URL
+    base_url = all_settings.get("api_base_url") or provider.get("base_url")
+    if base_url:
+        sdk_env["ANTHROPIC_BASE_URL"] = base_url
+
+    # Auth token
+    auth_token = all_settings.get("api_auth_token")
+    if auth_token:
+        auth_env_var = provider.get("auth_env_var", "ANTHROPIC_AUTH_TOKEN")
+        sdk_env[auth_env_var] = auth_token
+
+    # Model - set all three tier overrides to the same model
+    model = all_settings.get("api_model") or provider.get("default_model")
+    if model:
+        sdk_env["ANTHROPIC_DEFAULT_OPUS_MODEL"] = model
+        sdk_env["ANTHROPIC_DEFAULT_SONNET_MODEL"] = model
+        sdk_env["ANTHROPIC_DEFAULT_HAIKU_MODEL"] = model
+
+    # Timeout
+    timeout = all_settings.get("api_timeout_ms")
+    if timeout:
+        sdk_env["API_TIMEOUT_MS"] = timeout
+
+    return sdk_env

View File

@@ -32,7 +32,7 @@ def _get_settings_defaults() -> tuple[bool, str, int, bool, int]:
     settings = get_all_settings()
 
     yolo_mode = (settings.get("yolo_mode") or "false").lower() == "true"
-    model = settings.get("model", DEFAULT_MODEL)
+    model = settings.get("api_model") or settings.get("model", DEFAULT_MODEL)
 
     # Parse testing agent settings with defaults
     try:
View File

@@ -26,7 +26,7 @@ from ..services.assistant_database import (
     get_conversations,
 )
 from ..utils.project_helpers import get_project_path as _get_project_path
-from ..utils.validation import is_valid_project_name as validate_project_name
+from ..utils.validation import validate_project_name
 
 logger = logging.getLogger(__name__)
@@ -217,20 +217,26 @@ async def assistant_chat_websocket(websocket: WebSocket, project_name: str):
     - {"type": "error", "content": "..."} - Error message
     - {"type": "pong"} - Keep-alive pong
     """
-    if not validate_project_name(project_name):
+    # Always accept WebSocket first to avoid opaque 403 errors
+    await websocket.accept()
+
+    try:
+        project_name = validate_project_name(project_name)
+    except HTTPException:
+        await websocket.send_json({"type": "error", "content": "Invalid project name"})
         await websocket.close(code=4000, reason="Invalid project name")
         return
 
     project_dir = _get_project_path(project_name)
     if not project_dir:
+        await websocket.send_json({"type": "error", "content": "Project not found in registry"})
         await websocket.close(code=4004, reason="Project not found in registry")
         return
     if not project_dir.exists():
+        await websocket.send_json({"type": "error", "content": "Project directory not found"})
         await websocket.close(code=4004, reason="Project directory not found")
         return
 
-    await websocket.accept()
     logger.info(f"Assistant WebSocket connected for project: {project_name}")
 
     session: Optional[AssistantChatSession] = None

View File

@@ -104,19 +104,26 @@ async def expand_project_websocket(websocket: WebSocket, project_name: str):
     - {"type": "error", "content": "..."} - Error message
     - {"type": "pong"} - Keep-alive pong
     """
+    # Always accept the WebSocket first to avoid opaque 403 errors.
+    # Starlette returns 403 if we close before accepting.
+    await websocket.accept()
+
     try:
         project_name = validate_project_name(project_name)
     except HTTPException:
+        await websocket.send_json({"type": "error", "content": "Invalid project name"})
         await websocket.close(code=4000, reason="Invalid project name")
         return
 
     # Look up project directory from registry
     project_dir = _get_project_path(project_name)
     if not project_dir:
+        await websocket.send_json({"type": "error", "content": "Project not found in registry"})
         await websocket.close(code=4004, reason="Project not found in registry")
         return
     if not project_dir.exists():
+        await websocket.send_json({"type": "error", "content": "Project directory not found"})
         await websocket.close(code=4004, reason="Project directory not found")
         return
@@ -124,11 +131,10 @@ async def expand_project_websocket(websocket: WebSocket, project_name: str):
     from autoforge_paths import get_prompts_dir
     spec_path = get_prompts_dir(project_dir) / "app_spec.txt"
     if not spec_path.exists():
+        await websocket.send_json({"type": "error", "content": "Project has no spec. Create a spec first before expanding."})
         await websocket.close(code=4004, reason="Project has no spec. Create spec first.")
         return
 
-    await websocket.accept()
-
     session: Optional[ExpandChatSession] = None
 
     try:

View File

@@ -7,12 +7,11 @@ Settings are stored in the registry database and shared across all projects.
""" """
import mimetypes import mimetypes
import os
import sys import sys
from fastapi import APIRouter from fastapi import APIRouter
from ..schemas import ModelInfo, ModelsResponse, SettingsResponse, SettingsUpdate from ..schemas import ModelInfo, ModelsResponse, ProviderInfo, ProvidersResponse, SettingsResponse, SettingsUpdate
from ..services.chat_constants import ROOT_DIR from ..services.chat_constants import ROOT_DIR
# Mimetype fix for Windows - must run before StaticFiles is mounted # Mimetype fix for Windows - must run before StaticFiles is mounted
@@ -23,9 +22,11 @@ if str(ROOT_DIR) not in sys.path:
sys.path.insert(0, str(ROOT_DIR)) sys.path.insert(0, str(ROOT_DIR))
from registry import ( from registry import (
API_PROVIDERS,
AVAILABLE_MODELS, AVAILABLE_MODELS,
DEFAULT_MODEL, DEFAULT_MODEL,
get_all_settings, get_all_settings,
get_setting,
set_setting, set_setting,
) )
@@ -37,26 +38,40 @@ def _parse_yolo_mode(value: str | None) -> bool:
return (value or "false").lower() == "true" return (value or "false").lower() == "true"
def _is_glm_mode() -> bool: @router.get("/providers", response_model=ProvidersResponse)
"""Check if GLM API is configured via environment variables.""" async def get_available_providers():
base_url = os.getenv("ANTHROPIC_BASE_URL", "") """Get list of available API providers."""
# GLM mode is when ANTHROPIC_BASE_URL is set but NOT pointing to Ollama current = get_setting("api_provider", "claude") or "claude"
return bool(base_url) and not _is_ollama_mode() providers = []
for pid, pdata in API_PROVIDERS.items():
providers.append(ProviderInfo(
def _is_ollama_mode() -> bool: id=pid,
"""Check if Ollama API is configured via environment variables.""" name=pdata["name"],
base_url = os.getenv("ANTHROPIC_BASE_URL", "") base_url=pdata.get("base_url"),
return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url models=[ModelInfo(id=m["id"], name=m["name"]) for m in pdata.get("models", [])],
default_model=pdata.get("default_model", ""),
requires_auth=pdata.get("requires_auth", False),
))
return ProvidersResponse(providers=providers, current=current)
 @router.get("/models", response_model=ModelsResponse)
 async def get_available_models():
     """Get list of available models.

-    Frontend should call this to get the current list of models
-    instead of hardcoding them.
+    Returns models for the currently selected API provider.
     """
+    current_provider = get_setting("api_provider", "claude") or "claude"
+    provider = API_PROVIDERS.get(current_provider)
+    if provider and current_provider != "claude":
+        provider_models = provider.get("models", [])
+        return ModelsResponse(
+            models=[ModelInfo(id=m["id"], name=m["name"]) for m in provider_models],
+            default=provider.get("default_model", ""),
+        )
+    # Default: return Claude models
     return ModelsResponse(
         models=[ModelInfo(id=m["id"], name=m["name"]) for m in AVAILABLE_MODELS],
         default=DEFAULT_MODEL,
@@ -85,14 +100,23 @@ async def get_settings():
     """Get current global settings."""
     all_settings = get_all_settings()

+    api_provider = all_settings.get("api_provider", "claude")
+    glm_mode = api_provider == "glm"
+    ollama_mode = api_provider == "ollama"
+
     return SettingsResponse(
         yolo_mode=_parse_yolo_mode(all_settings.get("yolo_mode")),
         model=all_settings.get("model", DEFAULT_MODEL),
-        glm_mode=_is_glm_mode(),
-        ollama_mode=_is_ollama_mode(),
+        glm_mode=glm_mode,
+        ollama_mode=ollama_mode,
         testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
         playwright_headless=_parse_bool(all_settings.get("playwright_headless"), default=True),
         batch_size=_parse_int(all_settings.get("batch_size"), 3),
+        api_provider=api_provider,
+        api_base_url=all_settings.get("api_base_url"),
+        api_has_auth_token=bool(all_settings.get("api_auth_token")),
+        api_model=all_settings.get("api_model"),
     )
@@ -114,14 +138,47 @@ async def update_settings(update: SettingsUpdate):
     if update.batch_size is not None:
         set_setting("batch_size", str(update.batch_size))

+    # API provider settings
+    if update.api_provider is not None:
+        old_provider = get_setting("api_provider", "claude")
+        set_setting("api_provider", update.api_provider)
+        # When provider changes, auto-set defaults for the new provider
+        if update.api_provider != old_provider:
+            provider = API_PROVIDERS.get(update.api_provider)
+            if provider:
+                # Auto-set base URL from provider definition
+                if provider.get("base_url"):
+                    set_setting("api_base_url", provider["base_url"])
+                # Auto-set model to provider's default
+                if provider.get("default_model") and update.api_model is None:
+                    set_setting("api_model", provider["default_model"])
+    if update.api_base_url is not None:
+        set_setting("api_base_url", update.api_base_url)
+    if update.api_auth_token is not None:
+        set_setting("api_auth_token", update.api_auth_token)
+    if update.api_model is not None:
+        set_setting("api_model", update.api_model)
+
     # Return updated settings
     all_settings = get_all_settings()
+    api_provider = all_settings.get("api_provider", "claude")
+    glm_mode = api_provider == "glm"
+    ollama_mode = api_provider == "ollama"
+
     return SettingsResponse(
         yolo_mode=_parse_yolo_mode(all_settings.get("yolo_mode")),
         model=all_settings.get("model", DEFAULT_MODEL),
-        glm_mode=_is_glm_mode(),
-        ollama_mode=_is_ollama_mode(),
+        glm_mode=glm_mode,
+        ollama_mode=ollama_mode,
         testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
         playwright_headless=_parse_bool(all_settings.get("playwright_headless"), default=True),
         batch_size=_parse_int(all_settings.get("batch_size"), 3),
+        api_provider=api_provider,
+        api_base_url=all_settings.get("api_base_url"),
+        api_has_auth_token=bool(all_settings.get("api_auth_token")),
+        api_model=all_settings.get("api_model"),
     )

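The hunks above replace env-var sniffing with flags derived purely from the stored `api_provider` setting. A minimal framework-free sketch of that derivation (function name is illustrative, not the project's actual helper):

```python
def derive_provider_flags(all_settings: dict) -> dict:
    """Derive UI mode flags from the stored api_provider setting only.

    Mirrors the settings-router change: no environment-variable sniffing,
    the Settings UI value is the single source of truth.
    """
    api_provider = all_settings.get("api_provider", "claude")
    return {
        "api_provider": api_provider,
        "glm_mode": api_provider == "glm",
        "ollama_mode": api_provider == "ollama",
        # Never expose the token itself, only whether one is stored
        "api_has_auth_token": bool(all_settings.get("api_auth_token")),
    }

print(derive_provider_flags({"api_provider": "glm", "api_auth_token": "tok"}))
```

Because the flags come from one stored value, the UI badge can no longer disagree with the Settings modal, which was the bug this commit fixes.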

@@ -21,7 +21,7 @@ from ..services.spec_chat_session import (
     remove_session,
 )
 from ..utils.project_helpers import get_project_path as _get_project_path
-from ..utils.validation import is_valid_project_name as validate_project_name
+from ..utils.validation import is_valid_project_name, validate_project_name

 logger = logging.getLogger(__name__)
@@ -49,7 +49,7 @@ async def list_spec_sessions():
 @router.get("/sessions/{project_name}", response_model=SpecSessionStatus)
 async def get_session_status(project_name: str):
     """Get status of a spec creation session."""
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     session = get_session(project_name)
@@ -67,7 +67,7 @@ async def get_session_status(project_name: str):
 @router.delete("/sessions/{project_name}")
 async def cancel_session(project_name: str):
     """Cancel and remove a spec creation session."""
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     session = get_session(project_name)
@@ -95,7 +95,7 @@ async def get_spec_file_status(project_name: str):
     This is used for polling to detect when Claude has finished writing spec files.
     Claude writes this status file as the final step after completing all spec work.
     """
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     project_dir = _get_project_path(project_name)
@@ -166,22 +166,28 @@ async def spec_chat_websocket(websocket: WebSocket, project_name: str):
     - {"type": "error", "content": "..."} - Error message
     - {"type": "pong"} - Keep-alive pong
     """
-    if not validate_project_name(project_name):
+    # Always accept WebSocket first to avoid opaque 403 errors
+    await websocket.accept()
+
+    try:
+        project_name = validate_project_name(project_name)
+    except HTTPException:
+        await websocket.send_json({"type": "error", "content": "Invalid project name"})
         await websocket.close(code=4000, reason="Invalid project name")
         return

     # Look up project directory from registry
     project_dir = _get_project_path(project_name)
     if not project_dir:
+        await websocket.send_json({"type": "error", "content": "Project not found in registry"})
         await websocket.close(code=4004, reason="Project not found in registry")
         return

     if not project_dir.exists():
+        await websocket.send_json({"type": "error", "content": "Project directory not found"})
         await websocket.close(code=4004, reason="Project directory not found")
         return

-    await websocket.accept()
     session: Optional[SpecChatSession] = None

     try:

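The WebSocket hunks above all follow the same accept-first pattern: accept the socket before validating, so the client receives a structured error frame and a custom close code instead of an opaque handshake 403. A self-contained sketch of that ordering, using a stand-in socket class (not the real Starlette `WebSocket`):

```python
import asyncio

class FakeWebSocket:
    """Stand-in for a Starlette WebSocket; just enough to show call order."""
    def __init__(self):
        self.events = []
    async def accept(self):
        self.events.append("accept")
    async def send_json(self, payload):
        self.events.append(("json", payload))
    async def close(self, code=1000, reason=""):
        self.events.append(("close", code, reason))

async def ws_endpoint(ws, project_name, valid_names):
    # Accept first so the client sees a structured error, not a bare 403
    await ws.accept()
    if project_name not in valid_names:
        await ws.send_json({"type": "error", "content": "Invalid project name"})
        await ws.close(code=4000, reason="Invalid project name")
        return
    await ws.send_json({"type": "ok"})

ws = FakeWebSocket()
asyncio.run(ws_endpoint(ws, "nope", {"demo"}))
print(ws.events[0])  # "accept" always precedes any error frame or close
```

The 4000-range close codes are application-defined, matching the `4000`/`4004` codes used in the diff.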

@@ -26,7 +26,7 @@ from ..services.terminal_manager import (
     stop_terminal_session,
 )
 from ..utils.project_helpers import get_project_path as _get_project_path
-from ..utils.validation import is_valid_project_name as validate_project_name
+from ..utils.validation import is_valid_project_name

 logger = logging.getLogger(__name__)
@@ -89,7 +89,7 @@ async def list_project_terminals(project_name: str) -> list[TerminalInfoResponse
     Returns:
         List of terminal info objects
     """
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     project_dir = _get_project_path(project_name)
@@ -122,7 +122,7 @@ async def create_project_terminal(
     Returns:
         The created terminal info
     """
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     project_dir = _get_project_path(project_name)
@@ -148,7 +148,7 @@ async def rename_project_terminal(
     Returns:
         The updated terminal info
     """
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     if not validate_terminal_id(terminal_id):
@@ -180,7 +180,7 @@ async def delete_project_terminal(project_name: str, terminal_id: str) -> dict:
     Returns:
         Success message
     """
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
         raise HTTPException(status_code=400, detail="Invalid project name")

     if not validate_terminal_id(terminal_id):
@@ -221,8 +221,12 @@ async def terminal_websocket(websocket: WebSocket, project_name: str, terminal_i
     - {"type": "pong"} - Keep-alive response
     - {"type": "error", "message": "..."} - Error message
     """
+    # Always accept WebSocket first to avoid opaque 403 errors
+    await websocket.accept()
+
     # Validate project name
-    if not validate_project_name(project_name):
+    if not is_valid_project_name(project_name):
+        await websocket.send_json({"type": "error", "message": "Invalid project name"})
         await websocket.close(
             code=TerminalCloseCode.INVALID_PROJECT_NAME, reason="Invalid project name"
         )
@@ -230,6 +234,7 @@ async def terminal_websocket(websocket: WebSocket, project_name: str, terminal_i
     # Validate terminal ID
     if not validate_terminal_id(terminal_id):
+        await websocket.send_json({"type": "error", "message": "Invalid terminal ID"})
         await websocket.close(
             code=TerminalCloseCode.INVALID_PROJECT_NAME, reason="Invalid terminal ID"
         )
@@ -238,6 +243,7 @@ async def terminal_websocket(websocket: WebSocket, project_name: str, terminal_i
     # Look up project directory from registry
     project_dir = _get_project_path(project_name)
     if not project_dir:
+        await websocket.send_json({"type": "error", "message": "Project not found in registry"})
         await websocket.close(
             code=TerminalCloseCode.PROJECT_NOT_FOUND,
             reason="Project not found in registry",
@@ -245,6 +251,7 @@ async def terminal_websocket(websocket: WebSocket, project_name: str, terminal_i
         return

     if not project_dir.exists():
+        await websocket.send_json({"type": "error", "message": "Project directory not found"})
         await websocket.close(
             code=TerminalCloseCode.PROJECT_NOT_FOUND,
             reason="Project directory not found",
@@ -254,14 +261,13 @@ async def terminal_websocket(websocket: WebSocket, project_name: str, terminal_i
     # Verify terminal exists in metadata
     terminal_info = get_terminal_info(project_name, terminal_id)
     if not terminal_info:
+        await websocket.send_json({"type": "error", "message": "Terminal not found"})
         await websocket.close(
             code=TerminalCloseCode.PROJECT_NOT_FOUND,
             reason="Terminal not found",
         )
         return

-    await websocket.accept()
     # Get or create terminal session for this project/terminal
     session = get_terminal_session(project_name, project_dir, terminal_id)


@@ -391,15 +391,35 @@ class ModelInfo(BaseModel):
     name: str

+class ProviderInfo(BaseModel):
+    """Information about an API provider."""
+    id: str
+    name: str
+    base_url: str | None = None
+    models: list[ModelInfo]
+    default_model: str
+    requires_auth: bool = False
+
+
+class ProvidersResponse(BaseModel):
+    """Response schema for available providers list."""
+    providers: list[ProviderInfo]
+    current: str
+
+
 class SettingsResponse(BaseModel):
     """Response schema for global settings."""
     yolo_mode: bool = False
     model: str = DEFAULT_MODEL
-    glm_mode: bool = False  # True if GLM API is configured via .env
-    ollama_mode: bool = False  # True if Ollama API is configured via .env
+    glm_mode: bool = False  # True when api_provider is "glm"
+    ollama_mode: bool = False  # True when api_provider is "ollama"
     testing_agent_ratio: int = 1  # Regression testing agents (0-3)
     playwright_headless: bool = True
     batch_size: int = 3  # Features per coding agent batch (1-3)
+    api_provider: str = "claude"
+    api_base_url: str | None = None
+    api_has_auth_token: bool = False  # Never expose actual token
+    api_model: str | None = None

 class ModelsResponse(BaseModel):
@@ -415,12 +435,30 @@ class SettingsUpdate(BaseModel):
     testing_agent_ratio: int | None = None  # 0-3
     playwright_headless: bool | None = None
     batch_size: int | None = None  # Features per agent batch (1-3)
+    api_provider: str | None = None
+    api_base_url: str | None = Field(None, max_length=500)
+    api_auth_token: str | None = Field(None, max_length=500)  # Write-only, never returned
+    api_model: str | None = Field(None, max_length=200)
+
+    @field_validator('api_base_url')
+    @classmethod
+    def validate_api_base_url(cls, v: str | None) -> str | None:
+        if v is not None and v.strip():
+            v = v.strip()
+            if not v.startswith(("http://", "https://")):
+                raise ValueError("api_base_url must start with http:// or https://")
+        return v

     @field_validator('model')
     @classmethod
-    def validate_model(cls, v: str | None) -> str | None:
-        if v is not None and v not in VALID_MODELS:
-            raise ValueError(f"Invalid model. Must be one of: {VALID_MODELS}")
+    def validate_model(cls, v: str | None, info) -> str | None:  # type: ignore[override]
+        if v is not None:
+            # Skip VALID_MODELS check when using an alternative API provider
+            api_provider = info.data.get("api_provider")
+            if api_provider and api_provider != "claude":
+                return v
+            if v not in VALID_MODELS:
+                raise ValueError(f"Invalid model. Must be one of: {VALID_MODELS}")
         return v

     @field_validator('testing_agent_ratio')

@@ -25,7 +25,7 @@ from .assistant_database import (
     create_conversation,
     get_messages,
 )
-from .chat_constants import API_ENV_VARS, ROOT_DIR
+from .chat_constants import ROOT_DIR

 # Load environment variables from .env file if present
 load_dotenv()
@@ -157,7 +157,7 @@ class AssistantChatSession:
     """
     Manages a read-only assistant conversation for a project.

-    Uses Claude Opus 4.5 with only read-only tools enabled.
+    Uses Claude Opus with only read-only tools enabled.
     Persists conversation history to SQLite.
     """
@@ -258,15 +258,11 @@ class AssistantChatSession:
         system_cli = shutil.which("claude")

         # Build environment overrides for API configuration
-        sdk_env: dict[str, str] = {}
-        for var in API_ENV_VARS:
-            value = os.getenv(var)
-            if value:
-                sdk_env[var] = value
+        from registry import DEFAULT_MODEL, get_effective_sdk_env
+        sdk_env = get_effective_sdk_env()

-        # Determine model from environment or use default
-        # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names
-        model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", "claude-opus-4-5-20251101")
+        # Determine model from SDK env (provider-aware) or fallback to env/default
+        model = sdk_env.get("ANTHROPIC_DEFAULT_OPUS_MODEL") or os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", DEFAULT_MODEL)

         try:
             logger.info("Creating ClaudeSDKClient...")

@@ -22,7 +22,7 @@ from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
 from dotenv import load_dotenv

 from ..schemas import ImageAttachment
-from .chat_constants import API_ENV_VARS, ROOT_DIR, make_multimodal_message
+from .chat_constants import ROOT_DIR, make_multimodal_message

 # Load environment variables from .env file if present
 load_dotenv()
@@ -154,16 +154,11 @@ class ExpandChatSession:
         system_prompt = skill_content.replace("$ARGUMENTS", project_path)

         # Build environment overrides for API configuration
-        # Filter to only include vars that are actually set (non-None)
-        sdk_env: dict[str, str] = {}
-        for var in API_ENV_VARS:
-            value = os.getenv(var)
-            if value:
-                sdk_env[var] = value
+        from registry import DEFAULT_MODEL, get_effective_sdk_env
+        sdk_env = get_effective_sdk_env()

-        # Determine model from environment or use default
-        # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names
-        model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", "claude-opus-4-5-20251101")
+        # Determine model from SDK env (provider-aware) or fallback to env/default
+        model = sdk_env.get("ANTHROPIC_DEFAULT_OPUS_MODEL") or os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", DEFAULT_MODEL)

         # Build MCP servers config for feature creation
         mcp_servers = {


@@ -227,6 +227,46 @@ class AgentProcessManager:
         """Remove lock file."""
         self.lock_file.unlink(missing_ok=True)

+    def _cleanup_stale_features(self) -> None:
+        """Clear in_progress flag for all features when agent stops/crashes.
+
+        When the agent process exits (normally or crash), any features left
+        with in_progress=True were being worked on and didn't complete.
+        Reset them so they can be picked up on next agent start.
+        """
+        try:
+            from autoforge_paths import get_features_db_path
+            features_db = get_features_db_path(self.project_dir)
+            if not features_db.exists():
+                return
+
+            from sqlalchemy import create_engine
+            from sqlalchemy.orm import sessionmaker
+
+            from api.database import Feature
+
+            engine = create_engine(f"sqlite:///{features_db}")
+            Session = sessionmaker(bind=engine)
+            session = Session()
+            try:
+                stuck = session.query(Feature).filter(
+                    Feature.in_progress == True,  # noqa: E712
+                    Feature.passes == False,  # noqa: E712
+                ).all()
+                if stuck:
+                    for f in stuck:
+                        f.in_progress = False
+                    session.commit()
+                    logger.info(
+                        "Cleaned up %d stuck feature(s) for %s",
+                        len(stuck), self.project_name,
+                    )
+            finally:
+                session.close()
+                engine.dispose()
+        except Exception as e:
+            logger.warning("Failed to cleanup features for %s: %s", self.project_name, e)
+
     async def _broadcast_output(self, line: str) -> None:
         """Broadcast output line to all registered callbacks."""
         with self._callbacks_lock:
@@ -288,6 +328,7 @@ class AgentProcessManager:
             self.status = "crashed"
         elif self.status == "running":
             self.status = "stopped"
+        self._cleanup_stale_features()
         self._remove_lock()

     async def start(
@@ -305,7 +346,7 @@
         Args:
             yolo_mode: If True, run in YOLO mode (skip testing agents)
-            model: Model to use (e.g., claude-opus-4-5-20251101)
+            model: Model to use (e.g., claude-opus-4-6)
             parallel_mode: DEPRECATED - ignored, always uses unified orchestrator
             max_concurrency: Max concurrent coding agents (1-5, default 1)
             testing_agent_ratio: Number of regression testing agents (0-3, default 1)
@@ -320,6 +361,9 @@
         if not self._check_lock():
             return False, "Another agent instance is already running for this project"

+        # Clean up features stuck from a previous crash/stop
+        self._cleanup_stale_features()
+
         # Store for status queries
         self.yolo_mode = yolo_mode
         self.model = model
@@ -359,12 +403,22 @@
         # stdin=DEVNULL prevents blocking if Claude CLI or child process tries to read stdin
         # CREATE_NO_WINDOW on Windows prevents console window pop-ups
         # PYTHONUNBUFFERED ensures output isn't delayed
+        # Build subprocess environment with API provider settings
+        from registry import get_effective_sdk_env
+        api_env = get_effective_sdk_env()
+        subprocess_env = {
+            **os.environ,
+            "PYTHONUNBUFFERED": "1",
+            "PLAYWRIGHT_HEADLESS": "true" if playwright_headless else "false",
+            **api_env,
+        }
         popen_kwargs: dict[str, Any] = {
             "stdin": subprocess.DEVNULL,
             "stdout": subprocess.PIPE,
             "stderr": subprocess.STDOUT,
             "cwd": str(self.project_dir),
-            "env": {**os.environ, "PYTHONUNBUFFERED": "1", "PLAYWRIGHT_HEADLESS": "true" if playwright_headless else "false"},
+            "env": subprocess_env,
         }
         if sys.platform == "win32":
             popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW
@@ -425,6 +479,7 @@
                 result.children_terminated, result.children_killed
             )

+        self._cleanup_stale_features()
         self._remove_lock()
         self.status = "stopped"
         self.process = None
@@ -502,6 +557,7 @@
         if poll is not None:
             # Process has terminated
             if self.status in ("running", "paused"):
                self._cleanup_stale_features()
                 self.status = "crashed"
                 self._remove_lock()
             return False

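The `_cleanup_stale_features` method above does one logical thing: reset `in_progress` on features that never passed, so a crashed run doesn't leave work permanently claimed. The same effect can be sketched with stdlib `sqlite3` instead of SQLAlchemy (the `features` table and column names are assumed from the diff, not verified against the real schema):

```python
import os
import sqlite3
import tempfile

def cleanup_stale_features(db_path: str) -> int:
    """Reset in_progress on unfinished features; returns rows touched."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "UPDATE features SET in_progress = 0 "
            "WHERE in_progress = 1 AND passes = 0"
        )
        conn.commit()
        return cur.rowcount
    finally:
        conn.close()

# Tiny demo against a throwaway database
path = os.path.join(tempfile.mkdtemp(), "features.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, in_progress INT, passes INT)")
conn.executemany("INSERT INTO features (in_progress, passes) VALUES (?, ?)",
                 [(1, 0), (1, 1), (0, 0)])
conn.commit()
conn.close()
print(cleanup_stale_features(path))  # 1 -- only the stuck (in_progress, not passed) row
```

Filtering on `passes == False` as well as `in_progress == True` means completed features keep their state; only genuinely abandoned work is released.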

@@ -19,7 +19,7 @@ from claude_agent_sdk import ClaudeAgentOptions, ClaudeSDKClient
 from dotenv import load_dotenv

 from ..schemas import ImageAttachment
-from .chat_constants import API_ENV_VARS, ROOT_DIR, make_multimodal_message
+from .chat_constants import ROOT_DIR, make_multimodal_message

 # Load environment variables from .env file if present
 load_dotenv()
@@ -140,16 +140,11 @@ class SpecChatSession:
         system_cli = shutil.which("claude")

         # Build environment overrides for API configuration
-        # Filter to only include vars that are actually set (non-None)
-        sdk_env: dict[str, str] = {}
-        for var in API_ENV_VARS:
-            value = os.getenv(var)
-            if value:
-                sdk_env[var] = value
+        from registry import DEFAULT_MODEL, get_effective_sdk_env
+        sdk_env = get_effective_sdk_env()

-        # Determine model from environment or use default
-        # This allows using alternative APIs (e.g., GLM via z.ai) that may not support Claude model names
-        model = os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", "claude-opus-4-5-20251101")
+        # Determine model from SDK env (provider-aware) or fallback to env/default
+        model = sdk_env.get("ANTHROPIC_DEFAULT_OPUS_MODEL") or os.getenv("ANTHROPIC_DEFAULT_OPUS_MODEL", DEFAULT_MODEL)

         try:
             self.client = ClaudeSDKClient(

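Both the chat sessions and the agent subprocess now merge provider settings over the inherited environment. The precedence is decided purely by dict-unpacking order: later `**` entries win, so `api_env` (the Settings-UI-derived values) overrides anything inherited from the shell. A sketch of that merge (function name is illustrative):

```python
import os

def build_subprocess_env(api_env: dict, playwright_headless: bool) -> dict:
    """Later ** entries win: provider settings override the inherited
    environment and the fixed flags, matching the diff's merge order."""
    return {
        **os.environ,
        "PYTHONUNBUFFERED": "1",
        "PLAYWRIGHT_HEADLESS": "true" if playwright_headless else "false",
        **api_env,
    }

env = build_subprocess_env({"ANTHROPIC_BASE_URL": "http://localhost:11434"}, True)
print(env["ANTHROPIC_BASE_URL"])  # http://localhost:11434
```

Putting `**api_env` last is the important detail: a stale `ANTHROPIC_BASE_URL` exported in the user's shell can no longer shadow the provider chosen in the Settings UI.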

@@ -640,9 +640,7 @@ class ConnectionManager:
         self._lock = asyncio.Lock()

     async def connect(self, websocket: WebSocket, project_name: str):
-        """Accept a WebSocket connection for a project."""
-        await websocket.accept()
+        """Register a WebSocket connection for a project (must already be accepted)."""
         async with self._lock:
             if project_name not in self.active_connections:
                 self.active_connections[project_name] = set()
@@ -727,16 +725,22 @@ async def project_websocket(websocket: WebSocket, project_name: str):
     - Agent status changes
     - Agent stdout/stderr lines
     """
+    # Always accept WebSocket first to avoid opaque 403 errors
+    await websocket.accept()
+
     if not validate_project_name(project_name):
+        await websocket.send_json({"type": "error", "content": "Invalid project name"})
         await websocket.close(code=4000, reason="Invalid project name")
         return

     project_dir = _get_project_path(project_name)
     if not project_dir:
+        await websocket.send_json({"type": "error", "content": "Project not found in registry"})
         await websocket.close(code=4004, reason="Project not found in registry")
         return

     if not project_dir.exists():
+        await websocket.send_json({"type": "error", "content": "Project directory not found"})
         await websocket.close(code=4004, reason="Project directory not found")
         return
@@ -879,8 +883,7 @@ async def project_websocket(websocket: WebSocket, project_name: str):
                 break
         except json.JSONDecodeError:
             logger.warning(f"Invalid JSON from WebSocket: {data[:100] if data else 'empty'}")
-        except Exception as e:
-            logger.warning(f"WebSocket error: {e}")
+        except Exception:
             break
     finally:


@@ -40,15 +40,15 @@ class TestConvertModelForVertex(unittest.TestCase):
def test_returns_model_unchanged_when_vertex_disabled(self): def test_returns_model_unchanged_when_vertex_disabled(self):
os.environ.pop("CLAUDE_CODE_USE_VERTEX", None) os.environ.pop("CLAUDE_CODE_USE_VERTEX", None)
self.assertEqual( self.assertEqual(
convert_model_for_vertex("claude-opus-4-5-20251101"), convert_model_for_vertex("claude-opus-4-6"),
"claude-opus-4-5-20251101", "claude-opus-4-6",
) )
def test_returns_model_unchanged_when_vertex_set_to_zero(self): def test_returns_model_unchanged_when_vertex_set_to_zero(self):
         os.environ["CLAUDE_CODE_USE_VERTEX"] = "0"
         self.assertEqual(
-            convert_model_for_vertex("claude-opus-4-5-20251101"),
-            "claude-opus-4-5-20251101",
+            convert_model_for_vertex("claude-opus-4-6"),
+            "claude-opus-4-6",
         )

     def test_returns_model_unchanged_when_vertex_set_to_empty(self):

@@ -60,13 +60,20 @@ class TestConvertModelForVertex(unittest.TestCase):
     # --- Vertex AI enabled: standard conversions ---

-    def test_converts_opus_model(self):
+    def test_converts_legacy_opus_model(self):
         os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
         self.assertEqual(
             convert_model_for_vertex("claude-opus-4-5-20251101"),
             "claude-opus-4-5@20251101",
         )

+    def test_opus_4_6_passthrough_on_vertex(self):
+        os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
+        self.assertEqual(
+            convert_model_for_vertex("claude-opus-4-6"),
+            "claude-opus-4-6",
+        )
+
     def test_converts_sonnet_model(self):
         os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
         self.assertEqual(

@@ -86,8 +93,8 @@ class TestConvertModelForVertex(unittest.TestCase):
     def test_already_vertex_format_unchanged(self):
         os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
         self.assertEqual(
-            convert_model_for_vertex("claude-opus-4-5@20251101"),
-            "claude-opus-4-5@20251101",
+            convert_model_for_vertex("claude-sonnet-4-5@20250929"),
+            "claude-sonnet-4-5@20250929",
         )

     def test_non_claude_model_unchanged(self):

@@ -100,8 +107,8 @@ class TestConvertModelForVertex(unittest.TestCase):
     def test_model_without_date_suffix_unchanged(self):
         os.environ["CLAUDE_CODE_USE_VERTEX"] = "1"
         self.assertEqual(
-            convert_model_for_vertex("claude-opus-4-5"),
-            "claude-opus-4-5",
+            convert_model_for_vertex("claude-opus-4-6"),
+            "claude-opus-4-6",
         )

     def test_empty_string_unchanged(self):
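The rule these tests pin down can be sketched as follows. This is a hypothetical TypeScript reimplementation for illustration only (the real helper is the Python `convert_model_for_vertex`; the regex and ordering here are assumptions inferred from the test cases): when Vertex is enabled, a trailing `-YYYYMMDD` date suffix becomes `@YYYYMMDD`, and anything else, including dateless IDs like `claude-opus-4-6`, passes through unchanged.

```typescript
// Hypothetical sketch of the conversion rule exercised by the tests above.
function convertModelForVertex(model: string, useVertex: boolean): string {
  if (!useVertex) return model                    // Vertex disabled: passthrough
  if (!model.startsWith('claude-')) return model  // non-Claude models untouched
  if (model.includes('@')) return model           // already in Vertex format
  // Only IDs with a trailing -YYYYMMDD date suffix are rewritten;
  // dateless IDs such as "claude-opus-4-6" pass through unchanged.
  const m = model.match(/^(.*)-(\d{8})$/)
  return m ? `${m[1]}@${m[2]}` : model
}
```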

ui/package-lock.json (generated)

@@ -53,7 +53,7 @@
     },
     "..": {
       "name": "autoforge-ai",
-      "version": "0.1.0",
+      "version": "0.1.5",
       "license": "AGPL-3.0",
       "bin": {
         "autoforge": "bin/autoforge.js"


@@ -178,8 +178,8 @@ function App() {
       setShowAddFeature(true)
     }
-    // E : Expand project with AI (when project selected and has features)
-    if ((e.key === 'e' || e.key === 'E') && selectedProject && features &&
+    // E : Expand project with AI (when project selected, has spec and has features)
+    if ((e.key === 'e' || e.key === 'E') && selectedProject && hasSpec && features &&
         (features.pending.length + features.in_progress.length + features.done.length) > 0) {
       e.preventDefault()
       setShowExpandProject(true)

@@ -239,7 +239,7 @@ function App() {
     window.addEventListener('keydown', handleKeyDown)
     return () => window.removeEventListener('keydown', handleKeyDown)
-  }, [selectedProject, showAddFeature, showExpandProject, selectedFeature, debugOpen, debugActiveTab, assistantOpen, features, showSettings, showKeyboardHelp, isSpecCreating, viewMode, showResetModal, wsState.agentStatus])
+  }, [selectedProject, showAddFeature, showExpandProject, selectedFeature, debugOpen, debugActiveTab, assistantOpen, features, showSettings, showKeyboardHelp, isSpecCreating, viewMode, showResetModal, wsState.agentStatus, hasSpec])

   // Combine WebSocket progress with feature data
   const progress = wsState.progress.total > 0 ? wsState.progress : {

@@ -319,7 +319,7 @@ function App() {
   {settings?.ollama_mode && (
     <div
       className="flex items-center gap-1.5 px-2 py-1 bg-card rounded border-2 border-border shadow-sm"
-      title="Using Ollama local models (configured via .env)"
+      title="Using Ollama local models"
     >
       <img src="/ollama.png" alt="Ollama" className="w-5 h-5" />
       <span className="text-xs font-bold text-foreground">Ollama</span>

@@ -330,7 +330,7 @@ function App() {
   {settings?.glm_mode && (
     <Badge
       className="bg-purple-500 text-white hover:bg-purple-600"
-      title="Using GLM API (configured via .env)"
+      title="Using GLM API"
     >
       GLM
     </Badge>

@@ -490,7 +490,7 @@ function App() {
   )}
   {/* Expand Project Modal - AI-powered bulk feature creation */}
-  {showExpandProject && selectedProject && (
+  {showExpandProject && selectedProject && hasSpec && (
     <ExpandProjectModal
       isOpen={showExpandProject}
      projectName={selectedProject}


@@ -51,7 +51,7 @@ export function KanbanBoard({ features, onFeatureClick, onAddFeature, onExpandPr
   onFeatureClick={onFeatureClick}
   onAddFeature={onAddFeature}
   onExpandProject={onExpandProject}
-  showExpandButton={hasFeatures}
+  showExpandButton={hasFeatures && hasSpec}
   onCreateSpec={onCreateSpec}
   showCreateSpec={!hasSpec && !hasFeatures}
 />


@@ -19,7 +19,7 @@ const shortcuts: Shortcut[] = [
   { key: 'D', description: 'Toggle debug panel' },
   { key: 'T', description: 'Toggle terminal tab' },
   { key: 'N', description: 'Add new feature', context: 'with project' },
-  { key: 'E', description: 'Expand project with AI', context: 'with features' },
+  { key: 'E', description: 'Expand project with AI', context: 'with spec & features' },
   { key: 'A', description: 'Toggle AI assistant', context: 'with project' },
   { key: 'G', description: 'Toggle Kanban/Graph view', context: 'with project' },
   { key: ',', description: 'Open settings' },


@@ -1,6 +1,8 @@
-import { Loader2, AlertCircle, Check, Moon, Sun } from 'lucide-react'
-import { useSettings, useUpdateSettings, useAvailableModels } from '../hooks/useProjects'
+import { useState } from 'react'
+import { Loader2, AlertCircle, Check, Moon, Sun, Eye, EyeOff, ShieldCheck } from 'lucide-react'
+import { useSettings, useUpdateSettings, useAvailableModels, useAvailableProviders } from '../hooks/useProjects'
 import { useTheme, THEMES } from '../hooks/useTheme'
+import type { ProviderInfo } from '../lib/types'
 import {
   Dialog,
   DialogContent,
@@ -17,12 +19,26 @@ interface SettingsModalProps {
   onClose: () => void
 }

+const PROVIDER_INFO_TEXT: Record<string, string> = {
+  claude: 'Default provider. Uses your Claude CLI credentials.',
+  kimi: 'Get an API key at kimi.com',
+  glm: 'Get an API key at open.bigmodel.cn',
+  ollama: 'Run models locally. Install from ollama.com',
+  custom: 'Connect to any OpenAI-compatible API endpoint.',
+}
+
 export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
   const { data: settings, isLoading, isError, refetch } = useSettings()
   const { data: modelsData } = useAvailableModels()
+  const { data: providersData } = useAvailableProviders()
   const updateSettings = useUpdateSettings()
   const { theme, setTheme, darkMode, toggleDarkMode } = useTheme()
+  const [showAuthToken, setShowAuthToken] = useState(false)
+  const [authTokenInput, setAuthTokenInput] = useState('')
+  const [customModelInput, setCustomModelInput] = useState('')
+  const [customBaseUrlInput, setCustomBaseUrlInput] = useState('')

   const handleYoloToggle = () => {
     if (settings && !updateSettings.isPending) {
       updateSettings.mutate({ yolo_mode: !settings.yolo_mode })
@@ -31,7 +47,7 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
   const handleModelChange = (modelId: string) => {
     if (!updateSettings.isPending) {
-      updateSettings.mutate({ model: modelId })
+      updateSettings.mutate({ api_model: modelId })
     }
   }
@@ -47,12 +63,51 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
     }
   }

+  const handleProviderChange = (providerId: string) => {
+    if (!updateSettings.isPending) {
+      updateSettings.mutate({ api_provider: providerId })
+      // Reset local state
+      setAuthTokenInput('')
+      setShowAuthToken(false)
+      setCustomModelInput('')
+      setCustomBaseUrlInput('')
+    }
+  }
+
+  const handleSaveAuthToken = () => {
+    if (authTokenInput.trim() && !updateSettings.isPending) {
+      updateSettings.mutate({ api_auth_token: authTokenInput.trim() })
+      setAuthTokenInput('')
+      setShowAuthToken(false)
+    }
+  }
+
+  const handleSaveCustomBaseUrl = () => {
+    if (customBaseUrlInput.trim() && !updateSettings.isPending) {
+      updateSettings.mutate({ api_base_url: customBaseUrlInput.trim() })
+    }
+  }
+
+  const handleSaveCustomModel = () => {
+    if (customModelInput.trim() && !updateSettings.isPending) {
+      updateSettings.mutate({ api_model: customModelInput.trim() })
+      setCustomModelInput('')
+    }
+  }
+
+  const providers = providersData?.providers ?? []
   const models = modelsData?.models ?? []
   const isSaving = updateSettings.isPending
+  const currentProvider = settings?.api_provider ?? 'claude'
+  const currentProviderInfo: ProviderInfo | undefined = providers.find(p => p.id === currentProvider)
+  const isAlternativeProvider = currentProvider !== 'claude'
+  const showAuthField = isAlternativeProvider && currentProviderInfo?.requires_auth
+  const showBaseUrlField = currentProvider === 'custom'
+  const showCustomModelInput = currentProvider === 'custom' || currentProvider === 'ollama'

   return (
     <Dialog open={isOpen} onOpenChange={(open) => !open && onClose()}>
-      <DialogContent className="sm:max-w-sm">
+      <DialogContent aria-describedby={undefined} className="sm:max-w-sm max-h-[85vh] overflow-y-auto">
         <DialogHeader>
           <DialogTitle className="flex items-center gap-2">
             Settings
@@ -159,6 +214,147 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
   <hr className="border-border" />

+  {/* API Provider Selection */}
+  <div className="space-y-3">
+    <Label className="font-medium">API Provider</Label>
+    <div className="flex flex-wrap gap-1.5">
+      {providers.map((provider) => (
+        <button
+          key={provider.id}
+          onClick={() => handleProviderChange(provider.id)}
+          disabled={isSaving}
+          className={`py-1.5 px-3 text-sm font-medium rounded-md border transition-colors ${
+            currentProvider === provider.id
+              ? 'bg-primary text-primary-foreground border-primary'
+              : 'bg-background text-foreground border-border hover:bg-muted'
+          } ${isSaving ? 'opacity-50 cursor-not-allowed' : ''}`}
+        >
+          {provider.name.split(' (')[0]}
+        </button>
+      ))}
+    </div>
+    <p className="text-xs text-muted-foreground">
+      {PROVIDER_INFO_TEXT[currentProvider] ?? ''}
+    </p>
+
+    {/* Auth Token Field */}
+    {showAuthField && (
+      <div className="space-y-2 pt-1">
+        <Label className="text-sm">API Key</Label>
+        {settings.api_has_auth_token && !authTokenInput && (
+          <div className="flex items-center gap-2 text-sm text-muted-foreground">
+            <ShieldCheck size={14} className="text-green-500" />
+            <span>Configured</span>
+            <Button
+              variant="ghost"
+              size="sm"
+              className="h-auto py-0.5 px-2 text-xs"
+              onClick={() => setAuthTokenInput(' ')}
+            >
+              Change
+            </Button>
+          </div>
+        )}
+        {(!settings.api_has_auth_token || authTokenInput) && (
+          <div className="flex gap-2">
+            <div className="relative flex-1">
+              <input
+                type={showAuthToken ? 'text' : 'password'}
+                value={authTokenInput.trim()}
+                onChange={(e) => setAuthTokenInput(e.target.value)}
+                placeholder="Enter API key..."
+                className="w-full py-1.5 px-3 pe-9 text-sm border rounded-md bg-background"
+              />
+              <button
+                type="button"
+                onClick={() => setShowAuthToken(!showAuthToken)}
+                className="absolute end-2 top-1/2 -translate-y-1/2 text-muted-foreground hover:text-foreground"
+              >
+                {showAuthToken ? <EyeOff size={14} /> : <Eye size={14} />}
+              </button>
+            </div>
+            <Button
+              size="sm"
+              onClick={handleSaveAuthToken}
+              disabled={!authTokenInput.trim() || isSaving}
+            >
+              Save
+            </Button>
+          </div>
+        )}
+      </div>
+    )}
+
+    {/* Custom Base URL Field */}
+    {showBaseUrlField && (
+      <div className="space-y-2 pt-1">
+        <Label className="text-sm">Base URL</Label>
+        <div className="flex gap-2">
+          <input
+            type="text"
+            value={customBaseUrlInput || settings.api_base_url || ''}
+            onChange={(e) => setCustomBaseUrlInput(e.target.value)}
+            placeholder="https://api.example.com/v1"
+            className="flex-1 py-1.5 px-3 text-sm border rounded-md bg-background"
+          />
+          <Button
+            size="sm"
+            onClick={handleSaveCustomBaseUrl}
+            disabled={!customBaseUrlInput.trim() || isSaving}
+          >
+            Save
+          </Button>
+        </div>
+      </div>
+    )}
+  </div>
+
+  {/* Model Selection */}
+  <div className="space-y-2">
+    <Label className="font-medium">Model</Label>
+    {models.length > 0 && (
+      <div className="flex rounded-lg border overflow-hidden">
+        {models.map((model) => (
+          <button
+            key={model.id}
+            onClick={() => handleModelChange(model.id)}
+            disabled={isSaving}
+            className={`flex-1 py-2 px-3 text-sm font-medium transition-colors ${
+              (settings.api_model ?? settings.model) === model.id
+                ? 'bg-primary text-primary-foreground'
+                : 'bg-background text-foreground hover:bg-muted'
+            } ${isSaving ? 'opacity-50 cursor-not-allowed' : ''}`}
+          >
+            <span className="block">{model.name}</span>
+            <span className="block text-xs opacity-60">{model.id}</span>
+          </button>
+        ))}
+      </div>
+    )}
+
+    {/* Custom model input for Ollama/Custom */}
+    {showCustomModelInput && (
+      <div className="flex gap-2 pt-1">
+        <input
+          type="text"
+          value={customModelInput}
+          onChange={(e) => setCustomModelInput(e.target.value)}
+          placeholder="Custom model name..."
+          className="flex-1 py-1.5 px-3 text-sm border rounded-md bg-background"
+          onKeyDown={(e) => e.key === 'Enter' && handleSaveCustomModel()}
+        />
+        <Button
+          size="sm"
+          onClick={handleSaveCustomModel}
+          disabled={!customModelInput.trim() || isSaving}
+        >
+          Set
+        </Button>
+      </div>
+    )}
+  </div>
+
+  <hr className="border-border" />
+
   {/* YOLO Mode Toggle */}
   <div className="flex items-center justify-between">
     <div className="space-y-0.5">

@@ -195,27 +391,6 @@ export function SettingsModal({ isOpen, onClose }: SettingsModalProps) {
     />
   </div>

-  {/* Model Selection */}
-  <div className="space-y-2">
-    <Label className="font-medium">Model</Label>
-    <div className="flex rounded-lg border overflow-hidden">
-      {models.map((model) => (
-        <button
-          key={model.id}
-          onClick={() => handleModelChange(model.id)}
-          disabled={isSaving}
-          className={`flex-1 py-2 px-3 text-sm font-medium transition-colors ${
-            settings.model === model.id
-              ? 'bg-primary text-primary-foreground'
-              : 'bg-background text-foreground hover:bg-muted'
-          } ${isSaving ? 'opacity-50 cursor-not-allowed' : ''}`}
-        >
-          {model.name}
-        </button>
-      ))}
-    </div>
-  </div>

   {/* Regression Agents */}
   <div className="space-y-2">
     <Label className="font-medium">Regression Agents</Label>


@@ -107,16 +107,20 @@ export function useExpandChat({
     }, 30000)
   }

-  ws.onclose = () => {
+  ws.onclose = (event) => {
     setConnectionStatus('disconnected')
     if (pingIntervalRef.current) {
       clearInterval(pingIntervalRef.current)
       pingIntervalRef.current = null
     }
+    // Don't retry on application-level errors (4xxx codes won't resolve on retry)
+    const isAppError = event.code >= 4000 && event.code <= 4999
     // Attempt reconnection if not intentionally closed
     if (
       !manuallyDisconnectedRef.current &&
+      !isAppError &&
       reconnectAttempts.current < maxReconnectAttempts &&
       !isCompleteRef.current
     ) {


@@ -4,7 +4,7 @@
 import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query'
 import * as api from '../lib/api'
-import type { FeatureCreate, FeatureUpdate, ModelsResponse, ProjectSettingsUpdate, Settings, SettingsUpdate } from '../lib/types'
+import type { FeatureCreate, FeatureUpdate, ModelsResponse, ProjectSettingsUpdate, ProvidersResponse, Settings, SettingsUpdate } from '../lib/types'

 // ============================================================================
 // Projects

@@ -254,20 +254,41 @@ export function useValidatePath() {
 // Default models response for placeholder (until API responds)
 const DEFAULT_MODELS: ModelsResponse = {
   models: [
-    { id: 'claude-opus-4-5-20251101', name: 'Claude Opus 4.5' },
-    { id: 'claude-sonnet-4-5-20250929', name: 'Claude Sonnet 4.5' },
+    { id: 'claude-opus-4-6', name: 'Claude Opus' },
+    { id: 'claude-sonnet-4-5-20250929', name: 'Claude Sonnet' },
   ],
-  default: 'claude-opus-4-5-20251101',
+  default: 'claude-opus-4-6',
 }

 const DEFAULT_SETTINGS: Settings = {
   yolo_mode: false,
-  model: 'claude-opus-4-5-20251101',
+  model: 'claude-opus-4-6',
   glm_mode: false,
   ollama_mode: false,
   testing_agent_ratio: 1,
   playwright_headless: true,
   batch_size: 3,
+  api_provider: 'claude',
+  api_base_url: null,
+  api_has_auth_token: false,
+  api_model: null,
 }

+const DEFAULT_PROVIDERS: ProvidersResponse = {
+  providers: [
+    { id: 'claude', name: 'Claude (Anthropic)', base_url: null, models: DEFAULT_MODELS.models, default_model: 'claude-opus-4-6', requires_auth: false },
+  ],
+  current: 'claude',
+}
+
+export function useAvailableProviders() {
+  return useQuery({
+    queryKey: ['available-providers'],
+    queryFn: api.getAvailableProviders,
+    staleTime: 300000,
+    retry: 1,
+    placeholderData: DEFAULT_PROVIDERS,
+  })
+}

 export function useAvailableModels() {

@@ -319,6 +340,8 @@ export function useUpdateSettings() {
     },
     onSettled: () => {
       queryClient.invalidateQueries({ queryKey: ['settings'] })
+      queryClient.invalidateQueries({ queryKey: ['available-models'] })
+      queryClient.invalidateQueries({ queryKey: ['available-providers'] })
     },
   })
 }


@@ -157,15 +157,18 @@ export function useSpecChat({
     }, 30000)
   }

-  ws.onclose = () => {
+  ws.onclose = (event) => {
     setConnectionStatus('disconnected')
     if (pingIntervalRef.current) {
       clearInterval(pingIntervalRef.current)
       pingIntervalRef.current = null
     }
+    // Don't retry on application-level errors (4xxx codes won't resolve on retry)
+    const isAppError = event.code >= 4000 && event.code <= 4999
     // Attempt reconnection if not intentionally closed
-    if (reconnectAttempts.current < maxReconnectAttempts && !isCompleteRef.current) {
+    if (!isAppError && reconnectAttempts.current < maxReconnectAttempts && !isCompleteRef.current) {
       reconnectAttempts.current++
       const delay = Math.min(1000 * Math.pow(2, reconnectAttempts.current), 10000)
       reconnectTimeoutRef.current = window.setTimeout(connect, delay)


@@ -335,10 +335,14 @@ export function useProjectWebSocket(projectName: string | null) {
     }
   }

-  ws.onclose = () => {
+  ws.onclose = (event) => {
     setState(prev => ({ ...prev, isConnected: false }))
     wsRef.current = null
+    // Don't retry on application-level errors (4xxx codes won't resolve on retry)
+    const isAppError = event.code >= 4000 && event.code <= 4999
+    if (isAppError) return
     // Exponential backoff reconnection
     const delay = Math.min(1000 * Math.pow(2, reconnectAttempts.current), 30000)
    reconnectAttempts.current++
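The reconnect delay used across these hooks follows the same capped exponential-backoff formula: the delay doubles per attempt and is clamped to a ceiling (10 s in the chat hooks, 30 s here). A minimal sketch of that schedule, with `backoffDelay` a hypothetical helper name for illustration:

```typescript
// Capped exponential backoff: 1s base, doubling per attempt, clamped to capMs.
function backoffDelay(attempt: number, capMs: number): number {
  return Math.min(1000 * Math.pow(2, attempt), capMs)
}
```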


@@ -24,6 +24,7 @@ import type {
   Settings,
   SettingsUpdate,
   ModelsResponse,
+  ProvidersResponse,
   DevServerStatusResponse,
   DevServerConfig,
   TerminalInfo,

@@ -399,6 +400,10 @@
 export async function getAvailableModels(): Promise<ModelsResponse> {
   return fetchJSON('/settings/models')
 }

+export async function getAvailableProviders(): Promise<ProvidersResponse> {
+  return fetchJSON('/settings/providers')
+}
+
 export async function getSettings(): Promise<Settings> {
   return fetchJSON('/settings')
 }


@@ -525,6 +525,20 @@ export interface ModelsResponse {
   default: string
 }

+export interface ProviderInfo {
+  id: string
+  name: string
+  base_url: string | null
+  models: ModelInfo[]
+  default_model: string
+  requires_auth: boolean
+}
+
+export interface ProvidersResponse {
+  providers: ProviderInfo[]
+  current: string
+}
+
 export interface Settings {
   yolo_mode: boolean
   model: string

@@ -533,6 +547,10 @@ export interface Settings {
   testing_agent_ratio: number // Regression testing agents (0-3)
   playwright_headless: boolean
   batch_size: number // Features per coding agent batch (1-3)
+  api_provider: string
+  api_base_url: string | null
+  api_has_auth_token: boolean
+  api_model: string | null
 }

 export interface SettingsUpdate {

@@ -541,6 +559,10 @@ export interface SettingsUpdate {
   testing_agent_ratio?: number
   playwright_headless?: boolean
   batch_size?: number
+  api_provider?: string
+  api_base_url?: string
+  api_auth_token?: string
+  api_model?: string
 }

 export interface ProjectSettingsUpdate {