Compare commits

...

13 Commits

Author SHA1 Message Date
Leon van Zyl
95c3cafecd Merge pull request #101 from cabana8471-arch/feat/extensible-pkill-processes
feat: allow extending pkill process names via config
2026-01-26 13:05:09 +02:00
Auto
f1c529e1a7 Merge branch 'master' of https://github.com/leonvanzyl/autonomous-coding-ui 2026-01-26 12:41:06 +02:00
Auto
fe5f58cf45 add a pr review command 2026-01-26 12:41:01 +02:00
Leon van Zyl
a437af7f96 Merge pull request #102 from cabana8471-arch/fix/websocket-project-isolation
fix: prevent cross-project UI contamination
2026-01-26 10:32:06 +02:00
Leon van Zyl
0ef6cf7d62 Merge pull request #103 from cabana8471-arch/feat/webui-remote-access
feat: add --host argument for WebUI remote access
2026-01-26 10:27:05 +02:00
Leon van Zyl
aa9e8b1ab7 Merge pull request #104 from leonvanzyl/ollama-support
add ollama support
2026-01-26 09:50:22 +02:00
Auto
2dc12061fa chore: remove duplicate asset and gitignore local settings
- Remove assets/ollama.png (duplicate of ui/public/ollama.png)
- Remove .claude/settings.local.json from tracking
- Add .claude/settings.local.json to .gitignore

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 09:49:21 +02:00
Auto
095d248a66 add ollama support 2026-01-26 09:42:01 +02:00
cabana8471
34b9b5f5b2 security: validate all pkill patterns for BSD compatibility
pkill on BSD systems accepts multiple pattern operands. Previous code
only validated args[-1], allowing disallowed processes to slip through
when combined with allowed ones (e.g., "pkill node sshd" would only
check "sshd").

Now validates every non-flag argument to ensure no disallowed process
can be targeted. Added tests for multiple pattern scenarios.

Addresses CodeRabbit feedback on PR #101.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 20:12:54 +01:00
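The bug and fix described in this commit can be sketched in a few lines (a simplified illustration of the validation logic, not the project's actual validator; `ALLOWED` and the function names here are stand-ins):

```python
import shlex

# Stand-in for the hardcoded allowlist named in the commit
ALLOWED = {"node", "npm", "npx", "vite", "next"}

def targets_of(command: str) -> list[str]:
    """Return every non-flag operand of a pkill command."""
    tokens = shlex.split(command)
    return [t for t in tokens[1:] if not t.startswith("-")]

def is_allowed(command: str) -> bool:
    # Validate EVERY pattern operand, not just the last one:
    # on BSD, `pkill node sshd` signals processes matching either pattern,
    # so checking only targets[-1] ("sshd" here... or "node" if reordered)
    # lets a disallowed process ride along with an allowed one.
    targets = targets_of(command)
    return bool(targets) and all(t in ALLOWED for t in targets)
```

With the all-operands check, `pkill node` passes while `pkill node sshd` is rejected outright.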
cabana8471
fed2516f08 security: validate pkill process names against safe character set
Address CodeRabbit security feedback - restrict pkill_processes entries
to alphanumeric names with dots, underscores, and hyphens only.

This prevents potential exploitation through regex metacharacters like
'.*' being registered as process names.

Changes:
- Added VALID_PROCESS_NAME_PATTERN regex constant
- Updated both org and project config validation to:
  - Normalize (trim whitespace) process names
  - Reject names with regex metacharacters
  - Reject names with spaces
- Added 3 new tests for regex validation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 16:34:56 +01:00
cabana8471
be20c8a3ef feat: add --host argument for WebUI remote access (#81)
Users can now access the WebUI remotely (e.g., via VS Code tunnels,
remote servers) by specifying a host address:

    python start_ui.py --host 0.0.0.0
    python start_ui.py --host 0.0.0.0 --port 8888

Changes:
- Added --host and --port CLI arguments to start_ui.py
- Security warning displayed when remote access is enabled
- AUTOCODER_ALLOW_REMOTE env var passed to server
- server/main.py conditionally disables localhost middleware
- CORS updated to allow all origins when remote access is enabled
- Browser auto-open disabled for remote hosts

Security considerations documented in warning:
- File system access to project directories
- API can start/stop agents and modify files
- Recommend firewall or VPN for protection

Fixes: leonvanzyl/autocoder#81

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 12:14:23 +01:00
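The CLI surface this commit adds can be sketched with argparse (flag names and defaults are taken from the commit message; the parser below is illustrative, not the launcher's actual code):

```python
import argparse

parser = argparse.ArgumentParser(description="AutoCoder UI Launcher")
parser.add_argument("--host", default="127.0.0.1",
                    help="Host to bind to; 0.0.0.0 enables remote access")
parser.add_argument("--port", type=int, default=8888)

# Simulate: python start_ui.py --host 0.0.0.0 --port 8888
args = parser.parse_args(["--host", "0.0.0.0", "--port", "8888"])

if args.host != "127.0.0.1":
    # Non-loopback host: warn and signal the server via the env var
    # named in the commit (AUTOCODER_ALLOW_REMOTE)
    print("SECURITY WARNING: UI will be reachable from other machines")
```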
cabana8471
32c7778ee5 fix: prevent cross-project UI contamination (#71)
When running multiple projects simultaneously, UI would show mixed data
because the manager registry used only project_name as key. Projects with
the same name but different paths shared the same manager instance.

Changed manager registries to use composite key (project_name, resolved_path):
- server/services/process_manager.py: AgentProcessManager registry
- server/services/dev_server_manager.py: DevServerProcessManager registry

This ensures that:
- /old/my-app and /new/my-app get separate managers
- Multiple browser tabs viewing different projects stay isolated
- Project renames don't cause callback contamination

Fixes: leonvanzyl/autocoder#71
Also fixes: leonvanzyl/autocoder#62 (progress bar sync)
Also fixes: leonvanzyl/autocoder#61 (features missing in kanban)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 12:12:38 +01:00
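The composite-key fix can be sketched as a small registry (a simplified model of the pattern described above; `Manager` is a stand-in for the real manager classes):

```python
import threading
from pathlib import Path

class Manager:
    """Stand-in for AgentProcessManager / DevServerProcessManager."""
    def __init__(self, name: str, path: Path):
        self.name, self.path = name, path

_managers: dict[tuple[str, str], Manager] = {}
_lock = threading.Lock()

def get_manager(name: str, path: Path) -> Manager:
    with _lock:
        # (name, resolved path) keeps /old/my-app and /new/my-app separate,
        # where a name-only key would hand both the same instance
        key = (name, str(path.resolve()))
        if key not in _managers:
            _managers[key] = Manager(name, path)
        return _managers[key]
```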
cabana8471
dbbc7d5ce5 feat: allow extending pkill process names via config (#85)
Previously, pkill was limited to a hardcoded set of process names
(node, npm, npx, vite, next). Users building Python/Ruby/Go apps
couldn't kill their dev servers.

Changes:
- Added pkill_processes config option to org config (~/.autocoder/config.yaml)
- Added pkill_processes config option to project config (.autocoder/allowed_commands.yaml)
- Modified validate_pkill_command() to accept extra_processes parameter
- Added get_effective_pkill_processes() to merge default + org + project processes
- Updated bash_security_hook to pass configured processes to validator

Example usage:
```yaml
# ~/.autocoder/config.yaml
version: 1
pkill_processes:
  - python
  - uvicorn
  - gunicorn
```

Fixes: leonvanzyl/autocoder#85

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 12:11:58 +01:00
17 changed files with 568 additions and 80 deletions

View File

@@ -0,0 +1,15 @@
---
description: Review pull requests
---
Pull request(s): $ARGUMENTS
- If no PR numbers are provided, ask the user to provide PR number(s).
- At least 1 PR is required.
## TASKS
- Use the GH CLI tool to retrieve the details (descriptions, diffs, comments, feedback, reviews, etc.)
- Use 3 deepdive subagents to analyze the impact on the codebase
- Provide a review on whether the PR is safe to merge as-is
- Provide any feedback in terms of risk level
- Propose any improvements, ranked by importance and complexity

View File

@@ -19,3 +19,20 @@
 # ANTHROPIC_DEFAULT_SONNET_MODEL=glm-4.7
 # ANTHROPIC_DEFAULT_OPUS_MODEL=glm-4.7
 # ANTHROPIC_DEFAULT_HAIKU_MODEL=glm-4.5-air
+
+# Ollama Local Model Configuration (Optional)
+# To use local models via Ollama instead of Claude, uncomment and set these variables.
+# Requires Ollama v0.14.0+ with Anthropic API compatibility.
+# See: https://ollama.com/blog/claude
+#
+# ANTHROPIC_BASE_URL=http://localhost:11434
+# ANTHROPIC_AUTH_TOKEN=ollama
+# API_TIMEOUT_MS=3000000
+# ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
+# ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
+# ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
+#
+# Model recommendations:
+# - For best results, use a capable coding model like qwen3-coder or deepseek-coder-v2
+# - You can use the same model for all tiers, or different models per tier
+# - Larger models (70B+) work best for Opus tier, smaller (7B-20B) for Haiku

.gitignore (vendored, 5 changes)
View File

@@ -76,6 +76,11 @@ ui/playwright-report/
 .dmypy.json
 dmypy.json

+# ===================
+# Claude Code
+# ===================
+.claude/settings.local.json
+
 # ===================
 # IDE / Editors
 # ===================

View File

@@ -256,6 +256,39 @@ python test_security_integration.py
 - `examples/README.md` - Comprehensive guide with use cases, testing, and troubleshooting
 - `PHASE3_SPEC.md` - Specification for mid-session approval feature (future enhancement)

+### Ollama Local Models (Optional)
+
+Run coding agents using local models via Ollama v0.14.0+:
+
+1. Install Ollama: https://ollama.com
+2. Start Ollama: `ollama serve`
+3. Pull a coding model: `ollama pull qwen3-coder`
+4. Configure `.env`:
+   ```
+   ANTHROPIC_BASE_URL=http://localhost:11434
+   ANTHROPIC_AUTH_TOKEN=ollama
+   API_TIMEOUT_MS=3000000
+   ANTHROPIC_DEFAULT_SONNET_MODEL=qwen3-coder
+   ANTHROPIC_DEFAULT_OPUS_MODEL=qwen3-coder
+   ANTHROPIC_DEFAULT_HAIKU_MODEL=qwen3-coder
+   ```
+5. Run autocoder normally - it will use your local Ollama models
+
+**Recommended coding models:**
+- `qwen3-coder` - Good balance of speed and capability
+- `deepseek-coder-v2` - Strong coding performance
+- `codellama` - Meta's code-focused model
+
+**Model tier mapping:**
+- Use the same model for all tiers, or map different models per capability level
+- Larger models (70B+) work best for Opus tier
+- Smaller models (7B-20B) work well for Haiku tier
+
+**Known limitations:**
+- Smaller context windows than Claude (model-dependent)
+- Extended context beta disabled (not supported by Ollama)
+- Performance depends on local hardware (GPU recommended)
+
 ## Claude Code Integration

 - `.claude/commands/create-spec.md` - `/create-spec` slash command for interactive spec creation

View File

@@ -257,9 +257,16 @@ def create_client(
         if value:
             sdk_env[var] = value

+    # Detect alternative API mode (Ollama or GLM)
+    base_url = sdk_env.get("ANTHROPIC_BASE_URL", "")
+    is_alternative_api = bool(base_url)
+    is_ollama = "localhost:11434" in base_url or "127.0.0.1:11434" in base_url
+
     if sdk_env:
         print(f"   - API overrides: {', '.join(sdk_env.keys())}")
-        if "ANTHROPIC_BASE_URL" in sdk_env:
+        if is_ollama:
+            print("   - Ollama Mode: Using local models")
+        elif "ANTHROPIC_BASE_URL" in sdk_env:
             print(f"   - GLM Mode: Using {sdk_env['ANTHROPIC_BASE_URL']}")

     # Create a wrapper for bash_security_hook that passes project_dir via context
@@ -336,7 +343,8 @@ def create_client(
         # Enable extended context beta for better handling of long sessions.
         # This provides up to 1M tokens of context with automatic compaction.
         # See: https://docs.anthropic.com/en/api/beta-headers
-        betas=["context-1m-2025-08-07"],
+        # Disabled for alternative APIs (Ollama, GLM) as they don't support Claude-specific betas.
+        betas=[] if is_alternative_api else ["context-1m-2025-08-07"],
         # Note on context management:
         # The Claude Agent SDK handles context management automatically through the
         # underlying Claude Code CLI. When context approaches limits, the CLI
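The mode detection in this hunk reduces to two small pure functions (a sketch under the assumption that only the base URL distinguishes the backends; function names are illustrative):

```python
def api_mode(base_url: str) -> str:
    """Classify an Anthropic-compatible endpoint by its base URL."""
    if "localhost:11434" in base_url or "127.0.0.1:11434" in base_url:
        return "ollama"  # local Ollama server
    return "glm" if base_url else "anthropic"

def betas_for(base_url: str) -> list[str]:
    # Claude-specific beta headers are only sent to the real Anthropic API;
    # any base-URL override (Ollama or GLM) gets an empty beta list
    return [] if base_url else ["context-1m-2025-08-07"]
```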

View File

@@ -7,12 +7,17 @@ Uses an allowlist approach - only explicitly permitted commands can run.
""" """
import os import os
import re
import shlex import shlex
from pathlib import Path from pathlib import Path
from typing import Optional from typing import Optional
import yaml import yaml
# Regex pattern for valid pkill process names (no regex metacharacters allowed)
# Matches alphanumeric names with dots, underscores, and hyphens
VALID_PROCESS_NAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]+$")
# Allowed commands for development tasks # Allowed commands for development tasks
# Minimal set needed for the autonomous coding demo # Minimal set needed for the autonomous coding demo
ALLOWED_COMMANDS = { ALLOWED_COMMANDS = {
@@ -219,23 +224,37 @@ def extract_commands(command_string: str) -> list[str]:
     return commands

+
+# Default pkill process names (hardcoded baseline, always available)
+DEFAULT_PKILL_PROCESSES = {
+    "node",
+    "npm",
+    "npx",
+    "vite",
+    "next",
+}
+
+
-def validate_pkill_command(command_string: str) -> tuple[bool, str]:
+def validate_pkill_command(
+    command_string: str,
+    extra_processes: Optional[set[str]] = None
+) -> tuple[bool, str]:
     """
     Validate pkill commands - only allow killing dev-related processes.

     Uses shlex to parse the command, avoiding regex bypass vulnerabilities.

+    Args:
+        command_string: The pkill command to validate
+        extra_processes: Optional set of additional process names to allow
+            (from org/project config pkill_processes)
+
     Returns:
         Tuple of (is_allowed, reason_if_blocked)
     """
-    # Allowed process names for pkill
-    allowed_process_names = {
-        "node",
-        "npm",
-        "npx",
-        "vite",
-        "next",
-    }
+    # Merge default processes with any extra configured processes
+    allowed_process_names = DEFAULT_PKILL_PROCESSES.copy()
+    if extra_processes:
+        allowed_process_names |= extra_processes

     try:
         tokens = shlex.split(command_string)
@@ -254,17 +273,19 @@ def validate_pkill_command(command_string: str) -> tuple[bool, str]:
     if not args:
         return False, "pkill requires a process name"

-    # The target is typically the last non-flag argument
-    target = args[-1]
-
-    # For -f flag (full command line match), extract the first word as process name
-    # e.g., "pkill -f 'node server.js'" -> target is "node server.js", process is "node"
-    if " " in target:
-        target = target.split()[0]
-
-    if target in allowed_process_names:
+    # Validate every non-flag argument (pkill accepts multiple patterns on BSD)
+    # This defensively ensures no disallowed process can be targeted
+    targets = []
+    for arg in args:
+        # For -f flag (full command line match), take the first word as process name
+        # e.g., "pkill -f 'node server.js'" -> target is "node server.js", process is "node"
+        t = arg.split()[0] if " " in arg else arg
+        targets.append(t)
+
+    disallowed = [t for t in targets if t not in allowed_process_names]
+    if not disallowed:
         return True, ""

-    return False, f"pkill only allowed for dev processes: {allowed_process_names}"
+    return False, f"pkill only allowed for processes: {sorted(allowed_process_names)}"


 def validate_chmod_command(command_string: str) -> tuple[bool, str]:
def validate_chmod_command(command_string: str) -> tuple[bool, str]: def validate_chmod_command(command_string: str) -> tuple[bool, str]:
@@ -455,6 +476,23 @@ def load_org_config() -> Optional[dict]:
             if not isinstance(cmd, str):
                 return None

+        # Validate pkill_processes if present
+        if "pkill_processes" in config:
+            processes = config["pkill_processes"]
+            if not isinstance(processes, list):
+                return None
+            # Normalize and validate each process name against safe pattern
+            normalized = []
+            for proc in processes:
+                if not isinstance(proc, str):
+                    return None
+                proc = proc.strip()
+                # Block empty strings and regex metacharacters
+                if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
+                    return None
+                normalized.append(proc)
+            config["pkill_processes"] = normalized
+
         return config
     except (yaml.YAMLError, IOError, OSError):
@@ -508,6 +546,23 @@ def load_project_commands(project_dir: Path) -> Optional[dict]:
if not isinstance(cmd["name"], str): if not isinstance(cmd["name"], str):
return None return None
# Validate pkill_processes if present
if "pkill_processes" in config:
processes = config["pkill_processes"]
if not isinstance(processes, list):
return None
# Normalize and validate each process name against safe pattern
normalized = []
for proc in processes:
if not isinstance(proc, str):
return None
proc = proc.strip()
# Block empty strings and regex metacharacters
if not proc or not VALID_PROCESS_NAME_PATTERN.fullmatch(proc):
return None
normalized.append(proc)
config["pkill_processes"] = normalized
return config return config
except (yaml.YAMLError, IOError, OSError): except (yaml.YAMLError, IOError, OSError):
@@ -628,6 +683,42 @@ def get_project_allowed_commands(project_dir: Optional[Path]) -> set[str]:
     return allowed

+
+def get_effective_pkill_processes(project_dir: Optional[Path]) -> set[str]:
+    """
+    Get effective pkill process names after hierarchy resolution.
+
+    Merges processes from:
+    1. DEFAULT_PKILL_PROCESSES (hardcoded baseline)
+    2. Org config pkill_processes
+    3. Project config pkill_processes
+
+    Args:
+        project_dir: Path to the project directory, or None
+
+    Returns:
+        Set of allowed process names for pkill
+    """
+    # Start with default processes
+    processes = DEFAULT_PKILL_PROCESSES.copy()
+
+    # Add org-level pkill_processes
+    org_config = load_org_config()
+    if org_config:
+        org_processes = org_config.get("pkill_processes", [])
+        if isinstance(org_processes, list):
+            processes |= {p for p in org_processes if isinstance(p, str) and p.strip()}
+
+    # Add project-level pkill_processes
+    if project_dir:
+        project_config = load_project_commands(project_dir)
+        if project_config:
+            proj_processes = project_config.get("pkill_processes", [])
+            if isinstance(proj_processes, list):
+                processes |= {p for p in proj_processes if isinstance(p, str) and p.strip()}
+
+    return processes
+
+
 def is_command_allowed(command: str, allowed_commands: set[str]) -> bool:
     """
     Check if a command is allowed (supports patterns).
@@ -692,6 +783,9 @@ async def bash_security_hook(input_data, tool_use_id=None, context=None):
     # Get effective commands using hierarchy resolution
     allowed_commands, blocked_commands = get_effective_commands(project_dir)

+    # Get effective pkill processes (includes org/project config)
+    pkill_processes = get_effective_pkill_processes(project_dir)
+
     # Split into segments for per-command validation
     segments = split_command_segments(command)
@@ -725,7 +819,9 @@ async def bash_security_hook(input_data, tool_use_id=None, context=None):
             cmd_segment = command  # Fallback to full command

         if cmd == "pkill":
-            allowed, reason = validate_pkill_command(cmd_segment)
+            # Pass configured extra processes (beyond defaults)
+            extra_procs = pkill_processes - DEFAULT_PKILL_PROCESSES
+            allowed, reason = validate_pkill_command(cmd_segment, extra_procs if extra_procs else None)
             if not allowed:
                 return {"decision": "block", "reason": reason}
         elif cmd == "chmod":

View File

@@ -88,35 +88,49 @@ app = FastAPI(
     lifespan=lifespan,
 )

-# CORS - allow only localhost origins for security
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=[
-        "http://localhost:5173",  # Vite dev server
-        "http://127.0.0.1:5173",
-        "http://localhost:8888",  # Production
-        "http://127.0.0.1:8888",
-    ],
-    allow_credentials=True,
-    allow_methods=["*"],
-    allow_headers=["*"],
-)
+# Check if remote access is enabled via environment variable
+# Set by start_ui.py when --host is not 127.0.0.1
+ALLOW_REMOTE = os.environ.get("AUTOCODER_ALLOW_REMOTE", "").lower() in ("1", "true", "yes")
+
+# CORS - allow all origins when remote access is enabled, otherwise localhost only
+if ALLOW_REMOTE:
+    app.add_middleware(
+        CORSMiddleware,
+        allow_origins=["*"],  # Allow all origins for remote access
+        allow_credentials=True,
+        allow_methods=["*"],
+        allow_headers=["*"],
+    )
+else:
+    app.add_middleware(
+        CORSMiddleware,
+        allow_origins=[
+            "http://localhost:5173",  # Vite dev server
+            "http://127.0.0.1:5173",
+            "http://localhost:8888",  # Production
+            "http://127.0.0.1:8888",
+        ],
+        allow_credentials=True,
+        allow_methods=["*"],
+        allow_headers=["*"],
+    )

 # ============================================================================
 # Security Middleware
 # ============================================================================

-@app.middleware("http")
-async def require_localhost(request: Request, call_next):
-    """Only allow requests from localhost."""
-    client_host = request.client.host if request.client else None
-
-    # Allow localhost connections
-    if client_host not in ("127.0.0.1", "::1", "localhost", None):
-        raise HTTPException(status_code=403, detail="Localhost access only")
-
-    return await call_next(request)
+if not ALLOW_REMOTE:
+    @app.middleware("http")
+    async def require_localhost(request: Request, call_next):
+        """Only allow requests from localhost (disabled when AUTOCODER_ALLOW_REMOTE=1)."""
+        client_host = request.client.host if request.client else None
+
+        # Allow localhost connections
+        if client_host not in ("127.0.0.1", "::1", "localhost", None):
+            raise HTTPException(status_code=403, detail="Localhost access only")
+
+        return await call_next(request)

 # ============================================================================
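The toggle above boils down to one env check feeding two policy decisions. A minimal sketch (the helper functions are hypothetical, factored out here only to make the logic testable):

```python
def allow_remote(env: dict[str, str]) -> bool:
    """Mirror of the AUTOCODER_ALLOW_REMOTE check as a pure function."""
    return env.get("AUTOCODER_ALLOW_REMOTE", "").lower() in ("1", "true", "yes")

def cors_origins(remote: bool) -> list[str]:
    # "*" only when the operator explicitly opted into remote access;
    # otherwise the fixed localhost dev/production origins
    if remote:
        return ["*"]
    return [
        "http://localhost:5173", "http://127.0.0.1:5173",
        "http://localhost:8888", "http://127.0.0.1:8888",
    ]
```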

View File

@@ -40,7 +40,15 @@ def _parse_yolo_mode(value: str | None) -> bool:
 def _is_glm_mode() -> bool:
     """Check if GLM API is configured via environment variables."""
-    return bool(os.getenv("ANTHROPIC_BASE_URL"))
+    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
+    # GLM mode is when ANTHROPIC_BASE_URL is set but NOT pointing to Ollama
+    return bool(base_url) and not _is_ollama_mode()
+
+
+def _is_ollama_mode() -> bool:
+    """Check if Ollama API is configured via environment variables."""
+    base_url = os.getenv("ANTHROPIC_BASE_URL", "")
+    return "localhost:11434" in base_url or "127.0.0.1:11434" in base_url


 @router.get("/models", response_model=ModelsResponse)

@@ -82,6 +90,7 @@ async def get_settings():
         yolo_mode=_parse_yolo_mode(all_settings.get("yolo_mode")),
         model=all_settings.get("model", DEFAULT_MODEL),
         glm_mode=_is_glm_mode(),
+        ollama_mode=_is_ollama_mode(),
         testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
     )

@@ -104,5 +113,6 @@ async def update_settings(update: SettingsUpdate):
         yolo_mode=_parse_yolo_mode(all_settings.get("yolo_mode")),
         model=all_settings.get("model", DEFAULT_MODEL),
         glm_mode=_is_glm_mode(),
+        ollama_mode=_is_ollama_mode(),
         testing_agent_ratio=_parse_int(all_settings.get("testing_agent_ratio"), 1),
     )

View File

@@ -382,6 +382,7 @@ class SettingsResponse(BaseModel):
     yolo_mode: bool = False
     model: str = DEFAULT_MODEL
     glm_mode: bool = False  # True if GLM API is configured via .env
+    ollama_mode: bool = False  # True if Ollama API is configured via .env
     testing_agent_ratio: int = 1  # Regression testing agents (0-3)

View File

@@ -428,7 +428,9 @@ class DevServerProcessManager:
 # Global registry of dev server managers per project with thread safety
-_managers: dict[str, DevServerProcessManager] = {}
+# Key is (project_name, resolved_project_dir) to prevent cross-project contamination
+# when different projects share the same name but have different paths
+_managers: dict[tuple[str, str], DevServerProcessManager] = {}
 _managers_lock = threading.Lock()

@@ -444,18 +446,11 @@ def get_devserver_manager(project_name: str, project_dir: Path) -> DevServerProcessManager:
         DevServerProcessManager instance for the project
     """
     with _managers_lock:
-        if project_name in _managers:
-            manager = _managers[project_name]
-            # Update project_dir in case project was moved
-            if manager.project_dir.resolve() != project_dir.resolve():
-                logger.info(
-                    f"Project {project_name} path updated: {manager.project_dir} -> {project_dir}"
-                )
-                manager.project_dir = project_dir
-                manager.lock_file = project_dir / ".devserver.lock"
-            return manager
-        _managers[project_name] = DevServerProcessManager(project_name, project_dir)
-        return _managers[project_name]
+        # Use composite key to prevent cross-project UI contamination (#71)
+        key = (project_name, str(project_dir.resolve()))
+        if key not in _managers:
+            _managers[key] = DevServerProcessManager(project_name, project_dir)
+        return _managers[key]
async def cleanup_all_devservers() -> None: async def cleanup_all_devservers() -> None:

View File

@@ -510,7 +510,9 @@ class AgentProcessManager:
 # Global registry of process managers per project with thread safety
-_managers: dict[str, AgentProcessManager] = {}
+# Key is (project_name, resolved_project_dir) to prevent cross-project contamination
+# when different projects share the same name but have different paths
+_managers: dict[tuple[str, str], AgentProcessManager] = {}
 _managers_lock = threading.Lock()

@@ -523,9 +525,11 @@ def get_manager(project_name: str, project_dir: Path, root_dir: Path) -> AgentProcessManager:
         root_dir: Root directory of the autonomous-coding-ui project
     """
     with _managers_lock:
-        if project_name not in _managers:
-            _managers[project_name] = AgentProcessManager(project_name, project_dir, root_dir)
-        return _managers[project_name]
+        # Use composite key to prevent cross-project UI contamination (#71)
+        key = (project_name, str(project_dir.resolve()))
+        if key not in _managers:
+            _managers[key] = AgentProcessManager(project_name, project_dir, root_dir)
+        return _managers[key]
async def cleanup_all_managers() -> None: async def cleanup_all_managers() -> None:

View File

@@ -13,12 +13,16 @@ Automated launcher that handles all setup:
 7. Opens browser to the UI

 Usage:
-    python start_ui.py [--dev]
+    python start_ui.py [--dev] [--host HOST] [--port PORT]

 Options:
     --dev    Run in development mode with Vite hot reload
+    --host HOST    Host to bind to (default: 127.0.0.1)
+                   Use 0.0.0.0 for remote access (security warning will be shown)
+    --port PORT    Port to bind to (default: 8888)
 """

+import argparse
 import asyncio
 import os
 import shutil
@@ -235,26 +239,31 @@ def build_frontend() -> bool:
     return run_command([npm_cmd, "run", "build"], cwd=UI_DIR)


-def start_dev_server(port: int) -> tuple:
+def start_dev_server(port: int, host: str = "127.0.0.1") -> tuple:
     """Start both Vite and FastAPI in development mode."""
     venv_python = get_venv_python()

     print("\n Starting development servers...")
-    print(f"   - FastAPI backend: http://127.0.0.1:{port}")
+    print(f"   - FastAPI backend: http://{host}:{port}")
     print("   - Vite frontend: http://127.0.0.1:5173")

+    # Set environment for remote access if needed
+    env = os.environ.copy()
+    if host != "127.0.0.1":
+        env["AUTOCODER_ALLOW_REMOTE"] = "1"
+
     # Start FastAPI
     backend = subprocess.Popen([
         str(venv_python), "-m", "uvicorn",
         "server.main:app",
-        "--host", "127.0.0.1",
+        "--host", host,
         "--port", str(port),
         "--reload"
-    ], cwd=str(ROOT))
+    ], cwd=str(ROOT), env=env)

     # Start Vite with API port env var for proxy configuration
     npm_cmd = "npm.cmd" if sys.platform == "win32" else "npm"
-    vite_env = os.environ.copy()
+    vite_env = env.copy()
     vite_env["VITE_API_PORT"] = str(port)
     frontend = subprocess.Popen([
         npm_cmd, "run", "dev"
@@ -263,15 +272,18 @@ def start_dev_server(port: int) -> tuple:
     return backend, frontend


-def start_production_server(port: int):
-    """Start FastAPI server in production mode with hot reload."""
+def start_production_server(port: int, host: str = "127.0.0.1"):
+    """Start FastAPI server in production mode."""
     venv_python = get_venv_python()

-    print(f"\n Starting server at http://127.0.0.1:{port} (with hot reload)")
+    print(f"\n Starting server at http://{host}:{port}")

-    # Set PYTHONASYNCIODEBUG to help with Windows subprocess issues
     env = os.environ.copy()
+    # Enable remote access in server if not localhost
+    if host != "127.0.0.1":
+        env["AUTOCODER_ALLOW_REMOTE"] = "1"

     # NOTE: --reload is NOT used because on Windows it breaks asyncio subprocess
     # support (uvicorn's reload worker doesn't inherit the ProactorEventLoop policy).
     # This affects Claude SDK which uses asyncio.create_subprocess_exec.
@@ -279,14 +291,34 @@ def start_production_server(port: int):
     return subprocess.Popen([
         str(venv_python), "-m", "uvicorn",
         "server.main:app",
-        "--host", "127.0.0.1",
+        "--host", host,
         "--port", str(port),
     ], cwd=str(ROOT), env=env)


 def main() -> None:
     """Main entry point."""
-    dev_mode = "--dev" in sys.argv
+    parser = argparse.ArgumentParser(description="AutoCoder UI Launcher")
+    parser.add_argument("--dev", action="store_true", help="Run in development mode with Vite hot reload")
+    parser.add_argument("--host", default="127.0.0.1", help="Host to bind to (default: 127.0.0.1)")
+    parser.add_argument("--port", type=int, default=None, help="Port to bind to (default: auto-detect from 8888)")
+    args = parser.parse_args()
+
+    dev_mode = args.dev
+    host = args.host
+
+    # Security warning for remote access
+    if host != "127.0.0.1":
+        print("\n" + "!" * 50)
+        print("  SECURITY WARNING")
+        print("!" * 50)
+        print(f"  Remote access enabled on host: {host}")
+        print("  The AutoCoder UI will be accessible from other machines.")
+        print("  Ensure you understand the security implications:")
+        print("  - The agent has file system access to project directories")
+        print("  - The API can start/stop agents and modify files")
+        print("  - Consider using a firewall or VPN for protection")
+        print("!" * 50 + "\n")

     print("=" * 50)
     print("  AutoCoder UI Setup")
@@ -335,18 +367,20 @@ def main() -> None:
     step = 5 if dev_mode else 6
     print_step(step, total_steps, "Starting server")

-    port = find_available_port()
+    port = args.port if args.port else find_available_port()

     try:
         if dev_mode:
-            backend, frontend = start_dev_server(port)
+            backend, frontend = start_dev_server(port, host)

-            # Open browser to Vite dev server
+            # Open browser to Vite dev server (always localhost for Vite)
             time.sleep(3)
             webbrowser.open("http://127.0.0.1:5173")

             print("\n" + "=" * 50)
             print("  Development mode active")
+            if host != "127.0.0.1":
+                print(f"  Backend accessible at: http://{host}:{port}")
             print("  Press Ctrl+C to stop")
             print("=" * 50)
@@ -362,14 +396,15 @@ def main() -> None:
             backend.wait()
             frontend.wait()
         else:
-            server = start_production_server(port)
-            # Open browser
+            server = start_production_server(port, host)
+            # Open browser (only if localhost)
             time.sleep(2)
-            webbrowser.open(f"http://127.0.0.1:{port}")
+            if host == "127.0.0.1":
+                webbrowser.open(f"http://127.0.0.1:{port}")

             print("\n" + "=" * 50)
-            print(f" Server running at http://127.0.0.1:{port}")
+            print(f" Server running at http://{host}:{port}")
             print(" Press Ctrl+C to stop")
             print("=" * 50)
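For the `--host` flag to matter, `start_production_server(port, host)` has to forward the host to whatever actually binds the socket. The diff does not show that function, so the following is a minimal sketch under stated assumptions: the backend is served by uvicorn, and the module path `server.main:app` is a placeholder, not the project's actual layout.

```python
# Hypothetical sketch of forwarding --host to the server process.
# Assumptions: uvicorn serves the backend; "server.main:app" is a placeholder.
import subprocess
import sys


def build_server_command(port: int, host: str = "127.0.0.1") -> list[str]:
    """Build the uvicorn invocation, binding to the requested host."""
    return [
        sys.executable, "-m", "uvicorn", "server.main:app",
        "--host", host,  # e.g. 0.0.0.0 exposes the UI to the network
        "--port", str(port),
    ]


def start_production_server(port: int, host: str = "127.0.0.1") -> subprocess.Popen:
    """Spawn the backend as a child process."""
    return subprocess.Popen(build_server_command(port, host))
```

The launcher's security warning exists precisely because `--host 0.0.0.0` in a command like this makes the API reachable from other machines.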


@@ -15,14 +15,17 @@ from contextlib import contextmanager
 from pathlib import Path

 from security import (
+    DEFAULT_PKILL_PROCESSES,
     bash_security_hook,
     extract_commands,
     get_effective_commands,
+    get_effective_pkill_processes,
     load_org_config,
     load_project_commands,
     matches_pattern,
     validate_chmod_command,
     validate_init_script,
+    validate_pkill_command,
     validate_project_command,
 )
@@ -670,6 +673,240 @@ blocked_commands:
     return passed, failed


+def test_pkill_extensibility():
+    """Test that pkill processes can be extended via config."""
+    print("\nTesting pkill process extensibility:\n")
+
+    passed = 0
+    failed = 0
+
+    # Test 1: Default processes work without config
+    allowed, reason = validate_pkill_command("pkill node")
+    if allowed:
+        print(" PASS: Default process 'node' allowed")
+        passed += 1
+    else:
+        print(f" FAIL: Default process 'node' should be allowed: {reason}")
+        failed += 1
+
+    # Test 2: Non-default process blocked without config
+    allowed, reason = validate_pkill_command("pkill python")
+    if not allowed:
+        print(" PASS: Non-default process 'python' blocked without config")
+        passed += 1
+    else:
+        print(" FAIL: Non-default process 'python' should be blocked without config")
+        failed += 1
+
+    # Test 3: Extra processes allowed when passed
+    allowed, reason = validate_pkill_command("pkill python", extra_processes={"python"})
+    if allowed:
+        print(" PASS: Extra process 'python' allowed when configured")
+        passed += 1
+    else:
+        print(f" FAIL: Extra process 'python' should be allowed when configured: {reason}")
+        failed += 1
+
+    # Test 4: Default processes still work with extra processes
+    allowed, reason = validate_pkill_command("pkill npm", extra_processes={"python"})
+    if allowed:
+        print(" PASS: Default process 'npm' still works with extra processes")
+        passed += 1
+    else:
+        print(f" FAIL: Default process should still work: {reason}")
+        failed += 1
+
+    # Test 5: Test get_effective_pkill_processes with org config
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                org_dir = Path(tmphome) / ".autocoder"
+                org_dir.mkdir()
+                org_config_path = org_dir / "config.yaml"
+
+                # Create org config with extra pkill processes
+                org_config_path.write_text("""version: 1
+pkill_processes:
+- python
+- uvicorn
+""")
+
+                project_dir = Path(tmpproject)
+                processes = get_effective_pkill_processes(project_dir)
+
+                # Should include defaults + org processes
+                if "node" in processes and "python" in processes and "uvicorn" in processes:
+                    print(" PASS: Org pkill_processes merged with defaults")
+                    passed += 1
+                else:
+                    print(f" FAIL: Expected node, python, uvicorn in {processes}")
+                    failed += 1
+
+    # Test 6: Test get_effective_pkill_processes with project config
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                project_dir = Path(tmpproject)
+                project_autocoder = project_dir / ".autocoder"
+                project_autocoder.mkdir()
+                project_config = project_autocoder / "allowed_commands.yaml"
+
+                # Create project config with extra pkill processes
+                project_config.write_text("""version: 1
+commands: []
+pkill_processes:
+- gunicorn
+- flask
+""")
+
+                processes = get_effective_pkill_processes(project_dir)
+
+                # Should include defaults + project processes
+                if "node" in processes and "gunicorn" in processes and "flask" in processes:
+                    print(" PASS: Project pkill_processes merged with defaults")
+                    passed += 1
+                else:
+                    print(f" FAIL: Expected node, gunicorn, flask in {processes}")
+                    failed += 1
+
+    # Test 7: Integration test - pkill python blocked by default
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                project_dir = Path(tmpproject)
+                input_data = {"tool_name": "Bash", "tool_input": {"command": "pkill python"}}
+                context = {"project_dir": str(project_dir)}
+                result = asyncio.run(bash_security_hook(input_data, context=context))
+
+                if result.get("decision") == "block":
+                    print(" PASS: pkill python blocked without config")
+                    passed += 1
+                else:
+                    print(" FAIL: pkill python should be blocked without config")
+                    failed += 1
+
+    # Test 8: Integration test - pkill python allowed with org config
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                org_dir = Path(tmphome) / ".autocoder"
+                org_dir.mkdir()
+                org_config_path = org_dir / "config.yaml"
+                org_config_path.write_text("""version: 1
+pkill_processes:
+- python
+""")
+
+                project_dir = Path(tmpproject)
+                input_data = {"tool_name": "Bash", "tool_input": {"command": "pkill python"}}
+                context = {"project_dir": str(project_dir)}
+                result = asyncio.run(bash_security_hook(input_data, context=context))
+
+                if result.get("decision") != "block":
+                    print(" PASS: pkill python allowed with org config")
+                    passed += 1
+                else:
+                    print(f" FAIL: pkill python should be allowed with org config: {result}")
+                    failed += 1
+
+    # Test 9: Regex metacharacters should be rejected in pkill_processes
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                org_dir = Path(tmphome) / ".autocoder"
+                org_dir.mkdir()
+                org_config_path = org_dir / "config.yaml"
+
+                # Try to register a regex pattern (should be rejected)
+                org_config_path.write_text("""version: 1
+pkill_processes:
+- ".*"
+""")
+
+                config = load_org_config()
+                if config is None:
+                    print(" PASS: Regex pattern '.*' rejected in pkill_processes")
+                    passed += 1
+                else:
+                    print(" FAIL: Regex pattern '.*' should be rejected")
+                    failed += 1
+
+    # Test 10: Valid process names with dots/underscores/hyphens should be accepted
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                org_dir = Path(tmphome) / ".autocoder"
+                org_dir.mkdir()
+                org_config_path = org_dir / "config.yaml"
+
+                # Valid names with special chars
+                org_config_path.write_text("""version: 1
+pkill_processes:
+- my-app
+- app_server
+- node.js
+""")
+
+                config = load_org_config()
+                if config is not None and config.get("pkill_processes") == ["my-app", "app_server", "node.js"]:
+                    print(" PASS: Valid process names with dots/underscores/hyphens accepted")
+                    passed += 1
+                else:
+                    print(f" FAIL: Valid process names should be accepted: {config}")
+                    failed += 1
+
+    # Test 11: Names with spaces should be rejected
+    with tempfile.TemporaryDirectory() as tmphome:
+        with tempfile.TemporaryDirectory() as tmpproject:
+            with temporary_home(tmphome):
+                org_dir = Path(tmphome) / ".autocoder"
+                org_dir.mkdir()
+                org_config_path = org_dir / "config.yaml"
+                org_config_path.write_text("""version: 1
+pkill_processes:
+- "my app"
+""")
+
+                config = load_org_config()
+                if config is None:
+                    print(" PASS: Process name with space rejected")
+                    passed += 1
+                else:
+                    print(" FAIL: Process name with space should be rejected")
+                    failed += 1
+
+    # Test 12: Multiple patterns - all must be allowed (BSD behavior)
+    # On BSD, "pkill node sshd" would kill both, so we must validate all patterns
+    allowed, reason = validate_pkill_command("pkill node npm")
+    if allowed:
+        print(" PASS: Multiple allowed patterns accepted")
+        passed += 1
+    else:
+        print(f" FAIL: Multiple allowed patterns should be accepted: {reason}")
+        failed += 1
+
+    # Test 13: Multiple patterns - block if any is disallowed
+    allowed, reason = validate_pkill_command("pkill node sshd")
+    if not allowed:
+        print(" PASS: Multiple patterns blocked when one is disallowed")
+        passed += 1
+    else:
+        print(" FAIL: Should block when any pattern is disallowed")
+        failed += 1
+
+    # Test 14: Multiple patterns - only first allowed, second disallowed
+    allowed, reason = validate_pkill_command("pkill npm python")
+    if not allowed:
+        print(" PASS: Multiple patterns blocked (first allowed, second not)")
+        passed += 1
+    else:
+        print(" FAIL: Should block when second pattern is disallowed")
+        failed += 1
+
+    return passed, failed
 def main():
     print("=" * 70)
     print(" SECURITY HOOK TESTS")
@@ -733,6 +970,11 @@ def main():
     passed += org_block_passed
     failed += org_block_failed

+    # Test pkill process extensibility
+    pkill_passed, pkill_failed = test_pkill_extensibility()
+    passed += pkill_passed
+    failed += pkill_failed
+
     # Commands that SHOULD be blocked
     print("\nCommands that should be BLOCKED:\n")
     dangerous = [

ui/public/ollama.png: new binary file (3.6 KiB, not shown)


@@ -298,6 +298,17 @@ function App() {
             <Settings size={18} />
           </button>

+          {/* Ollama Mode Indicator */}
+          {settings?.ollama_mode && (
+            <div
+              className="flex items-center gap-1.5 px-2 py-1 bg-white rounded border-2 border-neo-border shadow-neo-sm"
+              title="Using Ollama local models (configured via .env)"
+            >
+              <img src="/ollama.png" alt="Ollama" className="w-5 h-5" />
+              <span className="text-xs font-bold text-neo-text">Ollama</span>
+            </div>
+          )}
+
           {/* GLM Mode Badge */}
           {settings?.glm_mode && (
             <span


@@ -237,6 +237,7 @@ const DEFAULT_SETTINGS: Settings = {
   yolo_mode: false,
   model: 'claude-opus-4-5-20251101',
   glm_mode: false,
+  ollama_mode: false,
   testing_agent_ratio: 1,
 }


@@ -526,6 +526,7 @@ export interface Settings {
   yolo_mode: boolean
   model: string
   glm_mode: boolean
+  ollama_mode: boolean
   testing_agent_ratio: number // Regression testing agents (0-3)
 }