feat: decouple regression testing agents from coding agents

Major refactoring of the parallel orchestrator to run regression testing
agents independently from coding agents. This improves system reliability
and provides better control over testing behavior.

Key changes:

Database & MCP Layer:
- Add testing_in_progress and last_tested_at columns to Feature model
- Add feature_claim_for_testing() for atomic test claim with retry
- Add feature_release_testing() to release claims after testing
- Refactor claim functions to iterative loops (no recursion)
- Add OperationalError retry handling for transient DB errors
- Reduce MAX_CLAIM_RETRIES from 10 to 5
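The claim/release pair above can be sketched roughly as follows. This is a minimal illustration using stdlib sqlite3 rather than the project's actual DB/MCP layer: the table and column names (`features`, `testing_in_progress`, `last_tested_at`) come from the bullets, but the SQL, signatures, and backoff are assumptions:

```python
import random
import sqlite3
import time

MAX_CLAIM_RETRIES = 5  # reduced from 10 per this commit


def feature_claim_for_testing(conn: sqlite3.Connection, feature_id: int) -> bool:
    """Atomically claim a feature for testing; retry transient DB errors.

    Iterative loop (no recursion), so retries cannot grow the call stack.
    Returns True only if this caller won the claim.
    """
    for attempt in range(MAX_CLAIM_RETRIES):
        try:
            cur = conn.execute(
                "UPDATE features SET testing_in_progress = 1 "
                "WHERE id = ? AND testing_in_progress = 0",
                (feature_id,),
            )
            conn.commit()
            # 0 rows updated means another agent already holds the claim.
            return cur.rowcount == 1
        except sqlite3.OperationalError:
            # Transient error (e.g. "database is locked"): back off and retry.
            time.sleep(0.05 * (2 ** attempt) * random.random())
    return False


def feature_release_testing(conn: sqlite3.Connection, feature_id: int) -> None:
    """Release the claim and record when the feature was last tested."""
    conn.execute(
        "UPDATE features SET testing_in_progress = 0, "
        "last_tested_at = CURRENT_TIMESTAMP WHERE id = ?",
        (feature_id,),
    )
    conn.commit()
```

The conditional `WHERE ... testing_in_progress = 0` is what makes the claim atomic: two concurrent claimers issue the same UPDATE, but only one sees a nonzero rowcount.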

Orchestrator:
- Decouple testing agent lifecycle from coding agents
- Add _maintain_testing_agents() for continuous testing maintenance
- Fix TOCTOU (time-of-check/time-of-use) race in _spawn_testing_agent() by holding the lock across check and spawn
- Add _cleanup_stale_testing_locks() with 30-min timeout
- Fix log ordering - start_session() before stale flag cleanup
- Add stale testing_in_progress cleanup on startup

Dead Code Removal:
- Remove count_testing_in_concurrency from entire stack (12+ files)
- Remove ineffective with_for_update() from features router

API & UI:
- Pass testing_agent_ratio via CLI to orchestrator
- Update testing prompt template to use new claim/release tools
- Rename UI label to "Regression Agents" with clearer description
- Add process_utils.py for cross-platform process tree management
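process_utils.py itself is not shown in this excerpt; a common shape for cross-platform process tree termination is sketched below (the function name and exact approach are assumptions, not the module's actual contents):

```python
import os
import signal
import subprocess
import sys


def kill_process_tree(pid: int) -> None:
    """Terminate a process and all of its descendants, cross-platform."""
    if sys.platform == "win32":
        # taskkill /T kills the whole tree, /F forces termination.
        subprocess.run(["taskkill", "/PID", str(pid), "/T", "/F"],
                       capture_output=True, check=False)
    else:
        # Assumes the child was started in its own process group
        # (start_new_session=True), so killpg reaches every descendant.
        try:
            os.killpg(os.getpgid(pid), signal.SIGTERM)
        except ProcessLookupError:
            pass  # already gone
```

The POSIX branch only works if agents are spawned with `start_new_session=True`; otherwise `killpg` would signal the orchestrator's own group.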

Testing agents now:
- Run continuously as long as passing features exist
- Can re-test features multiple times to catch regressions
- Are controlled by a fixed count (0-3) via the testing_agent_ratio setting
- Have atomic claiming to prevent concurrent testing of same feature

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Contained in: Auto
Date: 2026-01-22 15:22:48 +02:00
Parent: 29c6b252a9
Commit: 357083dbae
20 changed files with 841 additions and 382 deletions


@@ -171,8 +171,7 @@ class AgentStartRequest(BaseModel):
     model: str | None = None  # None means use global settings
     parallel_mode: bool | None = None  # DEPRECATED: Use max_concurrency instead
     max_concurrency: int | None = None  # Max concurrent coding agents (1-5)
-    testing_agent_ratio: int | None = None  # Testing agents per coding agent (0-3)
-    count_testing_in_concurrency: bool | None = None  # Count testing toward limit
+    testing_agent_ratio: int | None = None  # Regression testing agents (0-3)
 
     @field_validator('model')
     @classmethod
@@ -208,8 +207,7 @@ class AgentStatus(BaseModel):
     model: str | None = None  # Model being used by running agent
     parallel_mode: bool = False  # DEPRECATED: Always True now (unified orchestrator)
     max_concurrency: int | None = None
-    testing_agent_ratio: int = 1  # Testing agents per coding agent
-    count_testing_in_concurrency: bool = False  # Count testing toward limit
+    testing_agent_ratio: int = 1  # Regression testing agents (0-3)
 
 
 class AgentActionResponse(BaseModel):
@@ -384,8 +382,7 @@ class SettingsResponse(BaseModel):
     yolo_mode: bool = False
     model: str = DEFAULT_MODEL
     glm_mode: bool = False  # True if GLM API is configured via .env
-    testing_agent_ratio: int = 1  # Testing agents per coding agent (0-3)
-    count_testing_in_concurrency: bool = False  # Count testing toward concurrency
+    testing_agent_ratio: int = 1  # Regression testing agents (0-3)
 
 
 class ModelsResponse(BaseModel):
@@ -399,7 +396,6 @@ class SettingsUpdate(BaseModel):
     yolo_mode: bool | None = None
     model: str | None = None
     testing_agent_ratio: int | None = None  # 0-3
-    count_testing_in_concurrency: bool | None = None
 
     @field_validator('model')
     @classmethod