refactor: remove testing agent claim mechanism for concurrent testing

Remove the testing_in_progress claim/release mechanism from the testing
agent architecture. Multiple testing agents can now test the same feature
concurrently, simplifying the system and eliminating potential stale lock
issues.

Changes:
- parallel_orchestrator.py:
  - Remove claim_feature_for_testing() and release_testing_claim() methods
  - Remove _cleanup_stale_testing_locks() periodic cleanup
  - Replace with simple _get_random_passing_feature() selection
  - Remove startup stale lock cleanup code
  - Remove STALE_TESTING_LOCK_MINUTES constant
  - Remove unused imports (timedelta, text)
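
For illustration, a minimal sketch of what the replacement selection could look like, assuming a SQLAlchemy `Feature` model with a boolean `passing` column and a `session_factory` helper on the orchestrator (both names are assumptions, not taken from the actual code):

```python
import random

from api.database import Feature  # assumed import path for the Feature model in api/database.py


def _get_random_passing_feature(self) -> int | None:
    """Return the id of a random passing feature, or None if none exist.

    No claim is recorded anywhere, so any number of testing agents may
    be handed the same feature concurrently.
    """
    with self.session_factory() as session:  # assumed session helper
        ids = [row.id for row in session.query(Feature.id).filter(Feature.passing.is_(True))]
    return random.choice(ids) if ids else None
```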

- api/database.py:
  - Remove testing_in_progress and last_tested_at columns from Feature model
  - Update to_dict() to exclude these fields
  - Convert _migrate_add_testing_columns() to no-op for backwards compat
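
A sketch of the backwards-compat shim (the real signature may differ):

```python
def _migrate_add_testing_columns(engine) -> None:
    """No-op kept so older startup code can still call it.

    The testing_in_progress and last_tested_at columns are no longer
    created, read, or written.
    """
    return None
```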

- mcp_server/feature_mcp.py:
  - Remove feature_release_testing tool entirely
  - Remove unused datetime import

- prompts.py:
  - Update testing prompt to remove feature_release_testing instruction
  - Testing agents now just verify and exit (no cleanup needed)

- server/websocket.py:
  - Update AgentTracker to use composite keys (feature_id, agent_type)
  - Prevents ghost agent creation from ambiguous [Feature #X] messages
  - Proper separation of coding vs testing agent tracking
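
A sketch of the composite-key idea; the `AgentTracker` internals shown here are illustrative, not the actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class AgentTracker:
    """Track live agents keyed by (feature_id, agent_type).

    Keying on the pair means a "[Feature #X]" log line from a testing agent
    can no longer create a ghost entry for the coding agent working on the
    same feature, and vice versa.
    """
    agents: dict[tuple[int, str], dict] = field(default_factory=dict)

    def record(self, feature_id: int, agent_type: str, status: str) -> None:
        self.agents[(feature_id, agent_type)] = {"status": status}

    def remove(self, feature_id: int, agent_type: str) -> None:
        self.agents.pop((feature_id, agent_type), None)
```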

Benefits:
- Eliminates artificial bottleneck from claim coordination
- No stale locks to clean up after crashes
- Simpler crash recovery (no testing state to restore)
- Reduced database writes (no claim/release transactions)
- Matches intended design: random, concurrent regression testing

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 486979c3d9 (parent 874359fcf6), 2026-01-23 15:30:31 +02:00
5 changed files with 83 additions and 247 deletions


@@ -93,12 +93,11 @@ def get_testing_prompt(project_dir: Path | None = None, testing_feature_id: int
 **You are assigned to regression test Feature #{testing_feature_id}.**
-The orchestrator has already claimed this feature for you.
 ### Your workflow:
 1. Call `feature_get_by_id` with ID {testing_feature_id} to get the feature details
 2. Verify the feature through the UI using browser automation
-3. When done, call `feature_release_testing` with feature_id={testing_feature_id}
+3. If regression found, call `feature_mark_failing` with feature_id={testing_feature_id}
+4. Exit when done (no cleanup needed)
 ---