6 Commits

Author SHA1 Message Date
Auto
d846a021b8 fix: address PR #184 review findings for blocked-for-human-input feature
A) Graph view: add needs_human_input bucket to handleGraphNodeClick so
   clicking blocked nodes opens the feature modal
B) MCP validation: validate field type enum, require options for select,
   enforce unique non-empty field IDs and labels
C) Progress fallback: include needs_human_input in non-WebSocket total
D) WebSocket: track needs_human_input count in progress state
E) Cleanup guard: remove unnecessary needs_human_input check in
   _cleanup_stale_features (resolved via merge conflict)
F) Defensive SQL: require in_progress=1 in feature_request_human_input

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 07:36:48 +02:00
Auto
819ebcd112 Merge remote-tracking branch 'origin/master' into feature/blocked-for-human-input
# Conflicts:
#	server/services/process_manager.py
2026-02-12 07:36:11 +02:00
Auto
f4636fdfd5 fix: handle pausing/draining states in UI guards and process cleanup
Follow-up fixes after merging PR #183 (graceful pause/drain mode):

- process_manager: _stream_output finally block now transitions from
  pausing/paused_graceful to crashed/stopped (not just running), and
  cleans up the drain signal file on process exit
- App.tsx: block Reset button and R shortcut during pausing/paused_graceful
- AgentThought/ProgressDashboard: keep thought bubble visible while pausing
- OrchestratorAvatar: add draining/paused cases to animation, glow, and
  description switch statements
- AgentMissionControl: show Draining/Paused badge text for new states
- registry.py: remove redundant type annotation to fix mypy no-redef
- process_manager.py: add type:ignore for SQLAlchemy Column assignment
- websocket.py: reclassify test-pass lines as 'testing' not 'success'
- review-pr.md: add post-review recommended action guidance

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 07:28:37 +02:00
Leon van Zyl
c114248b09 Merge pull request #183 from CaitlynByrne/feat/pause-drain
feat: add graceful pause (drain mode) for running agents
2026-02-12 07:22:01 +02:00
Caitlyn Byrne
656df0fd9a feat: add "blocked for human input" feature across full stack
Agents can now request structured human input when they encounter
genuine blockers (API keys, design choices, external configs). The
request is displayed in the UI with a dynamic form, and the human's
response is stored and made available when the agent resumes.

Changes span 21 files + 1 new component:
- Database: 3 new columns (needs_human_input, human_input_request,
  human_input_response) with migration
- MCP: new feature_request_human_input tool + guards on existing tools
- API: new resolve-human-input endpoint, 4th feature bucket
- Orchestrator: skip needs_human_input features in scheduling
- Progress: 4-tuple return from count_passing_tests
- WebSocket: needs_human_input count in progress messages
- UI: conditional 4th Kanban column, HumanInputForm component,
  amber status indicators, dependency graph support

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 14:11:35 -05:00
Caitlyn Byrne
9721368188 feat: add graceful pause (drain mode) for running agents
File-based signal (.pause_drain) lets the orchestrator finish current
work before pausing instead of hard-freezing the process tree. New
status states pausing/paused_graceful flow through WebSocket to the UI
where a Pause button, draining indicator, and Resume button are shown.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 13:37:22 -05:00
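
The file-based drain signal described in commit 9721368188 can be sketched in a few lines. This is a minimal illustration, assuming only what the diffs below state (the `.autoforge/.pause_drain` location from `get_pause_drain_path`); the helper names here are illustrative, not the project's API:

```python
from pathlib import Path


def pause_drain_path(project_dir: Path) -> Path:
    # Same location as get_pause_drain_path: a transient signal file
    return project_dir / ".autoforge" / ".pause_drain"


def request_graceful_pause(project_dir: Path) -> None:
    # Creating the file asks the orchestrator to finish current work, then pause
    path = pause_drain_path(project_dir)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.touch()


def resume(project_dir: Path) -> None:
    # Removing the file lets the orchestrator's wait loop fall through and resume
    pause_drain_path(project_dir).unlink(missing_ok=True)


def drain_requested(project_dir: Path) -> bool:
    return pause_drain_path(project_dir).exists()
```

Using the filesystem as the signaling channel keeps the pause request durable across server restarts and visible to any process that shares the project directory.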
33 changed files with 955 additions and 94 deletions

View File

@@ -72,4 +72,21 @@ Pull request(s): $ARGUMENTS
 - What this PR is actually about (one sentence)
 - The key concerns, if any (or "no significant concerns")
 - **Verdict: MERGE** / **MERGE (with minor follow-up)** / **DON'T MERGE** with a one-line reason
 - This section should be scannable in under 10 seconds
+10. **Post-Review Action**
+- Immediately after the TLDR, provide a `## Recommended Action` section
+- Based on the verdict, recommend one of the following actions:
+**If verdict is MERGE (no concerns):**
+- Recommend merging as-is. No further action needed.
+**If verdict is MERGE (with minor follow-up):**
+- If the concerns are low-risk and straightforward to fix (e.g., naming tweaks, small refactors, missing type annotations, minor style issues, trivial bug fixes), recommend merging the PR now and offer to immediately address the concerns in a follow-up commit directly on the target branch
+- List the specific changes you would make in the follow-up
+- Ask the user: *"Should I merge this PR and push a follow-up commit addressing these concerns?"*
+**If verdict is DON'T MERGE:**
+- If the blocking concerns are still relatively contained and you are confident you can resolve them quickly (e.g., a small bug fix, a missing validation, a straightforward architectural adjustment), recommend merging the PR and immediately addressing the issues in a follow-up commit — but only if the fixes are low-risk and well-understood
+- If the issues are too complex, risky, or require author input (e.g., design decisions, major refactors, unclear intent), recommend sending the PR back to the author with specific feedback on what needs to change
+- Be honest about your confidence level — if you're unsure whether you can address the concerns correctly, say so and defer to the author

View File

@@ -222,7 +222,7 @@ async def run_autonomous_agent(
     # Check if all features are already complete (before starting a new session)
     # Skip this check if running as initializer (needs to create features first)
     if not is_initializer and iteration == 1:
-        passing, in_progress, total = count_passing_tests(project_dir)
+        passing, in_progress, total, _nhi = count_passing_tests(project_dir)
         if total > 0 and passing == total:
             print("\n" + "=" * 70)
             print(" ALL FEATURES ALREADY COMPLETE!")
@@ -348,7 +348,7 @@ async def run_autonomous_agent(
     print_progress_summary(project_dir)
     # Check if all features are complete - exit gracefully if done
-    passing, in_progress, total = count_passing_tests(project_dir)
+    passing, in_progress, total, _nhi = count_passing_tests(project_dir)
     if total > 0 and passing == total:
         print("\n" + "=" * 70)
         print(" ALL FEATURES COMPLETE!")

View File

@@ -43,10 +43,10 @@ class Feature(Base):
     __tablename__ = "features"
-    # Composite index for common status query pattern (passes, in_progress)
+    # Composite index for common status query pattern (passes, in_progress, needs_human_input)
     # Used by feature_get_stats, get_ready_features, and other status queries
     __table_args__ = (
-        Index('ix_feature_status', 'passes', 'in_progress'),
+        Index('ix_feature_status', 'passes', 'in_progress', 'needs_human_input'),
     )
     id = Column(Integer, primary_key=True, index=True)
@@ -61,6 +61,11 @@ class Feature(Base):
     # NULL/empty = no dependencies (backwards compatible)
     dependencies = Column(JSON, nullable=True, default=None)
+    # Human input: agent can request structured input from a human
+    needs_human_input = Column(Boolean, nullable=False, default=False, index=True)
+    human_input_request = Column(JSON, nullable=True, default=None)  # Agent's structured request
+    human_input_response = Column(JSON, nullable=True, default=None)  # Human's response
     def to_dict(self) -> dict:
         """Convert feature to dictionary for JSON serialization."""
         return {
@@ -75,6 +80,10 @@ class Feature(Base):
             "in_progress": self.in_progress if self.in_progress is not None else False,
             # Dependencies: NULL/empty treated as empty list for backwards compat
             "dependencies": self.dependencies if self.dependencies else [],
+            # Human input fields
+            "needs_human_input": self.needs_human_input if self.needs_human_input is not None else False,
+            "human_input_request": self.human_input_request,
+            "human_input_response": self.human_input_response,
         }
     def get_dependencies_safe(self) -> list[int]:
@@ -302,6 +311,21 @@ def _is_network_path(path: Path) -> bool:
     return False
+def _migrate_add_human_input_columns(engine) -> None:
+    """Add human input columns to existing databases that don't have them."""
+    with engine.connect() as conn:
+        result = conn.execute(text("PRAGMA table_info(features)"))
+        columns = [row[1] for row in result.fetchall()]
+        if "needs_human_input" not in columns:
+            conn.execute(text("ALTER TABLE features ADD COLUMN needs_human_input BOOLEAN DEFAULT 0"))
+        if "human_input_request" not in columns:
+            conn.execute(text("ALTER TABLE features ADD COLUMN human_input_request TEXT DEFAULT NULL"))
+        if "human_input_response" not in columns:
+            conn.execute(text("ALTER TABLE features ADD COLUMN human_input_response TEXT DEFAULT NULL"))
+        conn.commit()
 def _migrate_add_schedules_tables(engine) -> None:
     """Create schedules and schedule_overrides tables if they don't exist."""
     from sqlalchemy import inspect
@@ -425,6 +449,7 @@ def create_database(project_dir: Path) -> tuple:
     _migrate_fix_null_boolean_fields(engine)
     _migrate_add_dependencies_column(engine)
     _migrate_add_testing_columns(engine)
+    _migrate_add_human_input_columns(engine)
     # Migrate to add schedules tables
     _migrate_add_schedules_tables(engine)
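
The migration pattern above — probe `PRAGMA table_info`, then `ALTER TABLE` only for missing columns — can be exercised standalone with plain `sqlite3`. A minimal sketch mirroring `_migrate_add_human_input_columns` without SQLAlchemy (the helper name is illustrative):

```python
import sqlite3


def add_column_if_missing(conn: sqlite3.Connection, table: str, column: str, ddl: str) -> bool:
    """Add `column` to `table` unless it already exists. Returns True if added."""
    existing = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column in existing:
        return False  # idempotent: safe to run on every startup
    conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {ddl}")
    return True
```

Because the probe makes the operation idempotent, the migration can run unconditionally in `create_database` without tracking schema versions.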

View File

@@ -39,6 +39,7 @@ assistant.db-wal
 assistant.db-shm
 .agent.lock
 .devserver.lock
+.pause_drain
 .claude_settings.json
 .claude_assistant_settings.json
 .claude_settings.expand.*.json
@@ -146,6 +147,15 @@ def get_claude_assistant_settings_path(project_dir: Path) -> Path:
     return _resolve_path(project_dir, ".claude_assistant_settings.json")
+def get_pause_drain_path(project_dir: Path) -> Path:
+    """Return the path to the ``.pause_drain`` signal file.
+    This file is created to request a graceful pause (drain mode).
+    Always uses the new location since it's a transient signal file.
+    """
+    return project_dir / ".autoforge" / ".pause_drain"
 def get_progress_cache_path(project_dir: Path) -> Path:
     """Resolve the path to ``.progress_cache``."""
     return _resolve_path(project_dir, ".progress_cache")

View File

@@ -151,17 +151,20 @@ def feature_get_stats() -> str:
         result = session.query(
             func.count(Feature.id).label('total'),
             func.sum(case((Feature.passes == True, 1), else_=0)).label('passing'),
-            func.sum(case((Feature.in_progress == True, 1), else_=0)).label('in_progress')
+            func.sum(case((Feature.in_progress == True, 1), else_=0)).label('in_progress'),
+            func.sum(case((Feature.needs_human_input == True, 1), else_=0)).label('needs_human_input')
         ).first()
         total = result.total or 0
         passing = int(result.passing or 0)
         in_progress = int(result.in_progress or 0)
+        needs_human_input = int(result.needs_human_input or 0)
         percentage = round((passing / total) * 100, 1) if total > 0 else 0.0
         return json.dumps({
             "passing": passing,
             "in_progress": in_progress,
+            "needs_human_input": needs_human_input,
             "total": total,
             "percentage": percentage
         })
@@ -221,6 +224,7 @@ def feature_get_summary(
                 "name": feature.name,
                 "passes": feature.passes,
                 "in_progress": feature.in_progress,
+                "needs_human_input": feature.needs_human_input if feature.needs_human_input is not None else False,
                 "dependencies": feature.dependencies or []
             })
     finally:
@@ -401,11 +405,11 @@ def feature_mark_in_progress(
     """
     session = get_session()
     try:
-        # Atomic claim: only succeeds if feature is not already claimed or passing
+        # Atomic claim: only succeeds if feature is not already claimed, passing, or blocked for human input
         result = session.execute(text("""
             UPDATE features
             SET in_progress = 1
-            WHERE id = :id AND passes = 0 AND in_progress = 0
+            WHERE id = :id AND passes = 0 AND in_progress = 0 AND needs_human_input = 0
         """), {"id": feature_id})
         session.commit()
@@ -418,6 +422,8 @@ def feature_mark_in_progress(
             return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
         if feature.in_progress:
             return json.dumps({"error": f"Feature with ID {feature_id} is already in-progress"})
+        if getattr(feature, 'needs_human_input', False):
+            return json.dumps({"error": f"Feature with ID {feature_id} is blocked waiting for human input"})
         return json.dumps({"error": "Failed to mark feature in-progress for unknown reason"})
     # Fetch the claimed feature
@@ -455,11 +461,14 @@ def feature_claim_and_get(
         if feature.passes:
             return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
+        if getattr(feature, 'needs_human_input', False):
+            return json.dumps({"error": f"Feature with ID {feature_id} is blocked waiting for human input"})
-        # Try atomic claim: only succeeds if not already claimed
+        # Try atomic claim: only succeeds if not already claimed and not blocked for human input
         result = session.execute(text("""
             UPDATE features
             SET in_progress = 1
-            WHERE id = :id AND passes = 0 AND in_progress = 0
+            WHERE id = :id AND passes = 0 AND in_progress = 0 AND needs_human_input = 0
         """), {"id": feature_id})
         session.commit()
@@ -806,6 +815,8 @@ def feature_get_ready(
         for f in all_features:
             if f.passes or f.in_progress:
                 continue
+            if getattr(f, 'needs_human_input', False):
+                continue
             deps = f.dependencies or []
             if all(dep_id in passing_ids for dep_id in deps):
                 ready.append(f.to_dict())
@@ -888,6 +899,8 @@ def feature_get_graph() -> str:
         if f.passes:
             status = "done"
+        elif getattr(f, 'needs_human_input', False):
+            status = "needs_human_input"
         elif blocking:
             status = "blocked"
         elif f.in_progress:
@@ -984,6 +997,103 @@ def feature_set_dependencies(
         return json.dumps({"error": f"Failed to set dependencies: {str(e)}"})
+@mcp.tool()
+def feature_request_human_input(
+    feature_id: Annotated[int, Field(description="The ID of the feature that needs human input", ge=1)],
+    prompt: Annotated[str, Field(min_length=1, description="Explain what you need from the human and why")],
+    fields: Annotated[list[dict], Field(min_length=1, description="List of input fields to collect")]
+) -> str:
+    """Request structured input from a human for a feature that is blocked.
+    Use this ONLY when the feature genuinely cannot proceed without human intervention:
+    - Creating API keys or external accounts
+    - Choosing between design approaches that require human preference
+    - Configuring external services the agent cannot access
+    - Providing credentials or secrets
+    Do NOT use this for issues you can solve yourself (debugging, reading docs, etc.).
+    The feature will be moved out of in_progress and into a "needs human input" state.
+    Once the human provides their response, the feature returns to the pending queue
+    and will include the human's response when you pick it up again.
+    Args:
+        feature_id: The ID of the feature that needs human input
+        prompt: A clear explanation of what you need and why
+        fields: List of input fields, each with:
+            - id (str): Unique field identifier
+            - label (str): Human-readable label
+            - type (str): "text", "textarea", "select", or "boolean" (default: "text")
+            - required (bool): Whether the field is required (default: true)
+            - placeholder (str, optional): Placeholder text
+            - options (list, optional): For select type: [{value, label}]
+    Returns:
+        JSON with success confirmation or error message
+    """
+    # Validate fields
+    VALID_FIELD_TYPES = {"text", "textarea", "select", "boolean"}
+    seen_ids: set[str] = set()
+    for i, field in enumerate(fields):
+        if "id" not in field or "label" not in field:
+            return json.dumps({"error": f"Field at index {i} missing required 'id' or 'label'"})
+        fid = field["id"]
+        flabel = field["label"]
+        if not isinstance(fid, str) or not fid.strip():
+            return json.dumps({"error": f"Field at index {i} has empty or invalid 'id'"})
+        if not isinstance(flabel, str) or not flabel.strip():
+            return json.dumps({"error": f"Field at index {i} has empty or invalid 'label'"})
+        if fid in seen_ids:
+            return json.dumps({"error": f"Duplicate field id '{fid}' at index {i}"})
+        seen_ids.add(fid)
+        ftype = field.get("type", "text")
+        if ftype not in VALID_FIELD_TYPES:
+            return json.dumps({"error": f"Field at index {i} has invalid type '{ftype}'. Must be one of: {', '.join(sorted(VALID_FIELD_TYPES))}"})
+        if ftype == "select" and not field.get("options"):
+            return json.dumps({"error": f"Field at index {i} is type 'select' but missing 'options' array"})
+    request_data = {
+        "prompt": prompt,
+        "fields": fields,
+    }
+    session = get_session()
+    try:
+        # Atomically set needs_human_input, clear in_progress, store request, clear previous response
+        result = session.execute(text("""
+            UPDATE features
+            SET needs_human_input = 1,
+                in_progress = 0,
+                human_input_request = :request,
+                human_input_response = NULL
+            WHERE id = :id AND passes = 0 AND in_progress = 1
+        """), {"id": feature_id, "request": json.dumps(request_data)})
+        session.commit()
+        if result.rowcount == 0:
+            feature = session.query(Feature).filter(Feature.id == feature_id).first()
+            if feature is None:
+                return json.dumps({"error": f"Feature with ID {feature_id} not found"})
+            if feature.passes:
+                return json.dumps({"error": f"Feature with ID {feature_id} is already passing"})
+            if not feature.in_progress:
+                return json.dumps({"error": f"Feature with ID {feature_id} is not in progress"})
+            return json.dumps({"error": "Failed to request human input for unknown reason"})
+        feature = session.query(Feature).filter(Feature.id == feature_id).first()
+        return json.dumps({
+            "success": True,
+            "feature_id": feature_id,
+            "name": feature.name,
+            "message": f"Feature '{feature.name}' is now blocked waiting for human input"
+        })
+    except Exception as e:
+        session.rollback()
+        return json.dumps({"error": f"Failed to request human input: {str(e)}"})
+    finally:
+        session.close()
 @mcp.tool()
 def ask_user(
     questions: Annotated[list[dict], Field(description="List of questions to ask, each with question, header, options (list of {label, description}), and multiSelect (bool)")]

View File

@@ -213,6 +213,9 @@ class ParallelOrchestrator:
         # Signal handlers only set this flag; cleanup happens in the main loop
         self._shutdown_requested = False
+        # Graceful pause (drain mode) flag
+        self._drain_requested = False
         # Session tracking for logging/debugging
         self.session_start_time: datetime | None = None
@@ -493,6 +496,9 @@ class ParallelOrchestrator:
         for fd in feature_dicts:
             if not fd.get("in_progress") or fd.get("passes"):
                 continue
+            # Skip if blocked for human input
+            if fd.get("needs_human_input"):
+                continue
             # Skip if already running in this orchestrator instance
             if fd["id"] in running_ids:
                 continue
@@ -537,11 +543,14 @@ class ParallelOrchestrator:
         running_ids.update(batch_ids)
         ready = []
-        skipped_reasons = {"passes": 0, "in_progress": 0, "running": 0, "failed": 0, "deps": 0}
+        skipped_reasons = {"passes": 0, "in_progress": 0, "running": 0, "failed": 0, "deps": 0, "needs_human_input": 0}
         for fd in feature_dicts:
             if fd.get("passes"):
                 skipped_reasons["passes"] += 1
                 continue
+            if fd.get("needs_human_input"):
+                skipped_reasons["needs_human_input"] += 1
+                continue
             if fd.get("in_progress"):
                 skipped_reasons["in_progress"] += 1
                 continue
@@ -1387,6 +1396,9 @@ class ParallelOrchestrator:
         # Must happen before any debug_log.log() calls
         debug_log.start_session()
+        # Clear any stale drain signal from a previous session
+        self._clear_drain_signal()
         # Log startup to debug file
         debug_log.section("ORCHESTRATOR STARTUP")
         debug_log.log("STARTUP", "Orchestrator run_loop starting",
@@ -1508,6 +1520,34 @@ class ParallelOrchestrator:
                 print("\nAll features complete!", flush=True)
                 break
+            # --- Graceful pause (drain mode) ---
+            if not self._drain_requested and self._check_drain_signal():
+                self._drain_requested = True
+                print("Graceful pause requested - draining running agents...", flush=True)
+                debug_log.log("DRAIN", "Graceful pause requested, draining running agents")
+            if self._drain_requested:
+                with self._lock:
+                    coding_count = len(self.running_coding_agents)
+                    testing_count = len(self.running_testing_agents)
+                if coding_count == 0 and testing_count == 0:
+                    print("All agents drained - paused.", flush=True)
+                    debug_log.log("DRAIN", "All agents drained, entering paused state")
+                    # Wait until signal file is removed (resume) or shutdown
+                    while self._check_drain_signal() and self.is_running and not self._shutdown_requested:
+                        await asyncio.sleep(1)
+                    if not self.is_running or self._shutdown_requested:
+                        break
+                    self._drain_requested = False
+                    print("Resuming from graceful pause...", flush=True)
+                    debug_log.log("DRAIN", "Resuming from graceful pause")
+                    continue
+                else:
+                    debug_log.log("DRAIN", f"Waiting for agents to finish: coding={coding_count}, testing={testing_count}")
+                    await self._wait_for_agent_completion()
+                    continue
             # Maintain testing agents independently (runs every iteration)
             self._maintain_testing_agents(feature_dicts)
@@ -1632,6 +1672,17 @@ class ParallelOrchestrator:
             "yolo_mode": self.yolo_mode,
         }
+    def _check_drain_signal(self) -> bool:
+        """Check if the graceful pause (drain) signal file exists."""
+        from autoforge_paths import get_pause_drain_path
+        return get_pause_drain_path(self.project_dir).exists()
+    def _clear_drain_signal(self) -> None:
+        """Delete the drain signal file and reset the flag."""
+        from autoforge_paths import get_pause_drain_path
+        get_pause_drain_path(self.project_dir).unlink(missing_ok=True)
+        self._drain_requested = False
     def cleanup(self) -> None:
         """Clean up database resources. Safe to call multiple times.
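
The run-loop drain handling above reduces to a small state machine. A stripped-down async sketch, with the signal check, agent counts, and agent-completion wait supplied as caller-provided stubs (the function and parameter names here are illustrative):

```python
import asyncio


async def drain_loop(check_signal, count_agents, wait_for_agents, log=print):
    """Distilled drain state machine: request -> drain -> paused -> resume.

    check_signal() -> bool, count_agents() -> int, and wait_for_agents()
    (awaits until in-flight work finishes) stand in for the orchestrator's
    real methods.
    """
    drain_requested = False
    while True:
        if not drain_requested and check_signal():
            drain_requested = True
            log("Graceful pause requested - draining running agents...")
        if drain_requested:
            if count_agents() == 0:
                log("All agents drained - paused.")
                while check_signal():          # paused until the file is removed
                    await asyncio.sleep(0.01)
                drain_requested = False
                log("Resuming from graceful pause...")
            else:
                await wait_for_agents()        # let in-flight work finish
            continue
        return  # normal scheduling would happen here
```

The key property mirrored from the diff: new work is never scheduled while draining, so the pause lands on a clean boundary between agent sessions.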

View File

@@ -62,54 +62,71 @@ def has_features(project_dir: Path) -> bool:
     return False
-def count_passing_tests(project_dir: Path) -> tuple[int, int, int]:
+def count_passing_tests(project_dir: Path) -> tuple[int, int, int, int]:
     """
-    Count passing, in_progress, and total tests via direct database access.
+    Count passing, in_progress, total, and needs_human_input tests via direct database access.
     Args:
         project_dir: Directory containing the project
     Returns:
-        (passing_count, in_progress_count, total_count)
+        (passing_count, in_progress_count, total_count, needs_human_input_count)
     """
     from autoforge_paths import get_features_db_path
     db_file = get_features_db_path(project_dir)
     if not db_file.exists():
-        return 0, 0, 0
+        return 0, 0, 0, 0
     try:
         with closing(_get_connection(db_file)) as conn:
             cursor = conn.cursor()
-            # Single aggregate query instead of 3 separate COUNT queries
-            # Handle case where in_progress column doesn't exist yet (legacy DBs)
+            # Single aggregate query instead of separate COUNT queries
+            # Handle case where columns don't exist yet (legacy DBs)
             try:
                 cursor.execute("""
                     SELECT
                         COUNT(*) as total,
                         SUM(CASE WHEN passes = 1 THEN 1 ELSE 0 END) as passing,
-                        SUM(CASE WHEN in_progress = 1 THEN 1 ELSE 0 END) as in_progress
+                        SUM(CASE WHEN in_progress = 1 THEN 1 ELSE 0 END) as in_progress,
+                        SUM(CASE WHEN needs_human_input = 1 THEN 1 ELSE 0 END) as needs_human_input
                     FROM features
                 """)
                 row = cursor.fetchone()
                 total = row[0] or 0
                 passing = row[1] or 0
                 in_progress = row[2] or 0
+                needs_human_input = row[3] or 0
             except sqlite3.OperationalError:
-                # Fallback for databases without in_progress column
-                cursor.execute("""
-                    SELECT
-                        COUNT(*) as total,
-                        SUM(CASE WHEN passes = 1 THEN 1 ELSE 0 END) as passing
-                    FROM features
-                """)
-                row = cursor.fetchone()
-                total = row[0] or 0
-                passing = row[1] or 0
-                in_progress = 0
-            return passing, in_progress, total
+                # Fallback for databases without newer columns
+                try:
+                    cursor.execute("""
+                        SELECT
+                            COUNT(*) as total,
+                            SUM(CASE WHEN passes = 1 THEN 1 ELSE 0 END) as passing,
+                            SUM(CASE WHEN in_progress = 1 THEN 1 ELSE 0 END) as in_progress
+                        FROM features
+                    """)
+                    row = cursor.fetchone()
+                    total = row[0] or 0
+                    passing = row[1] or 0
+                    in_progress = row[2] or 0
+                    needs_human_input = 0
+                except sqlite3.OperationalError:
+                    cursor.execute("""
+                        SELECT
+                            COUNT(*) as total,
+                            SUM(CASE WHEN passes = 1 THEN 1 ELSE 0 END) as passing
+                        FROM features
+                    """)
+                    row = cursor.fetchone()
+                    total = row[0] or 0
+                    passing = row[1] or 0
+                    in_progress = 0
+                    needs_human_input = 0
+            return passing, in_progress, total, needs_human_input
     except Exception as e:
         print(f"[Database error in count_passing_tests: {e}]")
-        return 0, 0, 0
+        return 0, 0, 0, 0
 def get_all_passing_features(project_dir: Path) -> list[dict]:
@@ -234,7 +251,7 @@ def print_session_header(session_num: int, is_initializer: bool) -> None:
 def print_progress_summary(project_dir: Path) -> None:
     """Print a summary of current progress."""
-    passing, in_progress, total = count_passing_tests(project_dir)
+    passing, in_progress, total, _needs_human_input = count_passing_tests(project_dir)
     if total > 0:
         percentage = (passing / total) * 100
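
The happy-path aggregate query that now yields a 4-tuple can be tried against an in-memory database; the schema below is a minimal stand-in for the real `features` table, and the function name is illustrative:

```python
import sqlite3


def count_features(conn: sqlite3.Connection) -> tuple[int, int, int, int]:
    # Mirrors count_passing_tests' happy path: one aggregate query
    # instead of four separate COUNT round-trips
    row = conn.execute("""
        SELECT
            COUNT(*),
            SUM(CASE WHEN passes = 1 THEN 1 ELSE 0 END),
            SUM(CASE WHEN in_progress = 1 THEN 1 ELSE 0 END),
            SUM(CASE WHEN needs_human_input = 1 THEN 1 ELSE 0 END)
        FROM features
    """).fetchone()
    # SUM over zero rows yields NULL, hence the `or 0` guards
    return (row[1] or 0, row[2] or 0, row[0] or 0, row[3] or 0)
```

The `or 0` guards matter: on an empty table `COUNT(*)` is 0 but each `SUM(...)` is NULL, which is exactly why the real function returns `0, 0, 0, 0` rather than Nones.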

View File

@@ -743,7 +743,7 @@ def get_effective_sdk_env() -> dict[str, str]:
             sdk_env[var] = value
         return sdk_env
-    sdk_env: dict[str, str] = {}
+    sdk_env = {}
     # Explicitly clear credentials that could leak from the server process env.
     # For providers using ANTHROPIC_AUTH_TOKEN (GLM, Custom), clear ANTHROPIC_API_KEY.

View File

@@ -175,3 +175,31 @@ async def resume_agent(project_name: str):
         status=manager.status,
         message=message,
     )
+@router.post("/graceful-pause", response_model=AgentActionResponse)
+async def graceful_pause_agent(project_name: str):
+    """Request a graceful pause (drain mode) - finish current work then pause."""
+    manager = get_project_manager(project_name)
+    success, message = await manager.graceful_pause()
+    return AgentActionResponse(
+        success=success,
+        status=manager.status,
+        message=message,
+    )
+@router.post("/graceful-resume", response_model=AgentActionResponse)
+async def graceful_resume_agent(project_name: str):
+    """Resume from a graceful pause."""
+    manager = get_project_manager(project_name)
+    success, message = await manager.graceful_resume()
+    return AgentActionResponse(
+        success=success,
+        status=manager.status,
+        message=message,
+    )

View File

@@ -23,6 +23,7 @@ from ..schemas import (
     FeatureListResponse,
     FeatureResponse,
     FeatureUpdate,
+    HumanInputResponse,
 )
 from ..utils.project_helpers import get_project_path as _get_project_path
 from ..utils.validation import validate_project_name
@@ -104,6 +105,9 @@ def feature_to_response(f, passing_ids: set[int] | None = None) -> FeatureRespon
         in_progress=f.in_progress if f.in_progress is not None else False,
         blocked=blocked,
         blocking_dependencies=blocking,
+        needs_human_input=getattr(f, 'needs_human_input', False) or False,
+        human_input_request=getattr(f, 'human_input_request', None),
+        human_input_response=getattr(f, 'human_input_response', None),
     )
@@ -143,11 +147,14 @@ async def list_features(project_name: str):
pending = [] pending = []
in_progress = [] in_progress = []
done = [] done = []
needs_human_input_list = []
for f in all_features: for f in all_features:
feature_response = feature_to_response(f, passing_ids) feature_response = feature_to_response(f, passing_ids)
if f.passes: if f.passes:
done.append(feature_response) done.append(feature_response)
elif getattr(f, 'needs_human_input', False):
needs_human_input_list.append(feature_response)
elif f.in_progress: elif f.in_progress:
in_progress.append(feature_response) in_progress.append(feature_response)
else: else:
@@ -157,6 +164,7 @@ async def list_features(project_name: str):
pending=pending, pending=pending,
in_progress=in_progress, in_progress=in_progress,
done=done, done=done,
needs_human_input=needs_human_input_list,
) )
except HTTPException: except HTTPException:
raise raise
@@ -341,9 +349,11 @@ async def get_dependency_graph(project_name: str):
             deps = f.dependencies or []
             blocking = [d for d in deps if d not in passing_ids]
-            status: Literal["pending", "in_progress", "done", "blocked"]
+            status: Literal["pending", "in_progress", "done", "blocked", "needs_human_input"]
             if f.passes:
                 status = "done"
+            elif getattr(f, 'needs_human_input', False):
+                status = "needs_human_input"
             elif blocking:
                 status = "blocked"
             elif f.in_progress:
@@ -564,6 +574,71 @@ async def skip_feature(project_name: str, feature_id: int):
         raise HTTPException(status_code=500, detail="Failed to skip feature")
+
+
+@router.post("/{feature_id}/resolve-human-input", response_model=FeatureResponse)
+async def resolve_human_input(project_name: str, feature_id: int, response: HumanInputResponse):
+    """Resolve a human input request for a feature.
+
+    Validates all required fields have values, stores the response,
+    and returns the feature to the pending queue for agents to pick up.
+    """
+    project_name = validate_project_name(project_name)
+    project_dir = _get_project_path(project_name)
+    if not project_dir:
+        raise HTTPException(status_code=404, detail=f"Project '{project_name}' not found in registry")
+    if not project_dir.exists():
+        raise HTTPException(status_code=404, detail="Project directory not found")
+
+    _, Feature = _get_db_classes()
+    try:
+        with get_db_session(project_dir) as session:
+            feature = session.query(Feature).filter(Feature.id == feature_id).first()
+            if not feature:
+                raise HTTPException(status_code=404, detail=f"Feature {feature_id} not found")
+            if not getattr(feature, 'needs_human_input', False):
+                raise HTTPException(status_code=400, detail="Feature is not waiting for human input")
+
+            # Validate required fields
+            request_data = feature.human_input_request
+            if request_data and isinstance(request_data, dict):
+                for field_def in request_data.get("fields", []):
+                    if field_def.get("required", True):
+                        field_id = field_def.get("id")
+                        if field_id not in response.fields or response.fields[field_id] in (None, ""):
+                            raise HTTPException(
+                                status_code=400,
+                                detail=f"Required field '{field_def.get('label', field_id)}' is missing"
+                            )
+
+            # Store response and return to pending queue
+            from datetime import datetime, timezone
+            response_data = {
+                "fields": {k: v for k, v in response.fields.items()},
+                "responded_at": datetime.now(timezone.utc).isoformat(),
+            }
+            feature.human_input_response = response_data
+            feature.needs_human_input = False
+            # Keep in_progress=False, passes=False so it returns to pending
+            session.commit()
+            session.refresh(feature)
+
+            # Compute passing IDs for response
+            all_features = session.query(Feature).all()
+            passing_ids = {f.id for f in all_features if f.passes}
+            return feature_to_response(feature, passing_ids)
+    except HTTPException:
+        raise
+    except Exception:
+        logger.exception("Failed to resolve human input")
+        raise HTTPException(status_code=500, detail="Failed to resolve human input")
+
+
 # ============================================================================
 # Dependency Management Endpoints
 # ============================================================================

View File

@@ -102,7 +102,7 @@ def get_project_stats(project_dir: Path) -> ProjectStats:
     """Get statistics for a project."""
     _init_imports()
     assert _count_passing_tests is not None  # guaranteed by _init_imports()
-    passing, in_progress, total = _count_passing_tests(project_dir)
+    passing, in_progress, total, _needs_human_input = _count_passing_tests(project_dir)
     percentage = (passing / total * 100) if total > 0 else 0.0
     return ProjectStats(
         passing=passing,

View File

@@ -120,16 +120,41 @@ class FeatureResponse(FeatureBase):
     in_progress: bool
     blocked: bool = False  # Computed: has unmet dependencies
     blocking_dependencies: list[int] = Field(default_factory=list)  # Computed
+    needs_human_input: bool = False
+    human_input_request: dict | None = None
+    human_input_response: dict | None = None

     class Config:
         from_attributes = True
+
+
+class HumanInputField(BaseModel):
+    """Schema for a single human input field."""
+    id: str
+    label: str
+    type: Literal["text", "textarea", "select", "boolean"] = "text"
+    required: bool = True
+    placeholder: str | None = None
+    options: list[dict] | None = None  # For select: [{value, label}]
+
+
+class HumanInputRequest(BaseModel):
+    """Schema for an agent's human input request."""
+    prompt: str
+    fields: list[HumanInputField]
+
+
+class HumanInputResponse(BaseModel):
+    """Schema for a human's response to an input request."""
+    fields: dict[str, str | bool | list[str]]


 class FeatureListResponse(BaseModel):
     """Response containing list of features organized by status."""
     pending: list[FeatureResponse]
     in_progress: list[FeatureResponse]
     done: list[FeatureResponse]
+    needs_human_input: list[FeatureResponse] = Field(default_factory=list)


 class FeatureBulkCreate(BaseModel):
@@ -153,7 +178,7 @@ class DependencyGraphNode(BaseModel):
     id: int
     name: str
     category: str
-    status: Literal["pending", "in_progress", "done", "blocked"]
+    status: Literal["pending", "in_progress", "done", "blocked", "needs_human_input"]
     priority: int
     dependencies: list[int]
@@ -217,7 +242,7 @@ class AgentStartRequest(BaseModel):

 class AgentStatus(BaseModel):
     """Current agent status."""
-    status: Literal["stopped", "running", "paused", "crashed"]
+    status: Literal["stopped", "running", "paused", "crashed", "pausing", "paused_graceful"]
     pid: int | None = None
     started_at: datetime | None = None
     yolo_mode: bool = False
@@ -257,6 +282,7 @@ class WSProgressMessage(BaseModel):
     in_progress: int
     total: int
     percentage: float
+    needs_human_input: int = 0


 class WSFeatureUpdateMessage(BaseModel):
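The commit message mentions MCP-side validation on top of these schemas: the field `type` must be one of the enum values, `select` fields must carry `options`, and field ids/labels must be unique and non-empty. A plain-dict sketch of those rules (the actual server-side function name and error strings are assumptions, not the PR's code):

```python
ALLOWED_TYPES = {"text", "textarea", "select", "boolean"}


def validate_input_fields(fields: list[dict]) -> list[str]:
    """Return a list of human-readable validation errors (empty list = valid)."""
    errors = []
    seen_ids, seen_labels = set(), set()
    for f in fields:
        fid, label = f.get("id", ""), f.get("label", "")
        ftype = f.get("type", "text")
        if ftype not in ALLOWED_TYPES:
            errors.append(f"unknown field type: {ftype}")
        if ftype == "select" and not f.get("options"):
            errors.append(f"select field '{fid}' needs options")
        if not fid or fid in seen_ids:
            errors.append(f"duplicate or empty field id: '{fid}'")
        if not label or label in seen_labels:
            errors.append(f"duplicate or empty label: '{label}'")
        seen_ids.add(fid)
        seen_labels.add(label)
    return errors
```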

View File

@@ -77,7 +77,7 @@ class AgentProcessManager:
         self.project_dir = project_dir
         self.root_dir = root_dir
         self.process: subprocess.Popen | None = None
-        self._status: Literal["stopped", "running", "paused", "crashed"] = "stopped"
+        self._status: Literal["stopped", "running", "paused", "crashed", "pausing", "paused_graceful"] = "stopped"
         self.started_at: datetime | None = None
         self._output_task: asyncio.Task | None = None
         self.yolo_mode: bool = False  # YOLO mode for rapid prototyping
@@ -96,11 +96,11 @@ class AgentProcessManager:
         self.lock_file = get_agent_lock_path(self.project_dir)

     @property
-    def status(self) -> Literal["stopped", "running", "paused", "crashed"]:
+    def status(self) -> Literal["stopped", "running", "paused", "crashed", "pausing", "paused_graceful"]:
         return self._status

     @status.setter
-    def status(self, value: Literal["stopped", "running", "paused", "crashed"]):
+    def status(self, value: Literal["stopped", "running", "paused", "crashed", "pausing", "paused_graceful"]):
         old_status = self._status
         self._status = value
         if old_status != value:
@@ -277,7 +277,7 @@ class AgentProcessManager:
             ).all()
             if stuck:
                 for f in stuck:
-                    f.in_progress = False
+                    f.in_progress = False  # type: ignore[assignment]
                 session.commit()
                 logger.info(
                     "Cleaned up %d stuck feature(s) for %s",
@@ -330,6 +330,12 @@ class AgentProcessManager:
                         for help_line in AUTH_ERROR_HELP.strip().split('\n'):
                             await self._broadcast_output(help_line)

+                    # Detect graceful pause status transitions from orchestrator output
+                    if "All agents drained - paused." in decoded:
+                        self.status = "paused_graceful"
+                    elif "Resuming from graceful pause..." in decoded:
+                        self.status = "running"
+
                     await self._broadcast_output(sanitized)

         except asyncio.CancelledError:
@@ -340,7 +346,7 @@ class AgentProcessManager:
             # Check if process ended
             if self.process and self.process.poll() is not None:
                 exit_code = self.process.returncode
-                if exit_code != 0 and self.status == "running":
+                if exit_code != 0 and self.status in ("running", "pausing", "paused_graceful"):
                     # Check buffered output for auth errors if we haven't detected one yet
                     if not auth_error_detected:
                         combined_output = '\n'.join(output_buffer)
@@ -348,10 +354,16 @@ class AgentProcessManager:
                             for help_line in AUTH_ERROR_HELP.strip().split('\n'):
                                 await self._broadcast_output(help_line)
                     self.status = "crashed"
-                elif self.status == "running":
+                elif self.status in ("running", "pausing", "paused_graceful"):
                     self.status = "stopped"
                 self._cleanup_stale_features()
                 self._remove_lock()
+                # Clean up drain signal file if present
+                try:
+                    from autoforge_paths import get_pause_drain_path
+                    get_pause_drain_path(self.project_dir).unlink(missing_ok=True)
+                except Exception:
+                    pass

     async def start(
         self,
@@ -377,7 +389,7 @@ class AgentProcessManager:
         Returns:
             Tuple of (success, message)
         """
-        if self.status in ("running", "paused"):
+        if self.status in ("running", "paused", "pausing", "paused_graceful"):
             return False, f"Agent is already {self.status}"

         if not self._check_lock():
@@ -526,6 +538,12 @@ class AgentProcessManager:
             self._cleanup_stale_features()
             self._remove_lock()

+            # Clean up drain signal file if present
+            try:
+                from autoforge_paths import get_pause_drain_path
+                get_pause_drain_path(self.project_dir).unlink(missing_ok=True)
+            except Exception:
+                pass
+
             self.status = "stopped"
             self.process = None
             self.started_at = None
@@ -586,6 +604,47 @@ class AgentProcessManager:
             logger.exception("Failed to resume agent")
             return False, f"Failed to resume agent: {e}"

+    async def graceful_pause(self) -> tuple[bool, str]:
+        """Request a graceful pause (drain mode).
+
+        Creates a signal file that the orchestrator polls. Running agents
+        finish their current work before the orchestrator enters a paused state.
+
+        Returns:
+            Tuple of (success, message)
+        """
+        if not self.process or self.status not in ("running",):
+            return False, "Agent is not running"
+        try:
+            from autoforge_paths import get_pause_drain_path
+            drain_path = get_pause_drain_path(self.project_dir)
+            drain_path.parent.mkdir(parents=True, exist_ok=True)
+            drain_path.write_text(str(self.process.pid))
+            self.status = "pausing"
+            return True, "Graceful pause requested"
+        except Exception as e:
+            logger.exception("Failed to request graceful pause")
+            return False, f"Failed to request graceful pause: {e}"
+
+    async def graceful_resume(self) -> tuple[bool, str]:
+        """Resume from a graceful pause by removing the drain signal file.
+
+        Returns:
+            Tuple of (success, message)
+        """
+        if not self.process or self.status not in ("pausing", "paused_graceful"):
+            return False, "Agent is not in a graceful pause state"
+        try:
+            from autoforge_paths import get_pause_drain_path
+            get_pause_drain_path(self.project_dir).unlink(missing_ok=True)
+            self.status = "running"
+            return True, "Agent resumed from graceful pause"
+        except Exception as e:
+            logger.exception("Failed to resume from graceful pause")
+            return False, f"Failed to resume: {e}"
+
     async def healthcheck(self) -> bool:
         """
         Check if the agent process is still alive.
@@ -601,8 +660,14 @@ class AgentProcessManager:
         poll = self.process.poll()
         if poll is not None:
             # Process has terminated
-            if self.status in ("running", "paused"):
+            if self.status in ("running", "paused", "pausing", "paused_graceful"):
                 self._cleanup_stale_features()
+                # Clean up drain signal file if present
+                try:
+                    from autoforge_paths import get_pause_drain_path
+                    get_pause_drain_path(self.project_dir).unlink(missing_ok=True)
+                except Exception:
+                    pass
                 self.status = "crashed"
             self._remove_lock()
             return False
@@ -687,8 +752,14 @@ def cleanup_orphaned_locks() -> int:
         if not project_path.exists():
             continue

+        # Clean up stale drain signal files
+        from autoforge_paths import get_autoforge_dir, get_pause_drain_path
+        drain_file = get_pause_drain_path(project_path)
+        if drain_file.exists():
+            drain_file.unlink(missing_ok=True)
+            logger.info("Removed stale drain signal file for project '%s'", name)
+
         # Check both legacy and new locations for lock files
-        from autoforge_paths import get_autoforge_dir
         lock_locations = [
             project_path / ".agent.lock",
             get_autoforge_dir(project_path) / ".agent.lock",
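The guards scattered through the manager above amount to a small state machine over the expanded status set. A condensed sketch of those transition rules (the function names here are illustrative, not methods of `AgentProcessManager`):

```python
def can_start(status: str) -> bool:
    """start() rejects any state that still has a live or draining process."""
    return status not in ("running", "paused", "pausing", "paused_graceful")


def can_graceful_pause(status: str) -> bool:
    """Drain mode is only meaningful while the agent is running normally."""
    return status == "running"


def can_graceful_resume(status: str) -> bool:
    """Resume applies whether the drain is still in flight or already complete."""
    return status in ("pausing", "paused_graceful")


def status_after_exit(status: str, exit_code: int) -> str:
    """Terminal state when the process exits while in an active state."""
    if exit_code != 0 and status in ("running", "pausing", "paused_graceful"):
        return "crashed"
    if status in ("running", "pausing", "paused_graceful"):
        return "stopped"
    return status  # e.g. an explicit "paused" is left for other code paths
```

Treating `pausing`/`paused_graceful` as active states everywhere is the point of this follow-up commit: a process that dies mid-drain must still land in `crashed`/`stopped`, not linger as "pausing" forever.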

View File

@@ -61,7 +61,7 @@ THOUGHT_PATTERNS = [
     (re.compile(r'(?:Testing|Verifying|Running tests|Validating)\s+(.+)', re.I), 'testing'),
     (re.compile(r'(?:Error|Failed|Cannot|Unable to|Exception)\s+(.+)', re.I), 'struggling'),
     # Test results
-    (re.compile(r'(?:PASS|passed|success)', re.I), 'success'),
+    (re.compile(r'(?:PASS|passed|success)', re.I), 'testing'),
     (re.compile(r'(?:FAIL|failed|error)', re.I), 'struggling'),
 ]
@@ -78,6 +78,9 @@ ORCHESTRATOR_PATTERNS = {
     'testing_complete': re.compile(r'Feature #(\d+) testing (completed|failed)'),
     'all_complete': re.compile(r'All features complete'),
     'blocked_features': re.compile(r'(\d+) blocked by dependencies'),
+    'drain_start': re.compile(r'Graceful pause requested'),
+    'drain_complete': re.compile(r'All agents drained'),
+    'drain_resume': re.compile(r'Resuming from graceful pause'),
 }
@@ -562,6 +565,30 @@ class OrchestratorTracker:
                 'All features complete!'
             )

+        # Graceful pause (drain mode) events
+        elif ORCHESTRATOR_PATTERNS['drain_start'].search(line):
+            self.state = 'draining'
+            update = self._create_update(
+                'drain_start',
+                'Draining active agents...'
+            )
+        elif ORCHESTRATOR_PATTERNS['drain_complete'].search(line):
+            self.state = 'paused'
+            self.coding_agents = 0
+            self.testing_agents = 0
+            update = self._create_update(
+                'drain_complete',
+                'All agents drained. Paused.'
+            )
+        elif ORCHESTRATOR_PATTERNS['drain_resume'].search(line):
+            self.state = 'scheduling'
+            update = self._create_update(
+                'drain_resume',
+                'Resuming feature scheduling'
+            )
+
         return update

     def _create_update(
@@ -689,15 +716,19 @@ async def poll_progress(websocket: WebSocket, project_name: str, project_dir: Pa
     last_in_progress = -1
     last_total = -1
+    last_needs_human_input = -1

     while True:
         try:
-            passing, in_progress, total = count_passing_tests(project_dir)
+            passing, in_progress, total, needs_human_input = count_passing_tests(project_dir)

             # Only send if changed
-            if passing != last_passing or in_progress != last_in_progress or total != last_total:
+            if (passing != last_passing or in_progress != last_in_progress
+                    or total != last_total or needs_human_input != last_needs_human_input):
                 last_passing = passing
                 last_in_progress = in_progress
                 last_total = total
+                last_needs_human_input = needs_human_input
                 percentage = (passing / total * 100) if total > 0 else 0

                 await websocket.send_json({
@@ -706,6 +737,7 @@ async def poll_progress(websocket: WebSocket, project_name: str, project_dir: Pa
                     "in_progress": in_progress,
                     "total": total,
                     "percentage": round(percentage, 1),
+                    "needs_human_input": needs_human_input,
                 })

             await asyncio.sleep(2)  # Poll every 2 seconds
@@ -858,7 +890,7 @@ async def project_websocket(websocket: WebSocket, project_name: str):
     # Send initial progress
     count_passing_tests = _get_count_passing_tests()
-    passing, in_progress, total = count_passing_tests(project_dir)
+    passing, in_progress, total, needs_human_input = count_passing_tests(project_dir)
     percentage = (passing / total * 100) if total > 0 else 0
     await websocket.send_json({
         "type": "progress",
@@ -866,6 +898,7 @@ async def project_websocket(websocket: WebSocket, project_name: str):
         "in_progress": in_progress,
         "total": total,
         "percentage": round(percentage, 1),
+        "needs_human_input": needs_human_input,
     })

     # Keep connection alive and handle incoming messages
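The change-only broadcast in `poll_progress` now tracks four counters instead of three. Stripped of the WebSocket and asyncio plumbing, the dedup logic looks like this (a sketch over precomputed samples; the real counters come from `count_passing_tests`):

```python
def progress_events(samples):
    """Yield a progress payload only when any of the four counters changes.

    samples: iterable of (passing, in_progress, total, needs_human_input).
    """
    last = None
    for passing, in_progress, total, needs_human_input in samples:
        current = (passing, in_progress, total, needs_human_input)
        if current == last:
            continue  # unchanged since last poll: stay quiet
        last = current
        yield {
            "type": "progress",
            "passing": passing,
            "in_progress": in_progress,
            "total": total,
            "percentage": round(passing / total * 100, 1) if total else 0,
            "needs_human_input": needs_human_input,
        }
```

Before this PR, a feature flipping between in_progress and needs_human_input could leave the other three counters unchanged, so no update was sent; adding the fourth counter to the comparison is what makes blocked features show up promptly.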

ui/package-lock.json generated
View File

@@ -96,6 +96,7 @@
       "integrity": "sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "@babel/code-frame": "^7.27.1",
         "@babel/generator": "^7.28.5",
@@ -2825,6 +2826,7 @@
       "integrity": "sha512-MciR4AKGHWl7xwxkBa6xUGxQJ4VBOmPTF7sL+iGzuahOFaO0jHCsuEfS80pan1ef4gWId1oWOweIhrDEYLuaOw==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "undici-types": "~6.21.0"
       }
@@ -2834,6 +2836,7 @@
       "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.9.tgz",
       "integrity": "sha512-Lpo8kgb/igvMIPeNV2rsYKTgaORYdO1XGVZ4Qz3akwOj0ySGYMPlQWa8BaLn0G63D1aSaAQ5ldR06wCpChQCjA==",
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "csstype": "^3.2.2"
       }
@@ -2844,6 +2847,7 @@
       "integrity": "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==",
       "devOptional": true,
       "license": "MIT",
+      "peer": true,
       "peerDependencies": {
         "@types/react": "^19.2.0"
       }
@@ -2899,6 +2903,7 @@
       "integrity": "sha512-3xP4XzzDNQOIqBMWogftkwxhg5oMKApqY0BAflmLZiFYHqyhSOxv/cd/zPQLTcCXr4AkaKb25joocY0BD1WC6A==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "@typescript-eslint/scope-manager": "8.51.0",
         "@typescript-eslint/types": "8.51.0",
@@ -3209,6 +3214,7 @@
       "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "bin": {
         "acorn": "bin/acorn"
       },
@@ -3340,6 +3346,7 @@
         }
       ],
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "baseline-browser-mapping": "^2.9.0",
         "caniuse-lite": "^1.0.30001759",
@@ -3611,6 +3618,7 @@
       "resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-3.0.0.tgz",
       "integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==",
       "license": "ISC",
+      "peer": true,
       "engines": {
         "node": ">=12"
       }
@@ -3836,6 +3844,7 @@
       "integrity": "sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "@eslint-community/eslint-utils": "^4.8.0",
         "@eslint-community/regexpp": "^4.12.1",
@@ -5836,6 +5845,7 @@
       "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "engines": {
         "node": ">=12"
       },
@@ -5951,6 +5961,7 @@
       "resolved": "https://registry.npmjs.org/react/-/react-19.2.3.tgz",
       "integrity": "sha512-Ku/hhYbVjOQnXDZFv2+RibmLFGwFdeeKHFcOTlrt7xplBnya5OGn/hIRDsqDiSUcfORsDC7MPxwork8jBwsIWA==",
       "license": "MIT",
+      "peer": true,
       "engines": {
         "node": ">=0.10.0"
       }
@@ -5960,6 +5971,7 @@
       "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.3.tgz",
       "integrity": "sha512-yELu4WmLPw5Mr/lmeEpox5rw3RETacE++JgHqQzd2dg+YbJuat3jH4ingc+WPZhxaoFzdv9y33G+F7Nl5O0GBg==",
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "scheduler": "^0.27.0"
       },
@@ -6424,6 +6436,7 @@
       "integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==",
       "dev": true,
       "license": "Apache-2.0",
+      "peer": true,
       "bin": {
         "tsc": "bin/tsc",
         "tsserver": "bin/tsserver"
@@ -6677,6 +6690,7 @@
       "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==",
       "dev": true,
       "license": "MIT",
+      "peer": true,
       "dependencies": {
         "esbuild": "^0.27.0",
         "fdir": "^6.5.0",

View File

@@ -130,7 +130,8 @@ function App() {
     const allFeatures = [
       ...(features?.pending ?? []),
       ...(features?.in_progress ?? []),
-      ...(features?.done ?? [])
+      ...(features?.done ?? []),
+      ...(features?.needs_human_input ?? [])
     ]
     const feature = allFeatures.find(f => f.id === nodeId)
     if (feature) setSelectedFeature(feature)
@@ -181,7 +182,7 @@ function App() {
       // E : Expand project with AI (when project selected, has spec and has features)
       if ((e.key === 'e' || e.key === 'E') && selectedProject && hasSpec && features &&
-          (features.pending.length + features.in_progress.length + features.done.length) > 0) {
+          (features.pending.length + features.in_progress.length + features.done.length + (features.needs_human_input?.length || 0)) > 0) {
         e.preventDefault()
         setShowExpandProject(true)
       }
@@ -210,8 +211,8 @@ function App() {
         setShowKeyboardHelp(true)
       }
-      // R : Open reset modal (when project selected and agent not running)
-      if ((e.key === 'r' || e.key === 'R') && selectedProject && wsState.agentStatus !== 'running') {
+      // R : Open reset modal (when project selected and agent not running/draining)
+      if ((e.key === 'r' || e.key === 'R') && selectedProject && !['running', 'pausing', 'paused_graceful'].includes(wsState.agentStatus)) {
         e.preventDefault()
         setShowResetModal(true)
       }
@@ -245,7 +246,7 @@ function App() {
   // Combine WebSocket progress with feature data
   const progress = wsState.progress.total > 0 ? wsState.progress : {
     passing: features?.done.length ?? 0,
-    total: (features?.pending.length ?? 0) + (features?.in_progress.length ?? 0) + (features?.done.length ?? 0),
+    total: (features?.pending.length ?? 0) + (features?.in_progress.length ?? 0) + (features?.done.length ?? 0) + (features?.needs_human_input?.length ?? 0),
     percentage: 0,
   }
@@ -380,7 +381,7 @@ function App() {
             variant="outline"
             size="sm"
             aria-label="Reset Project"
-            disabled={wsState.agentStatus === 'running'}
+            disabled={['running', 'pausing', 'paused_graceful'].includes(wsState.agentStatus)}
           >
             <RotateCcw size={18} />
           </Button>
@@ -443,6 +444,7 @@ function App() {
           features.pending.length === 0 &&
           features.in_progress.length === 0 &&
           features.done.length === 0 &&
+          (features.needs_human_input?.length || 0) === 0 &&
           wsState.agentStatus === 'running' && (
           <Card className="p-8 text-center">
             <CardContent className="p-0">
@@ -458,7 +460,7 @@ function App() {
         )}

         {/* View Toggle - only show when there are features */}
-        {features && (features.pending.length + features.in_progress.length + features.done.length) > 0 && (
+        {features && (features.pending.length + features.in_progress.length + features.done.length + (features.needs_human_input?.length || 0)) > 0 && (
           <div className="flex justify-center">
             <ViewToggle viewMode={viewMode} onViewModeChange={setViewMode} />
           </div>

View File

@@ -1,8 +1,10 @@
 import { useState, useEffect, useRef, useCallback } from 'react'
-import { Play, Square, Loader2, GitBranch, Clock } from 'lucide-react'
+import { Play, Square, Loader2, GitBranch, Clock, Pause, PlayCircle } from 'lucide-react'
 import {
   useStartAgent,
   useStopAgent,
+  useGracefulPauseAgent,
+  useGracefulResumeAgent,
   useSettings,
   useUpdateProjectSettings,
 } from '../hooks/useProjects'
@@ -60,12 +62,14 @@ export function AgentControl({ projectName, status, defaultConcurrency = 3 }: Ag
   const startAgent = useStartAgent(projectName)
   const stopAgent = useStopAgent(projectName)
+  const gracefulPause = useGracefulPauseAgent(projectName)
+  const gracefulResume = useGracefulResumeAgent(projectName)
   const { data: nextRun } = useNextScheduledRun(projectName)
   const [showScheduleModal, setShowScheduleModal] = useState(false)
-  const isLoading = startAgent.isPending || stopAgent.isPending
+  const isLoading = startAgent.isPending || stopAgent.isPending || gracefulPause.isPending || gracefulResume.isPending
-  const isRunning = status === 'running' || status === 'paused'
+  const isRunning = status === 'running' || status === 'paused' || status === 'pausing' || status === 'paused_graceful'
   const isLoadingStatus = status === 'loading'
   const isParallel = concurrency > 1
@@ -126,7 +130,7 @@ export function AgentControl({ projectName, status, defaultConcurrency = 3 }: Ag
         </Badge>
       )}
-      {/* Start/Stop button */}
+      {/* Start/Stop/Pause/Resume buttons */}
       {isLoadingStatus ? (
         <Button disabled variant="outline" size="sm">
           <Loader2 size={18} className="animate-spin" />
@@ -146,19 +150,69 @@ export function AgentControl({ projectName, status, defaultConcurrency = 3 }: Ag
           )}
         </Button>
       ) : (
-        <Button
-          onClick={handleStop}
-          disabled={isLoading}
-          variant="destructive"
-          size="sm"
-          title={yoloMode ? 'Stop Agent (YOLO Mode)' : 'Stop Agent'}
-        >
-          {isLoading ? (
-            <Loader2 size={18} className="animate-spin" />
-          ) : (
-            <Square size={18} />
-          )}
-        </Button>
+        <div className="flex items-center gap-1.5">
+          {/* Pausing indicator */}
+          {status === 'pausing' && (
+            <Badge variant="secondary" className="gap-1 animate-pulse">
+              <Loader2 size={12} className="animate-spin" />
+              Pausing...
+            </Badge>
+          )}
+          {/* Paused indicator + Resume button */}
+          {status === 'paused_graceful' && (
+            <>
+              <Badge variant="outline" className="gap-1">
+                Paused
+              </Badge>
+              <Button
+                onClick={() => gracefulResume.mutate()}
+                disabled={isLoading}
+                variant="default"
+                size="sm"
+                title="Resume agent"
+              >
+                {gracefulResume.isPending ? (
+                  <Loader2 size={18} className="animate-spin" />
+                ) : (
+                  <PlayCircle size={18} />
+                )}
+              </Button>
+            </>
+          )}
+          {/* Graceful pause button (only when running normally) */}
+          {status === 'running' && (
+            <Button
+              onClick={() => gracefulPause.mutate()}
+              disabled={isLoading}
+              variant="outline"
+              size="sm"
+              title="Pause agent (finish current work first)"
+            >
+              {gracefulPause.isPending ? (
+                <Loader2 size={18} className="animate-spin" />
+              ) : (
+                <Pause size={18} />
+              )}
+            </Button>
+          )}
+          {/* Stop button (always available) */}
+          <Button
+            onClick={handleStop}
+            disabled={isLoading}
+            variant="destructive"
+            size="sm"
+            title="Stop Agent (immediate)"
+          >
+            {stopAgent.isPending ? (
+              <Loader2 size={18} className="animate-spin" />
+            ) : (
+              <Square size={18} />
+            )}
+          </Button>
+        </div>
       )}
       {/* Clock button to open schedule modal */}

View File

@@ -72,9 +72,13 @@ export function AgentMissionControl({
       ? `${agents.length} ${agents.length === 1 ? 'agent' : 'agents'} active`
       : orchestratorStatus?.state === 'initializing'
         ? 'Initializing'
-        : orchestratorStatus?.state === 'complete'
-          ? 'Complete'
-          : 'Orchestrating'
+        : orchestratorStatus?.state === 'draining'
+          ? 'Draining'
+          : orchestratorStatus?.state === 'paused'
+            ? 'Paused'
+            : orchestratorStatus?.state === 'complete'
+              ? 'Complete'
+              : 'Orchestrating'
   }
 </Badge>
 </div>

View File

@@ -63,7 +63,7 @@ export function AgentThought({ logs, agentStatus }: AgentThoughtProps) {
   // Determine if component should be visible
   const shouldShow = useMemo(() => {
     if (!thought) return false
-    if (agentStatus === 'running') return true
+    if (agentStatus === 'running' || agentStatus === 'pausing') return true
     if (agentStatus === 'paused') {
       return Date.now() - lastLogTimestamp < IDLE_TIMEOUT
     }

View File

@@ -15,7 +15,7 @@ import {
   Handle,
 } from '@xyflow/react'
 import dagre from 'dagre'
-import { CheckCircle2, Circle, Loader2, AlertTriangle, RefreshCw } from 'lucide-react'
+import { CheckCircle2, Circle, Loader2, AlertTriangle, RefreshCw, UserCircle } from 'lucide-react'
 import type { DependencyGraph as DependencyGraphData, GraphNode, ActiveAgent, AgentMascot, AgentState } from '../lib/types'
 import { AgentAvatar } from './AgentAvatar'
 import { Button } from '@/components/ui/button'
@@ -93,18 +93,20 @@ class GraphErrorBoundary extends Component<ErrorBoundaryProps, ErrorBoundaryStat
 // Custom node component
 function FeatureNode({ data }: { data: GraphNode & { onClick?: () => void; agent?: NodeAgentInfo } }) {
-  const statusColors = {
+  const statusColors: Record<string, string> = {
     pending: 'bg-yellow-100 border-yellow-300 dark:bg-yellow-900/30 dark:border-yellow-700',
     in_progress: 'bg-cyan-100 border-cyan-300 dark:bg-cyan-900/30 dark:border-cyan-700',
     done: 'bg-green-100 border-green-300 dark:bg-green-900/30 dark:border-green-700',
     blocked: 'bg-red-50 border-red-300 dark:bg-red-900/20 dark:border-red-700',
+    needs_human_input: 'bg-amber-100 border-amber-300 dark:bg-amber-900/30 dark:border-amber-700',
   }
-  const textColors = {
+  const textColors: Record<string, string> = {
     pending: 'text-yellow-900 dark:text-yellow-100',
     in_progress: 'text-cyan-900 dark:text-cyan-100',
     done: 'text-green-900 dark:text-green-100',
     blocked: 'text-red-900 dark:text-red-100',
+    needs_human_input: 'text-amber-900 dark:text-amber-100',
   }
   const StatusIcon = () => {
@@ -115,6 +117,8 @@ function FeatureNode({ data }: { data: GraphNode & { onClick?: () => void; agent
         return <Loader2 size={16} className={`${textColors[data.status]} animate-spin`} />
       case 'blocked':
         return <AlertTriangle size={16} className="text-destructive" />
+      case 'needs_human_input':
+        return <UserCircle size={16} className={textColors[data.status]} />
       default:
         return <Circle size={16} className={textColors[data.status]} />
     }
@@ -323,6 +327,8 @@ function DependencyGraphInner({ graphData, onNodeClick, activeAgents = [] }: Dep
         return '#06b6d4' // cyan-500
       case 'blocked':
         return '#ef4444' // red-500
+      case 'needs_human_input':
+        return '#f59e0b' // amber-500
       default:
         return '#eab308' // yellow-500
     }

View File

@@ -1,4 +1,4 @@
-import { CheckCircle2, Circle, Loader2, MessageCircle } from 'lucide-react'
+import { CheckCircle2, Circle, Loader2, MessageCircle, UserCircle } from 'lucide-react'
 import type { Feature, ActiveAgent } from '../lib/types'
 import { DependencyBadge } from './DependencyBadge'
 import { AgentAvatar } from './AgentAvatar'
@@ -45,7 +45,8 @@ export function FeatureCard({ feature, onClick, isInProgress, allFeatures = [],
         cursor-pointer transition-all hover:border-primary py-3
         ${isInProgress ? 'animate-pulse' : ''}
         ${feature.passes ? 'border-primary/50' : ''}
-        ${isBlocked && !feature.passes ? 'border-destructive/50 opacity-80' : ''}
+        ${feature.needs_human_input ? 'border-amber-500/50' : ''}
+        ${isBlocked && !feature.passes && !feature.needs_human_input ? 'border-destructive/50 opacity-80' : ''}
         ${hasActiveAgent ? 'ring-2 ring-primary ring-offset-2' : ''}
       `}
     >
@@ -105,6 +106,11 @@ export function FeatureCard({ feature, onClick, isInProgress, allFeatures = [],
           <CheckCircle2 size={16} className="text-primary" />
           <span className="text-primary font-medium">Complete</span>
         </>
+      ) : feature.needs_human_input ? (
+        <>
+          <UserCircle size={16} className="text-amber-500" />
+          <span className="text-amber-500 font-medium">Needs Your Input</span>
+        </>
       ) : isBlocked ? (
         <>
           <Circle size={16} className="text-destructive" />

View File

@@ -1,7 +1,8 @@
 import { useState } from 'react'
-import { X, CheckCircle2, Circle, SkipForward, Trash2, Loader2, AlertCircle, Pencil, Link2, AlertTriangle } from 'lucide-react'
+import { X, CheckCircle2, Circle, SkipForward, Trash2, Loader2, AlertCircle, Pencil, Link2, AlertTriangle, UserCircle } from 'lucide-react'
-import { useSkipFeature, useDeleteFeature, useFeatures } from '../hooks/useProjects'
+import { useSkipFeature, useDeleteFeature, useFeatures, useResolveHumanInput } from '../hooks/useProjects'
 import { EditFeatureForm } from './EditFeatureForm'
+import { HumanInputForm } from './HumanInputForm'
 import type { Feature } from '../lib/types'
 import {
   Dialog,
@@ -50,10 +51,12 @@ export function FeatureModal({ feature, projectName, onClose }: FeatureModalProp
   const deleteFeature = useDeleteFeature(projectName)
   const { data: allFeatures } = useFeatures(projectName)
+  const resolveHumanInput = useResolveHumanInput(projectName)
   // Build a map of feature ID to feature for looking up dependency names
   const featureMap = new Map<number, Feature>()
   if (allFeatures) {
-    ;[...allFeatures.pending, ...allFeatures.in_progress, ...allFeatures.done].forEach(f => {
+    ;[...allFeatures.pending, ...allFeatures.in_progress, ...allFeatures.done, ...(allFeatures.needs_human_input || [])].forEach(f => {
       featureMap.set(f.id, f)
     })
   }
@@ -141,6 +144,11 @@ export function FeatureModal({ feature, projectName, onClose }: FeatureModalProp
           <CheckCircle2 size={24} className="text-primary" />
           <span className="font-semibold text-primary">COMPLETE</span>
         </>
+      ) : feature.needs_human_input ? (
+        <>
+          <UserCircle size={24} className="text-amber-500" />
+          <span className="font-semibold text-amber-500">NEEDS YOUR INPUT</span>
+        </>
       ) : (
         <>
           <Circle size={24} className="text-muted-foreground" />
@@ -152,6 +160,38 @@ export function FeatureModal({ feature, projectName, onClose }: FeatureModalProp
         </span>
       </div>
+      {/* Human Input Request */}
+      {feature.needs_human_input && feature.human_input_request && (
+        <HumanInputForm
+          request={feature.human_input_request}
+          onSubmit={async (fields) => {
+            setError(null)
+            try {
+              await resolveHumanInput.mutateAsync({ featureId: feature.id, fields })
+              onClose()
+            } catch (err) {
+              setError(err instanceof Error ? err.message : 'Failed to submit response')
+            }
+          }}
+          isLoading={resolveHumanInput.isPending}
+        />
+      )}
+      {/* Previous Human Input Response */}
+      {feature.human_input_response && !feature.needs_human_input && (
+        <Alert className="border-green-500 bg-green-50 dark:bg-green-950/20">
+          <CheckCircle2 className="h-4 w-4 text-green-600" />
+          <AlertDescription>
+            <h4 className="font-semibold mb-1 text-green-700 dark:text-green-400">Human Input Provided</h4>
+            <p className="text-sm text-green-600 dark:text-green-300">
+              Response submitted{feature.human_input_response.responded_at
+                ? ` at ${new Date(feature.human_input_response.responded_at).toLocaleString()}`
+                : ''}.
+            </p>
+          </AlertDescription>
+        </Alert>
+      )}
       {/* Description */}
       <div>
         <h3 className="font-semibold mb-2 text-sm uppercase tracking-wide text-muted-foreground">

View File

@@ -0,0 +1,150 @@
import { useState } from 'react'
import { Loader2, UserCircle, Send } from 'lucide-react'
import type { HumanInputRequest } from '../lib/types'
import { Button } from '@/components/ui/button'
import { Input } from '@/components/ui/input'
import { Textarea } from '@/components/ui/textarea'
import { Label } from '@/components/ui/label'
import { Alert, AlertDescription } from '@/components/ui/alert'
import { Switch } from '@/components/ui/switch'
interface HumanInputFormProps {
request: HumanInputRequest
onSubmit: (fields: Record<string, string | boolean | string[]>) => Promise<void>
isLoading: boolean
}
export function HumanInputForm({ request, onSubmit, isLoading }: HumanInputFormProps) {
const [values, setValues] = useState<Record<string, string | boolean | string[]>>(() => {
const initial: Record<string, string | boolean | string[]> = {}
for (const field of request.fields) {
if (field.type === 'boolean') {
initial[field.id] = false
} else {
initial[field.id] = ''
}
}
return initial
})
const [validationError, setValidationError] = useState<string | null>(null)
const handleSubmit = async () => {
// Validate required fields
for (const field of request.fields) {
if (field.required) {
const val = values[field.id]
if (val === undefined || val === null || val === '') {
setValidationError(`"${field.label}" is required`)
return
}
}
}
setValidationError(null)
await onSubmit(values)
}
return (
<Alert className="border-amber-500 bg-amber-50 dark:bg-amber-950/20">
<UserCircle className="h-5 w-5 text-amber-600" />
<AlertDescription className="space-y-4">
<div>
<h4 className="font-semibold text-amber-700 dark:text-amber-400">Agent needs your help</h4>
<p className="text-sm text-amber-600 dark:text-amber-300 mt-1">
{request.prompt}
</p>
</div>
<div className="space-y-3">
{request.fields.map((field) => (
<div key={field.id} className="space-y-1.5">
<Label htmlFor={`human-input-${field.id}`} className="text-sm font-medium text-foreground">
{field.label}
{field.required && <span className="text-destructive ml-1">*</span>}
</Label>
{field.type === 'text' && (
<Input
id={`human-input-${field.id}`}
value={values[field.id] as string}
onChange={(e) => setValues(prev => ({ ...prev, [field.id]: e.target.value }))}
placeholder={field.placeholder || ''}
disabled={isLoading}
/>
)}
{field.type === 'textarea' && (
<Textarea
id={`human-input-${field.id}`}
value={values[field.id] as string}
onChange={(e) => setValues(prev => ({ ...prev, [field.id]: e.target.value }))}
placeholder={field.placeholder || ''}
disabled={isLoading}
rows={3}
/>
)}
{field.type === 'select' && field.options && (
<div className="space-y-1.5">
{field.options.map((option) => (
<label
key={option.value}
className={`flex items-center gap-2 p-2 rounded-md border cursor-pointer transition-colors
${values[field.id] === option.value
? 'border-primary bg-primary/10'
: 'border-border hover:border-primary/50'}`}
>
<input
type="radio"
name={`human-input-${field.id}`}
value={option.value}
checked={values[field.id] === option.value}
onChange={(e) => setValues(prev => ({ ...prev, [field.id]: e.target.value }))}
disabled={isLoading}
className="accent-primary"
/>
<span className="text-sm">{option.label}</span>
</label>
))}
</div>
)}
{field.type === 'boolean' && (
<div className="flex items-center gap-2">
<Switch
id={`human-input-${field.id}`}
checked={values[field.id] as boolean}
onCheckedChange={(checked) => setValues(prev => ({ ...prev, [field.id]: checked }))}
disabled={isLoading}
/>
<Label htmlFor={`human-input-${field.id}`} className="text-sm">
{values[field.id] ? 'Yes' : 'No'}
</Label>
</div>
)}
</div>
))}
</div>
{validationError && (
<p className="text-sm text-destructive">{validationError}</p>
)}
<Button
onClick={handleSubmit}
disabled={isLoading}
className="w-full"
>
{isLoading ? (
<Loader2 size={16} className="animate-spin" />
) : (
<>
<Send size={16} />
Submit Response
</>
)}
</Button>
</AlertDescription>
</Alert>
)
}
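Reviewer note: the required-field check in `handleSubmit` above can be exercised in isolation. A minimal sketch, with the check extracted as a pure function (the extraction and the names `FieldLike`/`firstValidationError` are illustrative, not part of the diff):

```typescript
// Hypothetical extraction of HumanInputForm's required-field validation.
interface FieldLike {
  id: string
  label: string
  required: boolean
}

type FieldValues = Record<string, string | boolean | string[]>

// Mirrors the loop in handleSubmit: the first required field whose value is
// undefined, null, or the empty string yields an error message; otherwise null.
function firstValidationError(fields: FieldLike[], values: FieldValues): string | null {
  for (const field of fields) {
    if (field.required) {
      const val = values[field.id]
      if (val === undefined || val === null || val === '') {
        return `"${field.label}" is required`
      }
    }
  }
  return null
}
```

Note that a required boolean field left at its initial `false` passes this check, since `false` is neither `undefined`, `null`, nor `''`, which matches the component's behavior.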

View File

@@ -13,13 +13,16 @@ interface KanbanBoardProps {
 }
 export function KanbanBoard({ features, onFeatureClick, onAddFeature, onExpandProject, activeAgents = [], onCreateSpec, hasSpec = true }: KanbanBoardProps) {
-  const hasFeatures = features && (features.pending.length + features.in_progress.length + features.done.length) > 0
+  const hasFeatures = features && (features.pending.length + features.in_progress.length + features.done.length + (features.needs_human_input?.length || 0)) > 0
   // Combine all features for dependency status calculation
   const allFeatures = features
-    ? [...features.pending, ...features.in_progress, ...features.done]
+    ? [...features.pending, ...features.in_progress, ...features.done, ...(features.needs_human_input || [])]
     : []
+  const needsInputCount = features?.needs_human_input?.length || 0
+  const showNeedsInput = needsInputCount > 0
   if (!features) {
     return (
       <div className="grid grid-cols-1 md:grid-cols-3 gap-6">
@@ -40,7 +43,7 @@ export function KanbanBoard({ features, onFeatureClick, onAddFeature, onExpandPr
   }
   return (
-    <div className="grid grid-cols-1 md:grid-cols-3 gap-6">
+    <div className={`grid grid-cols-1 ${showNeedsInput ? 'md:grid-cols-4' : 'md:grid-cols-3'} gap-6`}>
       <KanbanColumn
         title="Pending"
         count={features.pending.length}
@@ -64,6 +67,17 @@ export function KanbanBoard({ features, onFeatureClick, onAddFeature, onExpandPr
         color="progress"
         onFeatureClick={onFeatureClick}
       />
+      {showNeedsInput && (
+        <KanbanColumn
+          title="Needs Input"
+          count={needsInputCount}
+          features={features.needs_human_input}
+          allFeatures={allFeatures}
+          activeAgents={activeAgents}
+          color="human_input"
+          onFeatureClick={onFeatureClick}
+        />
+      )}
       <KanbanColumn
         title="Done"
         count={features.done.length}

View File

@@ -11,7 +11,7 @@ interface KanbanColumnProps {
   features: Feature[]
   allFeatures?: Feature[]
   activeAgents?: ActiveAgent[]
-  color: 'pending' | 'progress' | 'done'
+  color: 'pending' | 'progress' | 'done' | 'human_input'
   onFeatureClick: (feature: Feature) => void
   onAddFeature?: () => void
   onExpandProject?: () => void
@@ -24,6 +24,7 @@ const colorMap = {
   pending: 'border-t-4 border-t-muted',
   progress: 'border-t-4 border-t-primary',
   done: 'border-t-4 border-t-primary',
+  human_input: 'border-t-4 border-t-amber-500',
 }
 export function KanbanColumn({
export function KanbanColumn({ export function KanbanColumn({

View File

@@ -103,6 +103,10 @@ function getStateAnimation(state: OrchestratorState): string {
       return 'animate-working'
     case 'monitoring':
       return 'animate-bounce-gentle'
+    case 'draining':
+      return 'animate-thinking'
+    case 'paused':
+      return ''
     case 'complete':
       return 'animate-celebrate'
     default:
@@ -121,6 +125,10 @@ function getStateGlow(state: OrchestratorState): string {
       return 'shadow-[0_0_16px_rgba(124,58,237,0.6)]'
     case 'monitoring':
       return 'shadow-[0_0_8px_rgba(167,139,250,0.4)]'
+    case 'draining':
+      return 'shadow-[0_0_10px_rgba(251,191,36,0.5)]'
+    case 'paused':
+      return ''
     case 'complete':
       return 'shadow-[0_0_20px_rgba(112,224,0,0.6)]'
     default:
@@ -141,6 +149,10 @@ function getStateDescription(state: OrchestratorState): string {
       return 'spawning agents'
     case 'monitoring':
       return 'monitoring progress'
+    case 'draining':
+      return 'draining active agents'
+    case 'paused':
+      return 'paused'
     case 'complete':
       return 'all features complete'
     default:

View File

@@ -25,6 +25,10 @@ function getStateText(state: OrchestratorState): string {
       return 'Watching progress...'
     case 'complete':
       return 'Mission accomplished!'
+    case 'draining':
+      return 'Draining agents...'
+    case 'paused':
+      return 'Paused'
     default:
       return 'Orchestrating...'
   }
@@ -42,6 +46,10 @@ function getStateColor(state: OrchestratorState): string {
       return 'text-primary'
     case 'initializing':
       return 'text-yellow-600 dark:text-yellow-400'
+    case 'draining':
+      return 'text-amber-600 dark:text-amber-400'
+    case 'paused':
+      return 'text-muted-foreground'
     default:
       return 'text-muted-foreground'
   }

View File

@@ -55,7 +55,7 @@ export function ProgressDashboard({
   const showThought = useMemo(() => {
     if (!thought) return false
-    if (agentStatus === 'running') return true
+    if (agentStatus === 'running' || agentStatus === 'pausing') return true
     if (agentStatus === 'paused') {
       return Date.now() - lastLogTimestamp < IDLE_TIMEOUT
     }

View File

@@ -137,6 +137,7 @@ function isAllComplete(features: FeatureListResponse | undefined): boolean {
   return (
     features.pending.length === 0 &&
     features.in_progress.length === 0 &&
+    (features.needs_human_input?.length || 0) === 0 &&
     features.done.length > 0
   )
 }
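Reviewer note: the updated completion predicate can be checked on its own. A minimal sketch with a structural stand-in for `FeatureListResponse` (the `FeatureBuckets` name and the `!features` guard, which sits outside the hunk shown, are assumptions):

```typescript
// Structural stand-in for FeatureListResponse; needs_human_input is optional
// to tolerate older payloads that lack the bucket.
interface FeatureBuckets {
  pending: unknown[]
  in_progress: unknown[]
  done: unknown[]
  needs_human_input?: unknown[]
}

// A project is only "all complete" when nothing is pending, running, or
// waiting on a human, and at least one feature is done.
function isAllComplete(features: FeatureBuckets | undefined): boolean {
  if (!features) return false
  return (
    features.pending.length === 0 &&
    features.in_progress.length === 0 &&
    (features.needs_human_input?.length || 0) === 0 &&
    features.done.length > 0
  )
}
```

A feature parked in `needs_human_input` therefore keeps the project from reading as complete even when `pending` and `in_progress` are empty.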

View File

@@ -133,6 +133,18 @@ export function useUpdateFeature(projectName: string) {
   })
 }
+export function useResolveHumanInput(projectName: string) {
+  const queryClient = useQueryClient()
+  return useMutation({
+    mutationFn: ({ featureId, fields }: { featureId: number; fields: Record<string, string | boolean | string[]> }) =>
+      api.resolveHumanInput(projectName, featureId, { fields }),
+    onSuccess: () => {
+      queryClient.invalidateQueries({ queryKey: ['features', projectName] })
+    },
+  })
+}
 // ============================================================================
 // Agent
 // ============================================================================
@@ -197,6 +209,28 @@ export function useResumeAgent(projectName: string) {
   })
 }
+export function useGracefulPauseAgent(projectName: string) {
+  const queryClient = useQueryClient()
+  return useMutation({
+    mutationFn: () => api.gracefulPauseAgent(projectName),
+    onSuccess: () => {
+      queryClient.invalidateQueries({ queryKey: ['agent-status', projectName] })
+    },
+  })
+}
+export function useGracefulResumeAgent(projectName: string) {
+  const queryClient = useQueryClient()
+  return useMutation({
+    mutationFn: () => api.gracefulResumeAgent(projectName),
+    onSuccess: () => {
+      queryClient.invalidateQueries({ queryKey: ['agent-status', projectName] })
+    },
+  })
+}
 // ============================================================================
 // Setup
 // ============================================================================

View File

@@ -33,6 +33,7 @@ interface WebSocketState {
   progress: {
     passing: number
     in_progress: number
+    needs_human_input: number
     total: number
     percentage: number
   }
@@ -60,7 +61,7 @@ const MAX_AGENT_LOGS = 500 // Keep last 500 log lines per agent
 export function useProjectWebSocket(projectName: string | null) {
   const [state, setState] = useState<WebSocketState>({
-    progress: { passing: 0, in_progress: 0, total: 0, percentage: 0 },
+    progress: { passing: 0, in_progress: 0, needs_human_input: 0, total: 0, percentage: 0 },
     agentStatus: 'loading',
     logs: [],
     isConnected: false,
@@ -107,6 +108,7 @@
         progress: {
           passing: message.passing,
           in_progress: message.in_progress,
+          needs_human_input: message.needs_human_input ?? 0,
           total: message.total,
           percentage: message.percentage,
         },
@@ -385,7 +387,7 @@
     // Reset state when project changes to clear stale data
     // Use 'loading' for agentStatus to show loading indicator until WebSocket provides actual status
     setState({
-      progress: { passing: 0, in_progress: 0, total: 0, percentage: 0 },
+      progress: { passing: 0, in_progress: 0, needs_human_input: 0, total: 0, percentage: 0 },
       agentStatus: 'loading',
       logs: [],
       isConnected: false,

View File

@@ -181,6 +181,17 @@ export async function createFeaturesBulk(
   })
 }
+export async function resolveHumanInput(
+  projectName: string,
+  featureId: number,
+  response: { fields: Record<string, string | boolean | string[]> }
+): Promise<Feature> {
+  return fetchJSON(`/projects/${encodeURIComponent(projectName)}/features/${featureId}/resolve-human-input`, {
+    method: 'POST',
+    body: JSON.stringify(response),
+  })
+}
 // ============================================================================
 // Dependency Graph API
 // ============================================================================
@@ -271,6 +282,18 @@ export async function resumeAgent(projectName: string): Promise<AgentActionRespo
   })
 }
+export async function gracefulPauseAgent(projectName: string): Promise<AgentActionResponse> {
+  return fetchJSON(`/projects/${encodeURIComponent(projectName)}/agent/graceful-pause`, {
+    method: 'POST',
+  })
+}
+export async function gracefulResumeAgent(projectName: string): Promise<AgentActionResponse> {
+  return fetchJSON(`/projects/${encodeURIComponent(projectName)}/agent/graceful-resume`, {
+    method: 'POST',
+  })
+}
 // ============================================================================
 // Spec Creation API
 // ============================================================================
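Reviewer note: the endpoint paths added above can be verified without a fetch layer by rebuilding them as pure helpers. A sketch under the assumption that only the project name needs URL encoding (the helper names are illustrative; `fetchJSON` is not involved):

```typescript
// URL builders matching the paths used by resolveHumanInput and the
// graceful pause/resume calls; project names are URL-encoded, feature IDs
// are numeric and interpolated as-is.
function resolveHumanInputPath(projectName: string, featureId: number): string {
  return `/projects/${encodeURIComponent(projectName)}/features/${featureId}/resolve-human-input`
}

function gracefulPausePath(projectName: string): string {
  return `/projects/${encodeURIComponent(projectName)}/agent/graceful-pause`
}

function gracefulResumePath(projectName: string): string {
  return `/projects/${encodeURIComponent(projectName)}/agent/graceful-resume`
}
```

This makes the encoding behavior explicit: a project named `my app` yields `/projects/my%20app/...`, so names with spaces or slashes round-trip safely.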

View File

@@ -57,6 +57,26 @@ export interface ProjectPrompts {
   coding_prompt: string
 }
+// Human input types
+export interface HumanInputField {
+  id: string
+  label: string
+  type: 'text' | 'textarea' | 'select' | 'boolean'
+  required: boolean
+  placeholder?: string
+  options?: { value: string; label: string }[]
+}
+export interface HumanInputRequest {
+  prompt: string
+  fields: HumanInputField[]
+}
+export interface HumanInputResponseData {
+  fields: Record<string, string | boolean | string[]>
+  responded_at?: string
+}
 // Feature types
 export interface Feature {
   id: number
@@ -70,10 +90,13 @@ export interface Feature {
   dependencies?: number[] // Optional for backwards compat
   blocked?: boolean // Computed by API
   blocking_dependencies?: number[] // Computed by API
+  needs_human_input?: boolean
+  human_input_request?: HumanInputRequest | null
+  human_input_response?: HumanInputResponseData | null
 }
 // Status type for graph nodes
-export type FeatureStatus = 'pending' | 'in_progress' | 'done' | 'blocked'
+export type FeatureStatus = 'pending' | 'in_progress' | 'done' | 'blocked' | 'needs_human_input'
 // Graph visualization types
 export interface GraphNode {
@@ -99,6 +122,7 @@ export interface FeatureListResponse {
   pending: Feature[]
   in_progress: Feature[]
   done: Feature[]
+  needs_human_input: Feature[]
 }
 export interface FeatureCreate {
@@ -120,7 +144,7 @@ export interface FeatureUpdate {
 }
 // Agent types
-export type AgentStatus = 'stopped' | 'running' | 'paused' | 'crashed' | 'loading'
+export type AgentStatus = 'stopped' | 'running' | 'paused' | 'crashed' | 'loading' | 'pausing' | 'paused_graceful'
 export interface AgentStatusResponse {
   status: AgentStatus
@@ -216,6 +240,8 @@ export type OrchestratorState =
   | 'spawning'
   | 'monitoring'
   | 'complete'
+  | 'draining'
+  | 'paused'
 // Orchestrator event for recent activity
 export interface OrchestratorEvent {
@@ -248,6 +274,7 @@ export interface WSProgressMessage {
   in_progress: number
   total: number
   percentage: number
+  needs_human_input?: number
 }
 export interface WSFeatureUpdateMessage {