55 Commits

Author SHA1 Message Date
Kacper
a9d39b9320 fix(server): Address PR #733 review feedback and fix cross-platform tests
- Extract merge logic from pipeline-orchestrator to merge-service.ts to avoid HTTP self-call
- Make agent-executor error handling provider-agnostic using shared isAuthenticationError utility
- Fix cross-platform path handling in tests using path.normalize/path.resolve helpers
- Add catch handlers in plan-approval-service tests to prevent unhandled promise rejection warnings
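The cross-platform path fix above boils down to comparing normalized absolute paths instead of raw strings; a minimal sketch (the helper name is illustrative, not the actual test utility):

```typescript
import * as path from 'node:path';

// Sketch: compare paths after normalize + resolve so separators and
// relative segments don't break assertions across platforms.
function samePath(a: string, b: string): boolean {
  return path.resolve(path.normalize(a)) === path.resolve(path.normalize(b));
}
```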

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 18:37:20 +01:00
Kacper
9fd2cf2bc4 Merge remote-tracking branch 'origin/v0.14.0rc' into refactor/auto-mode-service 2026-02-02 18:09:42 +01:00
Shirone
5a5c56a4cf fix: address PR review issues for auto-mode refactor
- agent-executor: move executeQuery into try block for proper heartbeat cleanup,
  re-parse tasks when edited plan is approved
- auto-loop-coordinator: handle feature execution failures with proper logging
  and failure tracking, support backward-compatible method signatures
- facade: delegate getActiveAutoLoopProjects/Worktrees to coordinator,
  always create own AutoLoopCoordinator (not shared), pass projectPath
  to approval methods and branchName to failure tracking
- global-service: document shared autoLoopCoordinator is for monitoring only
- execution-types: fix ExecuteFeatureFn type to match implementation
- feature-state-manager: use readJsonWithRecovery for loadFeature
- pipeline-orchestrator: add defensive null check and try/catch for
  merge response parsing
- plan-approval-service: use project-scoped keys to prevent cross-project
  collisions, maintain backward compatibility for featureId-only lookups
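The project-scoped key scheme can be sketched as follows (all names here are hypothetical; the real plan-approval-service differs in detail):

```typescript
// Hypothetical sketch: scope pending approvals by project to prevent
// cross-project collisions, while keeping a featureId-only fallback.
type Approval = { featureId: string; projectPath: string };

const approvalKey = (projectPath: string, featureId: string): string =>
  `${projectPath}::${featureId}`;

class ApprovalRegistry {
  private pending = new Map<string, Approval>();

  add(projectPath: string, featureId: string): void {
    this.pending.set(approvalKey(projectPath, featureId), { featureId, projectPath });
  }

  // Preferred lookup: project-scoped, so two projects may reuse feature ids.
  get(projectPath: string, featureId: string): Approval | undefined {
    return this.pending.get(approvalKey(projectPath, featureId));
  }

  // Legacy lookup: featureId only, scanning all projects.
  getByFeatureId(featureId: string): Approval | undefined {
    for (const approval of this.pending.values()) {
      if (approval.featureId === featureId) return approval;
    }
    return undefined;
  }
}
```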

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 13:59:24 +01:00
Shirone
bf82f92132 fix(facade): pass previousContent to AgentExecutor for pipeline steps
The PipelineOrchestrator passes previousContent to preserve the agent
output history when running pipeline steps. This was being lost because
the facade's runAgentFn callback wasn't forwarding it to AgentExecutor.

Without this fix, pipeline steps would overwrite the agent-output.md
file instead of appending to it with a "Follow-up Session" separator.
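A sketch of the intended merge behavior (the exact separator text is assumed from the wording above, not taken from the code):

```typescript
// Sketch: append to agent-output.md history instead of overwriting it.
// The separator format is an assumption based on the commit message.
function mergeAgentOutput(previousContent: string | undefined, newOutput: string): string {
  if (!previousContent) return newOutput; // first run: write fresh
  // Follow-up run: preserve prior output and separate the sessions.
  return `${previousContent}\n\n--- Follow-up Session ---\n\n${newOutput}`;
}
```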

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 13:42:41 +01:00
Shirone
adddcf71a2 fix(agent-executor): restore wrench emoji in tool output format
The wrench emoji (🔧) was accidentally removed in commit 6ec9a257
during the service condensing refactor. This broke:

1. Log parser - uses startsWith('🔧') to detect tool calls, causing
   them to be categorized as "info" instead of "tool_call"
2. Agent context parser - uses '🔧 Tool: TodoWrite' marker to find
   tasks, causing task list to not appear on kanban cards

Restoring the emoji resolves both issues.
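The detection logic the emoji feeds can be sketched as follows (simplified; the real parsers handle more categories):

```typescript
// Simplified sketch of the two parsers' reliance on the wrench marker
// (condensed assumption, not the actual parser code).
type LogCategory = 'tool_call' | 'info';

function categorizeLogLine(line: string): LogCategory {
  // The log parser keys on the leading wrench emoji.
  return line.startsWith('🔧') ? 'tool_call' : 'info';
}

function isTodoWriteMarker(line: string): boolean {
  // The agent context parser looks for this exact marker to find tasks.
  return line.includes('🔧 Tool: TodoWrite');
}
```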

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 13:24:43 +01:00
Shirone
6bb7b86487 fix(facade): wire runAgentFn to AgentExecutor.execute
The facade had stubs for runAgentFn that threw errors, causing feature
execution to fail with "runAgentFn not implemented in facade".

This fix wires both ExecutionService and PipelineOrchestrator runAgentFn
callbacks to properly call AgentExecutor.execute() with:
- Provider from ProviderFactory.getProviderForModel()
- Bare model from stripProviderPrefix()
- Proper AgentExecutorCallbacks for waitForApproval, saveFeatureSummary, etc.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 13:13:47 +01:00
Shirone
188b08ba7c fix: lock file 2026-01-30 22:53:22 +01:00
Shirone
47c2149207 chore(deps): update lint-staged version and change node-gyp repository URL
- Updated lint-staged dependency to use caret versioning (^16.2.7) in package.json and package-lock.json.
- Changed the resolved URL for node-gyp in package-lock.json from HTTPS to SSH.
2026-01-30 22:44:35 +01:00
Shirone
6ec9a25747 refactor(06-04): extract types and condense agent-executor/pipeline-orchestrator
- Create agent-executor-types.ts with execution option/result/callback types
- Create pipeline-types.ts with context/status/result types
- Condense agent-executor.ts stream processing and add buildExecOpts helper
- Condense pipeline-orchestrator.ts methods and simplify event emissions

Further line reduction limited by Prettier reformatting condensed code.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 22:33:15 +01:00
Shirone
622362f3f6 refactor(06-04): trim 5 oversized services to under 500 lines
- agent-executor.ts: 1317 -> 283 lines (merged duplicate task loops)
- execution-service.ts: 675 -> 314 lines (extracted callback types)
- pipeline-orchestrator.ts: 662 -> 471 lines (condensed methods)
- auto-loop-coordinator.ts: 590 -> 277 lines (condensed type definitions)
- recovery-service.ts: 558 -> 163 lines (simplified state methods)

Created execution-types.ts for callback type definitions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 22:25:18 +01:00
Shirone
603cb63dc4 refactor(06-04): delete auto-mode-service.ts monolith
- Delete the 2705-line auto-mode-service.ts monolith
- Create AutoModeServiceCompat as compatibility layer for routes
- Create GlobalAutoModeService for cross-project operations
- Update all routes to use AutoModeServiceCompat type
- Add SharedServices interface for state sharing across facades
- Add getActiveProjects/getActiveWorktrees to AutoLoopCoordinator
- Delete obsolete monolith test files

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 22:05:46 +01:00
Shirone
50c0b154f4 refactor(06-03): migrate Batch 5 secondary routes and wire router index
- features/routes/list.ts: Add facadeFactory parameter, use facade.detectOrphanedFeatures
- projects/routes/overview.ts: Add facadeFactory parameter, use facade.getRunningAgents/getStatusForProject
- features/index.ts: Pass facadeFactory to list handler
- projects/index.ts: Pass facadeFactory to overview handler
- auto-mode/index.ts: Accept facadeFactory parameter and wire to all route handlers
- All routes maintain backward compatibility with autoModeService fallback
2026-01-30 21:46:05 +01:00
Shirone
5f9eacd01e refactor(06-03): migrate Batch 4 complex routes to facade pattern
- run-feature.ts: Add facadeFactory parameter, use facade.checkWorktreeCapacity/executeFeature
- follow-up-feature.ts: Add facadeFactory parameter, use facade.followUpFeature
- approve-plan.ts: Add facadeFactory parameter, use facade.resolvePlanApproval
- analyze-project.ts: Add facadeFactory parameter, use facade.analyzeProject
- All routes maintain backward compatibility with autoModeService fallback
2026-01-30 21:39:45 +01:00
Shirone
ffbfd2b79b refactor(06-03): migrate Batch 3 feature lifecycle routes to facade pattern
- start.ts: Add facadeFactory parameter, use facade.isAutoLoopRunning/startAutoLoop
- resume-feature.ts: Add facadeFactory parameter, use facade.resumeFeature
- resume-interrupted.ts: Add facadeFactory parameter, use facade.resumeInterruptedFeatures
- All routes maintain backward compatibility with autoModeService fallback
2026-01-30 21:38:03 +01:00
Shirone
0ee28c58df refactor(06-02): migrate Batch 2 state change routes to facade pattern
- stop-feature.ts: Add facade parameter for feature stopping
- stop.ts: Add facadeFactory parameter for auto loop control
- verify-feature.ts: Add facadeFactory parameter for verification
- commit-feature.ts: Add facadeFactory parameter for committing

All routes maintain backward compatibility by accepting both
autoModeService (legacy) and facade/facadeFactory (new).
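The dual-path wiring shared by these routes can be sketched as follows (interfaces and names are illustrative, not the real signatures):

```typescript
// Sketch: a route handler that prefers the new facade path when wired,
// falling back to the legacy autoModeService. Illustrative shapes only.
interface FacadeLike { stopFeature(id: string): Promise<void>; }
interface LegacyServiceLike { stopFeature(id: string): Promise<void>; }
type FacadeFactory = (projectPath: string) => FacadeLike;

function createStopHandler(opts: {
  autoModeService?: LegacyServiceLike; // legacy path
  facadeFactory?: FacadeFactory;       // new path, preferred when present
}) {
  return async (projectPath: string, featureId: string): Promise<string> => {
    if (opts.facadeFactory) {
      await opts.facadeFactory(projectPath).stopFeature(featureId);
      return 'facade';
    }
    if (opts.autoModeService) {
      await opts.autoModeService.stopFeature(featureId);
      return 'legacy';
    }
    throw new Error('no service wired');
  };
}
```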
2026-01-30 21:33:11 +01:00
Shirone
8355eb7172 refactor(06-02): migrate Batch 1 query-only routes to facade pattern
- status.ts: Add facadeFactory parameter for per-project status
- context-exists.ts: Add facadeFactory parameter for context checks
- running-agents/index.ts: Add facade parameter for getRunningAgents

All routes maintain backward compatibility by accepting both
autoModeService (legacy) and facade/facadeFactory (new).
2026-01-30 21:31:45 +01:00
Shirone
4ea35e1743 chore(06-01): create index.ts with exports
- Export AutoModeServiceFacade class
- Export createAutoModeFacade convenience factory function
- Re-export all types from types.ts for route consumption
- Re-export types from extracted services (AutoModeConfig, RunningFeature, etc.)

All 1809 server tests pass.
2026-01-30 21:26:16 +01:00
Shirone
68ea80b6fe feat(06-01): create AutoModeServiceFacade with all 23 methods
- Create facade.ts with per-project factory pattern
- Implement all 23 public methods from RESEARCH.md inventory:
  - Auto loop control: startAutoLoop, stopAutoLoop, isAutoLoopRunning, getAutoLoopConfig
  - Feature execution: executeFeature, stopFeature, resumeFeature, followUpFeature, verifyFeature, commitFeature
  - Status queries: getStatus, getStatusForProject, getActiveAutoLoopProjects, getActiveAutoLoopWorktrees, getRunningAgents, checkWorktreeCapacity, contextExists
  - Plan approval: resolvePlanApproval, waitForPlanApproval, hasPendingApproval, cancelPlanApproval
  - Analysis/recovery: analyzeProject, resumeInterruptedFeatures, detectOrphanedFeatures
  - Lifecycle: markAllRunningFeaturesInterrupted
- Use thin delegation pattern to underlying services
- Note: followUpFeature and analyzeProject require AutoModeService until full migration
2026-01-30 21:25:27 +01:00
Shirone
da373ee3ea chore(06-01): create facade types and directory structure
- Create apps/server/src/services/auto-mode/ directory
- Add types.ts with FacadeOptions interface
- Re-export types from extracted services (AutoModeConfig, RunningFeature, etc.)
- Add facade-specific types (AutoModeStatus, WorktreeCapacityInfo, etc.)
2026-01-30 21:20:08 +01:00
Shirone
08f51a0031 refactor(05-03): wire ExecutionService delegation in AutoModeService
- Replace executeFeature body with delegation to executionService.executeFeature()
- Replace stopFeature body with delegation to executionService.stopFeature()
- Remove ~312 duplicated lines from AutoModeService (3017 -> 2705)
- All 1809 server tests pass

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 19:20:37 +01:00
Shirone
236a23a83f refactor(05-03): remove duplicated methods from AutoModeService
- Replace startAutoLoopForProject body with delegation to autoLoopCoordinator
- Replace stopAutoLoopForProject body with delegation to autoLoopCoordinator
- Replace isAutoLoopRunningForProject body with delegation
- Replace getAutoLoopConfigForProject body with delegation
- Replace resumeFeature body with delegation to recoveryService
- Replace resumeInterruptedFeatures body with delegation
- Remove runAutoLoopForProject method (~95 lines) - now in AutoLoopCoordinator
- Remove failure tracking methods (~180 lines) - now in AutoLoopCoordinator
- Remove resolveMaxConcurrency (~40 lines) - now in AutoLoopCoordinator
- Update checkWorktreeCapacity to use coordinator
- Simplify legacy startAutoLoop to delegate
- Remove failure tracking from executeFeature (now handled by coordinator)

Line count reduced from 3604 to 3013 (~591 lines removed)
2026-01-27 19:09:56 +01:00
Shirone
b59d2c6aaf refactor(05-03): wire coordination services into AutoModeService
- Add imports for AutoLoopCoordinator, ExecutionService, RecoveryService
- Add private properties for the three coordination services
- Update constructor to accept optional parameters for dependency injection
- Create AutoLoopCoordinator with callbacks for loop lifecycle
- Create ExecutionService with callbacks for feature execution
- Create RecoveryService with callbacks for crash recovery
2026-01-27 19:03:26 +01:00
Shirone
77ece9f481 test(05-03): add RecoveryService unit tests
- Add 29 unit tests for crash recovery functionality
- Test execution state persistence (save/load/clear)
- Test context detection (agent-output.md exists check)
- Test feature resumption flow (pipeline vs non-pipeline)
- Test interrupted feature batch resumption
- Test idempotent behavior and error handling
2026-01-27 19:01:56 +01:00
Shirone
fd8bc7162f feat(05-03): create RecoveryService with crash recovery logic
- Add ExecutionState interface and DEFAULT_EXECUTION_STATE constant
- Export 7 callback types for AutoModeService integration
- Implement saveExecutionStateForProject/saveExecutionState for persistence
- Implement loadExecutionState/clearExecutionState for state management
- Add contextExists helper for agent-output.md detection
- Implement resumeFeature with pipeline/context-aware flow
- Implement resumeInterruptedFeatures for server restart recovery
- Add executeFeatureWithContext for conversation restoration
2026-01-27 18:57:24 +01:00
Shirone
0a1c2cd53c test(05-02): add ExecutionService unit tests
- Add 45 unit tests for execution lifecycle coordination
- Test constructor, executeFeature, stopFeature, buildFeaturePrompt
- Test approved plan handling, error handling, worktree resolution
- Test auto-mode integration, planning mode, summary extraction

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:50:20 +01:00
Shirone
3b2b1eb78a feat(05-02): create ExecutionService with feature execution lifecycle
- Extract executeFeature, stopFeature, buildFeaturePrompt from AutoModeService
- Export callback types for test mocking and integration
- Implement persist-before-emit pattern for status updates
- Support approved plan continuation and context resumption
- Track failures and signal pause when threshold reached

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 18:45:24 +01:00
Shirone
74345ee9ba test(05-01): add AutoLoopCoordinator unit tests
- 41 tests covering loop lifecycle and failure tracking
- Tests for getWorktreeAutoLoopKey key generation
- Tests for start/stop/isRunning/getConfig methods
- Tests for runAutoLoopForProject loop behavior
- Tests for failure tracking threshold and quota errors
- Tests for multiple concurrent projects/worktrees
- Tests for edge cases (null settings, reset errors)
2026-01-27 18:37:10 +01:00
Shirone
b5624bb01f feat(05-01): create AutoLoopCoordinator with loop lifecycle
- Extract loop lifecycle from AutoModeService
- Export AutoModeConfig, ProjectAutoLoopState, getWorktreeAutoLoopKey
- Export callback types for AutoModeService integration
- Methods: start/stop/isRunning/getConfig for project/worktree
- Failure tracking with threshold and quota error detection
- Sleep helper interruptible by abort signal
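The interruptible sleep helper might look like this (a sketch of the described behavior, not the actual implementation):

```typescript
// Sketch: a sleep that resolves early when the abort signal fires, so a
// stopped auto loop doesn't sit out its full delay.
function sleep(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve) => {
    if (signal?.aborted) {
      resolve();
      return;
    }
    let timer: ReturnType<typeof setTimeout> | undefined;
    const onAbort = () => {
      clearTimeout(timer);
      resolve();
    };
    timer = setTimeout(() => {
      signal?.removeEventListener('abort', onAbort);
      resolve();
    }, ms);
    signal?.addEventListener('abort', onAbort, { once: true });
  });
}
```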
2026-01-27 18:35:38 +01:00
Shirone
84461d6554 refactor(04-02): remove duplicated pipeline methods from AutoModeService
- Delete executePipelineSteps method (~115 lines)
- Delete buildPipelineStepPrompt method (~38 lines)
- Delete resumePipelineFeature method (~88 lines)
- Delete resumeFromPipelineStep method (~195 lines)
- Delete detectPipelineStatus method (~104 lines)
- Remove unused PipelineStatusInfo interface (~18 lines)
- Update comments to reference PipelineOrchestrator

Total reduction: ~546 lines (4150 -> 3604 lines)
2026-01-27 18:01:40 +01:00
Shirone
2ad604e645 test(04-02): add PipelineOrchestrator delegation and edge case tests
- Add AutoModeService integration tests for delegation verification
- Test executePipeline delegation with context fields
- Test detectPipelineStatus delegation for pipeline/non-pipeline status
- Test resumePipeline delegation with autoLoadClaudeMd and useWorktrees
- Add edge case tests for abort signals, missing context, deleted steps
2026-01-27 17:58:08 +01:00
Shirone
eaa0312c1e refactor(04-02): wire PipelineOrchestrator into AutoModeService
- Add PipelineOrchestrator constructor parameter and property
- Initialize PipelineOrchestrator with all required dependencies and callbacks
- Delegate executePipelineSteps to pipelineOrchestrator.executePipeline()
- Delegate detectPipelineStatus to pipelineOrchestrator.detectPipelineStatus()
- Delegate resumePipelineFeature to pipelineOrchestrator.resumePipeline()
2026-01-27 17:56:11 +01:00
Shirone
8ab77f6583 test(04-01): add PipelineOrchestrator unit tests
- Tests for executePipeline: step sequence, events, status updates
- Tests for buildPipelineStepPrompt: context inclusion, previous work
- Tests for detectPipelineStatus: pipeline status detection and parsing
- Tests for resumePipeline/resumeFromStep: excluded steps, slot management
- Tests for executeTestStep: 5-attempt fix loop, failure events
- Tests for attemptMerge: merge endpoint, conflict detection
- Tests for buildTestFailureSummary: output parsing

37 tests covering all core functionality

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:50:00 +01:00
Shirone
5b97267c0b feat(04-01): create PipelineOrchestrator with step execution and auto-merge
- Extract pipeline orchestration logic from AutoModeService
- executePipeline: Sequential step execution with context continuity
- buildPipelineStepPrompt: Builds prompts with feature context and previous output
- detectPipelineStatus: Identifies pipeline status for resumption
- resumePipeline/resumeFromStep: Handle excluded steps and missing context
- executeTestStep: 5-attempt agent fix loop (REQ-F07)
- attemptMerge: Auto-merge with conflict detection (REQ-F05)
- buildTestFailureSummary: Concise test failure summary for agent
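The 5-attempt fix loop (REQ-F07) can be sketched as follows (function names are illustrative, not the orchestrator's real API):

```typescript
// Sketch: run the test step; on failure, ask the agent to fix and retry,
// up to a fixed attempt budget.
const MAX_FIX_ATTEMPTS = 5;

async function runTestsWithFixLoop(
  runTests: () => Promise<boolean>,
  askAgentToFix: (attempt: number) => Promise<void>
): Promise<boolean> {
  for (let attempt = 1; attempt <= MAX_FIX_ATTEMPTS; attempt++) {
    if (await runTests()) return true; // tests pass, step succeeds
    if (attempt < MAX_FIX_ATTEMPTS) await askAgentToFix(attempt);
  }
  return false; // still failing after the attempt budget
}
```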

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:43:59 +01:00
Shirone
23d36c03de fix(03-03): fix type compatibility and cleanup unused imports
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 17:01:17 +01:00
Shirone
927ae5e21c test(03-03): add AgentExecutor execution tests
- Add 11 new test cases for execute() behavior
- Test callback invocation (progress events, tool events)
- Test error handling (API errors, auth failures)
- Test result structure and response accumulation
- Test abort signal propagation
- Test branchName propagation in event payloads

Test file: 388 -> 935 lines (+547 lines)
2026-01-27 16:57:34 +01:00
Shirone
758c6c0af5 refactor(03-03): wire runAgent() to delegate to AgentExecutor.execute()
- Replace stream processing loop with AgentExecutor.execute() delegation
- Build AgentExecutionOptions object from runAgent() parameters
- Create callbacks for waitForApproval, saveFeatureSummary, etc.
- Remove ~930 lines of duplicated stream processing code
- Progress events now flow through AgentExecutor

File: auto-mode-service.ts reduced from 5086 to 4157 lines
2026-01-27 16:55:58 +01:00
Shirone
a5c02e2418 refactor(03-02): wire AgentExecutor into AutoModeService
- Add AgentExecutor import to auto-mode-service.ts
- Add agentExecutor as constructor parameter (optional, with default)
- Initialize AgentExecutor with TypedEventBus, FeatureStateManager,
  PlanApprovalService, and SettingsService dependencies

This enables constructor injection for testing and prepares for
incremental delegation of runAgent() logic to AgentExecutor.
The AgentExecutor contains the full execution pipeline;
runAgent() delegation will be done incrementally to ensure
stability.
2026-01-27 16:36:28 +01:00
Shirone
d003e9f803 test(03-02): add AgentExecutor tests
- Test constructor injection with all dependencies
- Test interface exports (AgentExecutionOptions, AgentExecutionResult)
- Test callback type signatures (WaitForApprovalFn, SaveFeatureSummaryFn, etc.)
- Test dependency injection patterns with custom implementations
- Verify execute method signature

Note: Full integration tests for streaming/marker detection require
complex mocking of @automaker/utils module which has hoisting issues.
Integration testing covered in E2E and auto-mode-service tests.
2026-01-27 16:34:37 +01:00
Shirone
8a59dbd4a3 feat(03-02): create AgentExecutor class with core streaming logic
- Create AgentExecutor class with constructor injection for TypedEventBus,
  FeatureStateManager, PlanApprovalService, and SettingsService
- Extract streaming pipeline from AutoModeService.runAgent()
- Implement execute() with stream processing, marker detection, file output
- Support recovery path with executePersistedTasks()
- Handle spec generation and approval workflow
- Multi-agent task execution with progress events
- Single-agent continuation fallback
- Debounced file writes (500ms)
- Heartbeat logging for silent model calls
- Abort signal handling throughout execution
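The 500ms debounced file writes can be sketched as follows (assumed mechanism: coalesce bursts of content updates into one write after a quiet period):

```typescript
// Sketch: debounce writes so rapid streaming updates produce one file
// write with the latest content, rather than one write per chunk.
function makeDebouncedWriter(
  write: (content: string) => void,
  delayMs = 500
): (content: string) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let latest = '';
  return (content: string) => {
    latest = content;
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => write(latest), delayMs);
  };
}
```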

Key interfaces:
- AgentExecutionOptions: All execution parameters
- AgentExecutionResult: responseText, specDetected, tasksCompleted, aborted
- Callbacks: waitForApproval, saveFeatureSummary, updateFeatureSummary, buildTaskPrompt
2026-01-27 16:30:28 +01:00
Shirone
c2322e067d refactor(03-01): wire SpecParser into AutoModeService
- Add import for all spec parsing functions from spec-parser.ts
- Remove 209 lines of function definitions (now imported)
- Functions extracted: parseTasksFromSpec, parseTaskLine, detectTaskStartMarker,
  detectTaskCompleteMarker, detectPhaseCompleteMarker, detectSpecFallback, extractSummary
- All server tests pass (1608 tests)
2026-01-27 16:22:10 +01:00
Shirone
52d87bad60 feat(03-01): create SpecParser module with comprehensive tests
- Extract parseTasksFromSpec for parsing tasks from spec content
- Extract marker detection functions (task start/complete, phase complete)
- Extract detectSpecFallback for non-Claude model support
- Extract extractSummary with multi-format support and last-match behavior
- Add 65 unit tests covering all functions and edge cases
2026-01-27 16:20:41 +01:00
Shirone
e06da72672 refactor(02-01): wire PlanApprovalService into AutoModeService
- Add PlanApprovalService import and constructor parameter
- Delegate waitForPlanApproval, cancelPlanApproval, hasPendingApproval
- resolvePlanApproval checks needsRecovery flag and calls executeFeature
- Remove pendingApprovals Map (now in PlanApprovalService)
- Remove PendingApproval interface (moved to plan-approval-service.ts)
2026-01-27 15:45:39 +01:00
Shirone
1bc59c30e0 test(02-01): add PlanApprovalService tests
- 24 tests covering approval, rejection, timeout, cancellation, recovery
- Tests use Vitest fake timers for timeout testing
- Covers needsRecovery flag for server restart recovery
- Covers plan_rejected event emission
- Covers configurable timeout from project settings
2026-01-27 15:43:33 +01:00
Shirone
13d080216e feat(02-01): create PlanApprovalService with timeout and recovery
- Extract plan approval workflow from AutoModeService
- Timeout-wrapped Promise creation via waitForApproval()
- Resolution handling (approve/reject) with needsRecovery flag
- Cancellation support for stopped features
- Per-project configurable timeout (default 30 minutes)
- Event emission through TypedEventBus for plan_rejected
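The timeout-wrapped approval promise can be sketched as follows (a simplified shape; the real service also tracks pending state and emits events):

```typescript
// Sketch: wait for an external approve/reject resolution, falling back
// to 'timeout' if nothing arrives within the configured window.
type ApprovalOutcome = 'approved' | 'rejected' | 'timeout';

function waitForApproval(timeoutMs: number): {
  promise: Promise<ApprovalOutcome>;
  resolve: (outcome: 'approved' | 'rejected') => void;
} {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let settle: (outcome: ApprovalOutcome) => void = () => {};
  const promise = new Promise<ApprovalOutcome>((res) => {
    settle = res;
    timer = setTimeout(() => res('timeout'), timeoutMs);
  });
  return {
    promise,
    resolve: (outcome) => {
      clearTimeout(timer);
      settle(outcome);
    },
  };
}
```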
2026-01-27 15:40:29 +01:00
Shirone
8ef15f3abb refactor(01-02): wire WorktreeResolver and FeatureStateManager into AutoModeService
- Add WorktreeResolver and FeatureStateManager as constructor parameters
- Remove top-level getCurrentBranch function (now in WorktreeResolver)
- Delegate loadFeature, updateFeatureStatus to FeatureStateManager
- Delegate markFeatureInterrupted, resetStuckFeatures to FeatureStateManager
- Delegate updateFeaturePlanSpec, saveFeatureSummary, updateTaskStatus
- Replace findExistingWorktreeForBranch calls with worktreeResolver
- Update tests to mock featureStateManager instead of internal methods
- All 89 tests passing across 3 service files
2026-01-27 14:59:01 +01:00
Shirone
e70f1d6d31 feat(01-02): extract FeatureStateManager from AutoModeService
- Create FeatureStateManager class for feature status updates
- Extract updateFeatureStatus, markFeatureInterrupted, resetStuckFeatures
- Extract updateFeaturePlanSpec, saveFeatureSummary, updateTaskStatus
- Persist BEFORE emit pattern for data integrity (Pitfall 2)
- Handle corrupted JSON with readJsonWithRecovery backup support
- Preserve pipeline_* statuses in markFeatureInterrupted
- Fix bug: version increment now checks old content before applying updates
- Add 33 unit tests covering all state management operations
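The persist-BEFORE-emit pattern (Pitfall 2) in a minimal sketch (stand-in types; the real manager writes feature JSON to disk):

```typescript
// Sketch: listeners must never observe an event whose state has not yet
// been persisted, so the write always precedes the emit.
type Status = 'pending' | 'running' | 'completed';

class FeatureStateSketch {
  readonly log: string[] = [];
  constructor(private emit: (status: Status) => void) {}

  private async persist(status: Status): Promise<void> {
    this.log.push(`persist:${status}`); // stand-in for the JSON write
  }

  async updateStatus(status: Status): Promise<void> {
    await this.persist(status); // 1. persist first
    this.emit(status);          // 2. only then notify listeners
    this.log.push(`emit:${status}`);
  }
}
```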
2026-01-27 14:52:05 +01:00
Shirone
93a6c32c32 refactor(01-03): wire TypedEventBus into AutoModeService
- Import TypedEventBus into AutoModeService
- Add eventBus property initialized via constructor injection
- Remove private emitAutoModeEvent method (now in TypedEventBus)
- Update all 66 emitAutoModeEvent calls to use this.eventBus
- Constructor accepts optional TypedEventBus for testing
2026-01-27 14:49:44 +01:00
Shirone
2a77407aaa feat(01-02): extract WorktreeResolver from AutoModeService
- Create WorktreeResolver class for git worktree discovery
- Extract getCurrentBranch, findWorktreeForBranch, listWorktrees methods
- Add WorktreeInfo interface for worktree metadata
- Always resolve paths to absolute for cross-platform compatibility
- Add 20 unit tests covering all worktree operations
2026-01-27 14:48:55 +01:00
Shirone
1c91d6fcf7 feat(01-03): create TypedEventBus class with tests
- Add TypedEventBus as wrapper around EventEmitter
- Implement emitAutoModeEvent method for auto-mode event format
- Add emit, subscribe, getUnderlyingEmitter methods
- Create comprehensive test suite (20 tests)
- Verify exact event format for frontend compatibility
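A minimal sketch of such a typed wrapper (assumed event shape; the real bus covers more event kinds and the exact frontend format):

```typescript
import { EventEmitter } from 'node:events';

// Sketch: a thin typed facade over EventEmitter so callers emit and
// subscribe with a single well-known event shape.
interface AutoModeEvent {
  type: string;
  projectPath: string;
  payload?: unknown;
}

class TypedEventBusSketch {
  private emitter = new EventEmitter();

  emitAutoModeEvent(event: AutoModeEvent): void {
    this.emitter.emit('auto-mode-event', event);
  }

  subscribe(listener: (event: AutoModeEvent) => void): () => void {
    this.emitter.on('auto-mode-event', listener);
    return () => this.emitter.off('auto-mode-event', listener);
  }

  getUnderlyingEmitter(): EventEmitter {
    return this.emitter;
  }
}
```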
2026-01-27 14:48:36 +01:00
Shirone
55dcdaa476 refactor(01-01): wire ConcurrencyManager into AutoModeService
- AutoModeService now delegates to ConcurrencyManager for all running feature tracking
- Constructor accepts optional ConcurrencyManager for dependency injection
- Remove local RunningFeature interface (imported from ConcurrencyManager)
- Migrate all this.runningFeatures usages to concurrencyManager methods
- Update tests to use concurrencyManager.acquire() instead of direct Map access
- ConcurrencyManager accepts getCurrentBranch function for testability

BREAKING: AutoModeService no longer exposes runningFeatures Map directly.
Tests must use concurrencyManager.acquire() to add running features.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:44:03 +01:00
Shirone
b2b2d65587 feat(01-01): extract ConcurrencyManager class from AutoModeService
- Lease-based reference counting for nested execution support
- acquire() creates entry with leaseCount: 1 or increments existing
- release() decrements leaseCount, deletes at 0 or with force:true
- Project and worktree-level running counts
- RunningFeature interface exported for type sharing
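The lease semantics can be sketched as follows (illustrative; the real manager also tracks worktree paths, branches, and running counts):

```typescript
// Sketch of lease-based reference counting: acquire() increments the
// lease count, release() decrements it, and the entry is deleted only
// at zero (or immediately with force), so nested executions don't
// evict the outer one's running-feature entry.
class LeaseMapSketch {
  private leases = new Map<string, number>();

  acquire(featureId: string): number {
    const next = (this.leases.get(featureId) ?? 0) + 1;
    this.leases.set(featureId, next);
    return next;
  }

  release(featureId: string, force = false): boolean {
    const current = this.leases.get(featureId);
    if (current === undefined) return false;
    if (force || current <= 1) {
      this.leases.delete(featureId); // last lease (or force) removes the entry
      return true;
    }
    this.leases.set(featureId, current - 1);
    return false; // still held by an outer execution
  }

  isRunning(featureId: string): boolean {
    return this.leases.has(featureId);
  }
}
```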

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:33:22 +01:00
Shirone
94f455b6a0 test(01-01): add characterization tests for ConcurrencyManager
- Test lease counting basics (acquire/release semantics)
- Test running count queries (project and worktree level)
- Test feature state queries (isRunning, getRunningFeature, getAllRunning)
- Test edge cases (multiple features, multiple worktrees)
- 36 test cases documenting expected behavior

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-27 14:33:12 +01:00
Shirone
ae24767a78 chore: ignore planning docs from version control
User preference: keep .planning/ local-only
2026-01-27 14:03:43 +01:00
Shirone
4c1a26f4ec docs: initialize project
Refactoring auto-mode-service.ts (5k+ lines) into smaller, focused services with clear boundaries.
2026-01-27 14:01:00 +01:00
Shirone
c30cde242a docs: map existing codebase
- STACK.md - Technologies and dependencies
- ARCHITECTURE.md - System design and patterns
- STRUCTURE.md - Directory layout
- CONVENTIONS.md - Code style and patterns
- TESTING.md - Test structure
- INTEGRATIONS.md - External services
- CONCERNS.md - Technical debt and issues
2026-01-27 13:48:24 +01:00
691 changed files with 8777 additions and 97960 deletions


@@ -1,14 +0,0 @@
-# Auto-generated by Automaker to speed up Gemini CLI startup
-# Prevents Gemini CLI from scanning large directories during context discovery
-.git
-node_modules
-dist
-build
-.next
-.nuxt
-coverage
-.automaker
-.worktrees
-.vscode
-.idea
-*.lock


@@ -25,24 +25,17 @@ runs:
         cache: 'npm'
         cache-dependency-path: package-lock.json
-    - name: Check for SSH URLs in lockfile
-      if: inputs.check-lockfile == 'true'
-      shell: bash
-      run: npm run lint:lockfile
     - name: Configure Git for HTTPS
       shell: bash
       # Convert SSH URLs to HTTPS for git dependencies (e.g., @electron/node-gyp)
       # This is needed because SSH authentication isn't available in CI
       run: git config --global url."https://github.com/".insteadOf "git@github.com:"
-    - name: Auto-fix SSH URLs in lockfile
-      if: inputs.check-lockfile == 'true'
-      shell: bash
-      # Auto-fix any git+ssh:// URLs in package-lock.json before linting
-      # This handles cases where npm reintroduces SSH URLs for git dependencies
-      run: node scripts/fix-lockfile-urls.mjs
-    - name: Check for SSH URLs in lockfile
-      if: inputs.check-lockfile == 'true'
-      shell: bash
-      run: npm run lint:lockfile
     - name: Install dependencies
       shell: bash
       # Use npm install instead of npm ci to correctly resolve platform-specific
@@ -52,7 +45,6 @@ runs:
       run: npm install --ignore-scripts --force
     - name: Install Linux native bindings
-      if: runner.os == 'Linux'
       shell: bash
       # Workaround for npm optional dependencies bug (npm/cli#4828)
       # Explicitly install Linux bindings needed for build tools


@@ -13,13 +13,6 @@ jobs:
   e2e:
     runs-on: ubuntu-latest
     timeout-minutes: 15
-    strategy:
-      fail-fast: false
-      matrix:
-        # shardIndex: [1, 2, 3]
-        # shardTotal: [3]
-        shardIndex: [1]
-        shardTotal: [1]
     steps:
       - name: Checkout code
@@ -53,8 +46,7 @@ jobs:
           echo "SERVER_PID=$SERVER_PID" >> $GITHUB_ENV
         env:
-          PORT: 3108
-          TEST_SERVER_PORT: 3108
+          PORT: 3008
           NODE_ENV: test
           # Use a deterministic API key so Playwright can log in reliably
           AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
@@ -89,13 +81,13 @@ jobs:
           # Wait for health endpoint
           for i in {1..60}; do
-            if curl -s -f http://localhost:3108/api/health > /dev/null 2>&1; then
+            if curl -s -f http://localhost:3008/api/health > /dev/null 2>&1; then
               echo "Backend server is ready!"
               echo "=== Backend logs ==="
               cat backend.log
               echo ""
               echo "Health check response:"
-              curl -s http://localhost:3108/api/health | jq . 2>/dev/null || echo "Health check: $(curl -s http://localhost:3108/api/health 2>/dev/null || echo 'No response')"
+              curl -s http://localhost:3008/api/health | jq . 2>/dev/null || echo "Health check: $(curl -s http://localhost:3008/api/health 2>/dev/null || echo 'No response')"
               exit 0
             fi
@@ -119,11 +111,11 @@ jobs:
           ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
           echo ""
           echo "=== Port status ==="
-          netstat -tlnp 2>/dev/null | grep :3108 || echo "Port 3108 not listening"
-          lsof -i :3108 2>/dev/null || echo "lsof not available or port not in use"
+          netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
+          lsof -i :3008 2>/dev/null || echo "lsof not available or port not in use"
           echo ""
           echo "=== Health endpoint test ==="
-          curl -v http://localhost:3108/api/health 2>&1 || echo "Health endpoint failed"
+          curl -v http://localhost:3008/api/health 2>&1 || echo "Health endpoint failed"
           # Kill the server process if it's still hanging
           if kill -0 $SERVER_PID 2>/dev/null; then
@@ -134,23 +126,16 @@ jobs:
           exit 1
-      - name: Run E2E tests (shard ${{ matrix.shardIndex }}/${{ matrix.shardTotal }})
+      - name: Run E2E tests
         # Playwright automatically starts the Vite frontend via webServer config
         # (see apps/ui/playwright.config.ts) - no need to start it manually
-        run: npx playwright test --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }}
-        working-directory: apps/ui
+        run: npm run test --workspace=apps/ui
         env:
           CI: true
+          VITE_SERVER_URL: http://localhost:3008
           VITE_SKIP_SETUP: 'true'
           # Keep UI-side login/defaults consistent
           AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
-          # Backend is already started above - Playwright config sets
-          # AUTOMAKER_SERVER_PORT so the Vite proxy forwards /api/* to the backend.
-          # Do NOT set VITE_SERVER_URL here: it bypasses the Vite proxy and causes
-          # a cookie domain mismatch (cookies are bound to 127.0.0.1, but
-          # VITE_SERVER_URL=http://localhost:3108 makes the frontend call localhost).
-          TEST_USE_EXTERNAL_BACKEND: 'true'
-          TEST_SERVER_PORT: 3108
       - name: Print backend logs on failure
         if: failure()
@@ -162,13 +147,13 @@ jobs:
           ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
           echo ""
           echo "=== Port status ==="
-          netstat -tlnp 2>/dev/null | grep :3108 || echo "Port 3108 not listening"
+          netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
       - name: Upload Playwright report
         uses: actions/upload-artifact@v4
         if: always()
with: with:
name: playwright-report-shard-${{ matrix.shardIndex }}-of-${{ matrix.shardTotal }} name: playwright-report
path: apps/ui/playwright-report/ path: apps/ui/playwright-report/
retention-days: 7 retention-days: 7
@@ -176,21 +161,12 @@ jobs:
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v4
if: always() if: always()
with: with:
name: test-results-shard-${{ matrix.shardIndex }}-of-${{ matrix.shardTotal }} name: test-results
path: | path: |
apps/ui/test-results/ apps/ui/test-results/
retention-days: 7 retention-days: 7
if-no-files-found: ignore if-no-files-found: ignore
- name: Upload blob report for merging
uses: actions/upload-artifact@v4
if: always()
with:
name: blob-report-shard-${{ matrix.shardIndex }}-of-${{ matrix.shardTotal }}
path: apps/ui/blob-report/
retention-days: 1
if-no-files-found: ignore
- name: Cleanup - Kill backend server - name: Cleanup - Kill backend server
if: always() if: always()
run: | run: |


@@ -95,11 +95,9 @@ jobs:
  upload:
    needs: build
    runs-on: ubuntu-latest
-    if: github.event.release.draft == false
    steps:
-      - name: Checkout code
-        uses: actions/checkout@v4
      - name: Download macOS artifacts
        uses: actions/download-artifact@v4
        with:

.gitignore

@@ -65,17 +65,6 @@ coverage/
*.lcov
playwright-report/
blob-report/
-test/**/test-project-[0-9]*/
-test/opus-thinking-*/
-test/agent-session-test-*/
-test/feature-backlog-test-*/
-test/running-task-display-test-*/
-test/agent-output-modal-responsive-*/
-test/fixtures/
-test/board-bg-test-*/
-test/edit-feature-test-*/
-test/open-project-test-*/
# Environment files (keep .example)
.env
@@ -101,8 +90,6 @@ pnpm-lock.yaml
yarn.lock
# Fork-specific workflow files (should never be committed)
-DEVELOPMENT_WORKFLOW.md
-check-sync.sh
# API key files
data/.api-key
data/credentials.json
@@ -111,5 +98,3 @@ data/
# GSD planning docs (local-only)
.planning/
-.mcp.json
-.planning


@@ -38,18 +38,6 @@ else
  export PATH="$PATH:/usr/local/bin:/opt/homebrew/bin:/usr/bin"
fi
-# Auto-fix git+ssh:// URLs in package-lock.json if it's being committed
-# This prevents CI failures from SSH URLs that npm introduces for git dependencies
-if git diff --cached --name-only | grep -q "^package-lock.json$"; then
-  if command -v node >/dev/null 2>&1; then
-    if grep -q "git+ssh://" package-lock.json 2>/dev/null; then
-      echo "Fixing git+ssh:// URLs in package-lock.json..."
-      node scripts/fix-lockfile-urls.mjs
-      git add package-lock.json
-    fi
-  fi
-fi
# Run lint-staged - works with or without nvm
# Prefer npx, fallback to npm exec, both work with system-installed Node.js
if command -v npx >/dev/null 2>&1; then


@@ -161,7 +161,7 @@ Use `resolveModelString()` from `@automaker/model-resolver` to convert model ali
- `haiku` → `claude-haiku-4-5`
- `sonnet` → `claude-sonnet-4-20250514`
-- `opus` → `claude-opus-4-6`
+- `opus` → `claude-opus-4-5-20251101`
## Environment Variables ## Environment Variables

DEVELOPMENT_WORKFLOW.md (new file)

@@ -0,0 +1,253 @@
# Development Workflow
This document defines the standard workflow for keeping a branch in sync with the upstream
release candidate (RC) and for shipping feature work. It is paired with `check-sync.sh`.
## Quick Decision Rule
1. Ask the user to select a workflow:
- **Sync Workflow** → you are maintaining the current RC branch with fixes/improvements
and will push the same fixes to both origin and upstream RC when you have local
commits to publish.
- **PR Workflow** → you are starting new feature work on a new branch; upstream updates
happen via PR only.
2. After the user selects, run:
```bash
./check-sync.sh
```
3. Use the status output to confirm alignment. If it reports **diverged**, default to
merging `upstream/<TARGET_RC>` into the current branch and preserving local commits.
For Sync Workflow, when the working tree is clean and you are behind upstream RC,
proceed with the fetch + merge without asking for additional confirmation.
## Target RC Resolution
The target RC is resolved dynamically so the workflow stays current as the RC changes.
Resolution order:
1. Latest `upstream/v*rc` branch (auto-detected)
2. `upstream/HEAD` (fallback)
3. If neither is available, you must pass `--rc <branch>`
Override for a single run:
```bash
./check-sync.sh --rc <rc-branch>
```
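The resolution order above can be sketched in shell. This is a hypothetical illustration of the lookup, not check-sync.sh's actual implementation; the `upstream` remote name and the version-sort step are assumptions.

```shell
# Sketch: resolve the target RC branch following the documented order.
resolve_target_rc() {
  # 1. Latest upstream/v*rc branch, picked by version sort
  rc=$(git branch -r --list 'upstream/v*rc' | tr -d ' ' | sort -V | tail -n 1)
  if [ -n "$rc" ]; then
    echo "${rc#upstream/}"
    return 0
  fi
  # 2. Fall back to the branch upstream/HEAD points at
  head_ref=$(git symbolic-ref refs/remotes/upstream/HEAD 2>/dev/null)
  if [ -n "$head_ref" ]; then
    echo "${head_ref#refs/remotes/upstream/}"
    return 0
  fi
  # 3. Neither available: the caller must pass --rc <branch>
  return 1
}
```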
## Pre-Flight Checklist
1. Confirm a clean working tree:
```bash
git status
```
2. Confirm the current branch:
```bash
git branch --show-current
```
3. Ensure remotes exist (origin + upstream):
```bash
git remote -v
```
## Sync Workflow (Upstream Sync)
Use this flow when you are updating the current branch with fixes or improvements and
intend to keep origin and upstream RC in lockstep.
1. **Check sync status**
```bash
./check-sync.sh
```
2. **Update from upstream RC before editing (no pulls)**
- **Behind upstream RC** → fetch and merge RC into your branch:
```bash
git fetch upstream
git merge upstream/<TARGET_RC> --no-edit
```
When the working tree is clean and the user selected Sync Workflow, proceed without
an extra confirmation prompt.
   - **Diverged** → merge `upstream/<TARGET_RC>` into the current branch, preserving local
     commits; stop and ask for guidance only if the merge cannot be completed.
3. **Resolve conflicts if needed**
- Handle conflicts intelligently: preserve upstream behavior and your local intent.
4. **Make changes and commit (if you are delivering fixes)**
```bash
git add -A
git commit -m "type: description"
```
5. **Build to verify**
```bash
npm run build:packages
npm run build
```
6. **Push after a successful merge to keep remotes aligned**
   - Always ask the user which push to perform.
   - If you only merged upstream RC changes, push **origin only** to sync your fork:
```bash
git push origin <branch>
```
   - If you have local fixes to publish, push **origin + upstream** so the same fixes land in the RC:
```bash
git push origin <branch>
git push upstream <branch>:<TARGET_RC>
```
7. **Re-check sync**
```bash
./check-sync.sh
```
## PR Workflow (Feature Work)
Use this flow only for new feature work on a new branch. Do not push to upstream RC.
1. **Create or switch to a feature branch**
```bash
git checkout -b <branch>
```
2. **Make changes and commit**
```bash
git add -A
git commit -m "type: description"
```
3. **Merge upstream RC before shipping**
```bash
git merge upstream/<TARGET_RC> --no-edit
```
4. **Build and/or test**
```bash
npm run build:packages
npm run build
```
5. **Push to origin**
```bash
git push -u origin <branch>
```
6. **Create or update the PR**
- Use `gh pr create` or the GitHub UI.
7. **Review and follow-up**
- Apply feedback, commit changes, and push again.
- Re-run `./check-sync.sh` if additional upstream sync is needed.
## Conflict Resolution Checklist
1. Identify which changes are from upstream vs. local.
2. Preserve both behaviors where possible; avoid dropping either side.
3. Prefer minimal, safe integrations over refactors.
4. Re-run build commands after resolving conflicts.
5. Re-run `./check-sync.sh` to confirm status.
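For step 1 of the checklist, git itself can show which files are still conflicted and which commits touched a given file on each side (standard git commands; the file path is illustrative):

```shell
# Files still unresolved after a merge stopped on conflicts
git diff --name-only --diff-filter=U

# Commits from both sides of the merge that touched one conflicted file
git log --merge --oneline -- path/to/conflicted-file
```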
## Build/Test Matrix
- **Sync Workflow**: `npm run build:packages` and `npm run build`.
- **PR Workflow**: `npm run build:packages` and `npm run build` (plus relevant tests).
## Post-Sync Verification
1. `git status` should be clean.
2. `./check-sync.sh` should show expected alignment.
3. Verify recent commits with:
```bash
git log --oneline -5
```
## check-sync.sh Usage
- Uses dynamic Target RC resolution (see above).
- Override target RC:
```bash
./check-sync.sh --rc <rc-branch>
```
- Optional preview limit:
```bash
./check-sync.sh --preview 10
```
- The script prints sync status for both origin and upstream and previews recent commits
when you are behind.
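The ahead/behind status the script reports maps onto plain `git rev-list --count` range queries. A minimal sketch of that classification follows; it assumes the script's internals, which are not shown here:

```shell
# Sketch: classify the sync status of HEAD against a target ref.
sync_status() {
  target="$1"                                   # e.g. upstream/v0.14.0rc
  behind=$(git rev-list --count "HEAD..$target")  # commits only on target
  ahead=$(git rev-list --count "$target..HEAD")   # commits only on HEAD
  if [ "$ahead" -eq 0 ] && [ "$behind" -eq 0 ]; then
    echo "in sync"
  elif [ "$ahead" -gt 0 ] && [ "$behind" -gt 0 ]; then
    echo "diverged"
  elif [ "$behind" -gt 0 ]; then
    echo "behind by $behind"
  else
    echo "ahead by $ahead"
  fi
}
```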
## Stop Conditions
Stop and ask for guidance if any of the following are true:
- The working tree is dirty and you are about to merge or push.
- `./check-sync.sh` reports **diverged** during PR Workflow, or a merge cannot be completed.
- The script cannot resolve a target RC and requests `--rc`.
- A build fails after sync or conflict resolution.
## AI Agent Guardrails
- Always run `./check-sync.sh` before merges or pushes.
- Always ask for explicit user approval before any push command.
- Do not ask for additional confirmation before a Sync Workflow fetch + merge when the
working tree is clean and the user has already selected the Sync Workflow.
- Choose Sync vs PR workflow based on intent (RC maintenance vs new feature work), not
on the script's workflow hint.
- Only use force push when the user explicitly requests a history rewrite.
- Ask for explicit approval before dependency installs, branch deletion, or destructive operations.
- When resolving merge conflicts, preserve both upstream changes and local intent where possible.
- Do not create or switch to new branches unless the user explicitly requests it.
## AI Agent Decision Guidance
Agents should provide concrete, task-specific suggestions instead of repeatedly asking
open-ended questions. Use the user's stated goal and the `./check-sync.sh` status to
propose a default path plus one or two alternatives, and only ask for confirmation when
an action requires explicit approval.
Default behavior:
- If the intent is RC maintenance, recommend the Sync Workflow and proceed with
safe preparation steps (status checks, previews). If the branch is behind upstream RC,
fetch and merge without additional confirmation when the working tree is clean, then
push to origin to keep the fork aligned. Push upstream only when there are local fixes
to publish.
- If the intent is new feature work, recommend the PR Workflow and proceed with safe
preparation steps (status checks, identifying scope). Ask for approval before merges,
pushes, or dependency installs.
- If `./check-sync.sh` reports **diverged** during Sync Workflow, merge
`upstream/<TARGET_RC>` into the current branch and preserve local commits.
- If `./check-sync.sh` reports **diverged** during PR Workflow, stop and ask for guidance
with a short explanation of the divergence and the minimal options to resolve it.
If the user's intent is RC maintenance, prefer the Sync Workflow regardless of the
script hint. When the intent is new feature work, use the PR Workflow and avoid upstream
RC pushes.
Suggestion format (keep it short):
- **Recommended**: one sentence with the default path and why it fits the task.
- **Alternatives**: one or two options with the tradeoff or prerequisite.
- **Approval points**: mention any upcoming actions that need explicit approval (exclude sync
workflow pushes and merges).
## Failure Modes and How to Avoid Them
Sync Workflow:
- Wrong RC target: verify the auto-detected RC in `./check-sync.sh` output before merging.
- Diverged from upstream RC: stop and resolve manually before any merge or push.
- Dirty working tree: commit or stash before syncing to avoid accidental merges.
- Missing remotes: ensure both `origin` and `upstream` are configured before syncing.
- Build breaks after sync: run `npm run build:packages` and `npm run build` before pushing.
PR Workflow:
- Branch not synced to current RC: re-run `./check-sync.sh` and merge RC before shipping.
- Pushing the wrong branch: confirm `git branch --show-current` before pushing.
- Unreviewed changes: always commit and push to origin before opening or updating a PR.
- Skipped tests/builds: run the build commands before declaring the PR ready.
## Notes
- Avoid merging with uncommitted changes; commit or stash first.
- Prefer merge over rebase for PR branches; rebases rewrite history and often require a force push,
which should only be done with an explicit user request.
- Use clear, conventional commit messages and split unrelated changes into separate commits.


@@ -118,7 +118,6 @@ RUN curl -fsSL https://opencode.ai/install | bash && \
    echo "=== Checking OpenCode CLI installation ===" && \
    ls -la /home/automaker/.local/bin/ && \
    (which opencode && opencode --version) || echo "opencode installed (may need auth setup)"
USER root
# Add PATH to profile so it's available in all interactive shells (for login shells)
@@ -148,15 +147,6 @@ COPY --from=server-builder /app/apps/server/package*.json ./apps/server/
# Copy node_modules (includes symlinks to libs)
COPY --from=server-builder /app/node_modules ./node_modules
-# Install Playwright Chromium browser for AI agent verification tests
-# This adds ~300MB to the image but enables automated testing mode out of the box
-# Using the locally installed playwright ensures we use the pinned version from package-lock.json
-USER automaker
-RUN ./node_modules/.bin/playwright install chromium && \
-    echo "=== Playwright Chromium installed ===" && \
-    ls -la /home/automaker/.cache/ms-playwright/
-USER root
# Create data and projects directories
RUN mkdir -p /data /projects && chown automaker:automaker /data /projects
@@ -209,10 +199,9 @@ COPY libs ./libs
COPY apps/ui ./apps/ui
# Build packages in dependency order, then build UI
-# When VITE_SERVER_URL is empty, the UI uses relative URLs (e.g., /api/...) which nginx proxies
-# to the server container. This avoids CORS issues entirely in Docker Compose setups.
-# Override at build time if needed: --build-arg VITE_SERVER_URL=http://api.example.com
-ARG VITE_SERVER_URL=
+# VITE_SERVER_URL tells the UI where to find the API server
+# Use ARG to allow overriding at build time: --build-arg VITE_SERVER_URL=http://api.example.com
+ARG VITE_SERVER_URL=http://localhost:3008
ENV VITE_SKIP_ELECTRON=true
ENV VITE_SERVER_URL=${VITE_SERVER_URL}
RUN npm run build:packages && npm run build --workspace=apps/ui

LICENSE

@@ -1,27 +1,141 @@
-## Project Status
-**This project is no longer actively maintained.** The codebase is provided as-is for those who wish to use, study, or fork it. No bug fixes, security updates, or new features are being developed. Community contributions may still be accepted, but there is no guarantee of review or merge.
AUTOMAKER LICENSE AGREEMENT
This License Agreement ("Agreement") is entered into between you ("Licensee") and the copyright holders of Automaker ("Licensor"). By using, copying, modifying, downloading, cloning, or distributing the Software (as defined below), you agree to be bound by the terms of this Agreement.
1. DEFINITIONS
"Software" means the Automaker software, including all source code, object code, documentation, and related materials.
"Generated Files" means files created by the Software during normal operation to store internal state, configuration, or working data, including but not limited to app_spec.txt, feature.json, and similar files generated by the Software. Generated Files are not considered part of the Software for the purposes of this license and are not subject to the restrictions herein.
"Derivative Work" means any work that is based on, derived from, or incorporates the Software or any substantial portion of it, including but not limited to modifications, forks, adaptations, translations, or any altered version of the Software.
"Monetization" means any activity that generates revenue, income, or commercial benefit from the Software itself or any Derivative Work, including but not limited to:
- Reselling, redistributing, or sublicensing the Software, any Derivative Work, or any substantial portion thereof
- Including the Software, any Derivative Work, or substantial portions thereof in a product or service that you sell or distribute
- Offering the Software, any Derivative Work, or substantial portions thereof as a standalone product or service for sale
- Hosting the Software or any Derivative Work as a service (whether free or paid) for use by others, including cloud hosting, Software-as-a-Service (SaaS), or any other form of hosted access for third parties
- Extracting, reselling, redistributing, or sublicensing any prompts, context, or other instructional content bundled within the Software
- Creating, distributing, or selling modified versions, forks, or Derivative Works of the Software
Monetization does NOT include:
- Using the Software internally within your organization, regardless of whether your organization is for-profit
- Using the Software to build products or services that generate revenue, as long as you are not reselling or redistributing the Software itself
- Using the Software to provide services for which fees are charged, as long as the Software itself is not being resold or redistributed
- Hosting the Software anywhere for personal use by a single developer, as long as the Software is not made accessible to others
"Core Contributors" means the following individuals who are granted perpetual, royalty-free licenses:
- Cody Seibert (webdevcody)
- SuperComboGamer (SCG)
- Kacper Lachowicz (Shironex, Shirone)
- Ben Scott (trueheads)
2. GRANT OF LICENSE
Subject to the terms and conditions of this Agreement, Licensor hereby grants to Licensee a non-exclusive, non-transferable license to use, copy, modify, and distribute the Software, provided that:
a) Licensee may freely clone, install, and use the Software locally or within an organization for the purpose of building, developing, and maintaining other products, software, or services. There are no restrictions on the products you build _using_ the Software.
b) Licensee may run the Software on personal or organizational infrastructure for internal use.
c) Core Contributors are each individually granted a perpetual, worldwide, royalty-free, non-exclusive license to use, copy, modify, distribute, and sublicense the Software for any purpose, including Monetization, without payment of any fees or royalties. Each Core Contributor may exercise these rights independently and does not require permission, consent, or approval from any other Core Contributor to Monetize the Software in any way they see fit.
d) Commercial licenses for the Software may be discussed and issued to external parties or companies seeking to use the Software for financial gain or Monetization purposes. Core Contributors already have full rights under section 2(c) and do not require commercial licenses. Any commercial license issued to external parties shall require a unanimous vote by all Core Contributors and shall be granted in writing and signed by all Core Contributors.
e) The list of individuals defined as "Core Contributors" in Section 1 shall be amended to reflect any revocation or reinstatement of status made under this section.
3. RESTRICTIONS
Licensee may NOT:
- Engage in any Monetization of the Software or any Derivative Work without explicit written permission from all Core Contributors
- Resell, redistribute, or sublicense the Software, any Derivative Work, or any substantial portion thereof
- Create, distribute, or sell modified versions, forks, or Derivative Works of the Software for any commercial purpose
- Include the Software, any Derivative Work, or substantial portions thereof in a product or service that you sell or distribute
- Offer the Software, any Derivative Work, or substantial portions thereof as a standalone product or service for sale
- Extract, resell, redistribute, or sublicense any prompts, context, or other instructional content bundled within the Software
- Host the Software or any Derivative Work as a service (whether free or paid) for use by others (except Core Contributors)
- Remove or alter any copyright notices or license terms
- Use the Software in any manner that violates applicable laws or regulations
Licensee MAY:
- Use the Software internally within their organization (commercial or non-profit)
- Use the Software to build other commercial products (products that do NOT contain the Software or Derivative Works)
- Modify the Software for internal use within their organization (commercial or non-profit)
4. CORE CONTRIBUTOR STATUS MANAGEMENT
a) Core Contributor status may be revoked indefinitely by the remaining Core Contributors if:
- A Core Contributor cannot be reached for a period of one (1) month through reasonable means of communication (including but not limited to email, Discord, GitHub, or other project communication channels)
- AND the Core Contributor has not contributed to the project during that one-month period. For purposes of this section, "contributed" means at least one of the following activities:
- Discussing the Software through project communication channels
- Committing code changes to the project repository
- Submitting bug fixes or patches
- Participating in project-related discussions or decision-making
b) Revocation of Core Contributor status requires a unanimous vote by all other Core Contributors (excluding the Core Contributor whose status is being considered for revocation).
c) Upon revocation of Core Contributor status, the individual shall no longer be considered a Core Contributor and shall lose the rights granted under section 2(c) of this Agreement. However, any Contributions made prior to revocation shall remain subject to the terms of section 5 (CONTRIBUTIONS AND RIGHTS ASSIGNMENT).
d) A revoked Core Contributor may be reinstated to Core Contributor status with a unanimous vote by all current Core Contributors. Upon reinstatement, the individual shall regain all rights granted under section 2(c) of this Agreement.
5. CONTRIBUTIONS AND RIGHTS ASSIGNMENT
By submitting, pushing, or contributing any code, documentation, pull requests, issues, or other materials ("Contributions") to the Automaker project, you agree to the following terms without reservation:
a) **Full Ownership Transfer & Rights Grant:** You hereby assign to the Core Contributors all right, title, and interest in and to your Contributions, including all copyrights, patents, and other intellectual property rights. If such assignment is not effective under applicable law, you grant the Core Contributors an unrestricted, perpetual, worldwide, non-exclusive, royalty-free, fully paid-up, irrevocable, sublicensable, and transferable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, perform, display, and otherwise exploit your Contributions in any manner they see fit, including for any commercial purpose or Monetization.
b) **No Take-Backs:** You understand and agree that this grant of rights is irrevocable ("no take-backs"). You cannot revoke, rescind, or terminate this grant of rights once your Contribution has been submitted.
c) **Waiver of Moral Rights:** You waive any "moral rights" or other rights with respect to attribution of authorship or integrity of materials regarding your Contributions that you may have under any applicable law.
d) **Right to Contribute:** You represent and warrant that you are the original author of the Contributions, or that you have sufficient rights to grant the rights conveyed by this section, and that your Contributions do not infringe upon the rights of any third party.
6. TERMINATION
This license will terminate automatically if Licensee breaches any term of this Agreement. Upon termination, Licensee must immediately cease all use of the Software and destroy all copies in their possession.
7. HIGH RISK DISCLAIMER AND LIMITATION OF LIABILITY
a) **AI RISKS:** THE SOFTWARE UTILIZES ARTIFICIAL INTELLIGENCE TO GENERATE CODE, EXECUTE COMMANDS, AND INTERACT WITH YOUR FILE SYSTEM. YOU ACKNOWLEDGE THAT AI SYSTEMS CAN BE UNPREDICTABLE, MAY GENERATE INCORRECT, INSECURE, OR DESTRUCTIVE CODE, AND MAY TAKE ACTIONS THAT COULD DAMAGE YOUR SYSTEM, FILES, OR HARDWARE.
b) **USE AT YOUR OWN RISK:** YOU AGREE THAT YOUR USE OF THE SOFTWARE IS SOLELY AT YOUR OWN RISK. THE CORE CONTRIBUTORS AND LICENSOR DO NOT GUARANTEE THAT THE SOFTWARE OR ANY CODE GENERATED BY IT WILL BE SAFE, BUG-FREE, OR FUNCTIONAL.
c) **NO WARRANTY:** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT.
d) **LIMITATION OF LIABILITY:** IN NO EVENT SHALL THE CORE CONTRIBUTORS, LICENSORS, OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE, INCLUDING BUT NOT LIMITED TO:
- DAMAGE TO HARDWARE OR COMPUTER SYSTEMS
- DATA LOSS OR CORRUPTION
- GENERATION OF BAD, VULNERABLE, OR MALICIOUS CODE
- FINANCIAL LOSSES
- BUSINESS INTERRUPTION
8. LICENSE AMENDMENTS
Any amendment, modification, or update to this License Agreement must be agreed upon unanimously by all Core Contributors. No changes to this Agreement shall be effective unless all Core Contributors have provided their written consent or approval through a unanimous vote.
9. CONTACT
For inquiries regarding this license or permissions for Monetization, please contact the Core Contributors through the official project channels:
- Agentic Jumpstart Discord: https://discord.gg/JUDWZDN3VT
- Website: https://automaker.app
- Email: automakerapp@gmail.com
Any permission for Monetization requires the unanimous written consent of all Core Contributors.
10. GOVERNING LAW
This Agreement shall be governed by and construed in accordance with the laws of the State of Tennessee, USA, without regard to conflict of law principles.
By using the Software, you acknowledge that you have read this Agreement, understand it, and agree to be bound by its terms and conditions.
---
-MIT License
Copyright (c) 2025 Automaker Core Contributors
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.


@@ -1,2 +0,0 @@
-{
-  "$schema": "https://opencode.ai/config.json",}


@@ -288,31 +288,6 @@ services:
**Note:** The Claude CLI config must be writable (do not use `:ro` flag) as the CLI writes debug files. **Note:** The Claude CLI config must be writable (do not use `:ro` flag) as the CLI writes debug files.
> **⚠️ Important: Linux/WSL Users**
>
> The container runs as UID 1001 by default. If your host user has a different UID (common on Linux/WSL where the first user is UID 1000), you must create a `.env` file to match your host user:
>
> ```bash
> # Check your UID/GID
> id -u # outputs your UID (e.g., 1000)
> id -g # outputs your GID (e.g., 1000)
> ```
>
> Create a `.env` file in the automaker directory:
>
> ```
> UID=1000
> GID=1000
> ```
>
> Then rebuild the images:
>
> ```bash
> docker compose build
> ```
>
> Without this, files written by the container will be inaccessible to your host user.
##### GitHub CLI Authentication (For Git Push/PR Operations) ##### GitHub CLI Authentication (For Git Push/PR Operations)
To enable git push and GitHub CLI operations inside the container: To enable git push and GitHub CLI operations inside the container:
@@ -363,42 +338,6 @@ services:
The Docker image supports both AMD64 and ARM64 architectures. The GitHub CLI and Claude CLI are automatically downloaded for the correct architecture during build. The Docker image supports both AMD64 and ARM64 architectures. The GitHub CLI and Claude CLI are automatically downloaded for the correct architecture during build.
##### Playwright for Automated Testing
The Docker image includes **Playwright Chromium pre-installed** for AI agent verification tests. When agents implement features in automated testing mode, they use Playwright to verify the implementation works correctly.
**No additional setup required** - Playwright verification works out of the box.
#### Optional: Persist browsers for manual updates
By default, Playwright Chromium is pre-installed in the Docker image. If you need to manually update browsers or want to persist browser installations across container restarts (not image rebuilds), you can mount a volume.
**Important:** When you first add this volume mount to an existing setup, the empty volume will override the pre-installed browsers. You must re-install them:
```bash
# After adding the volume mount for the first time
docker exec --user automaker -w /app automaker-server npx playwright install chromium
```
Add this to your `docker-compose.override.yml`:
```yaml
services:
server:
volumes:
- playwright-cache:/home/automaker/.cache/ms-playwright
volumes:
playwright-cache:
name: automaker-playwright-cache
```
**Updating browsers manually:**
```bash
docker exec --user automaker -w /app automaker-server npx playwright install chromium
```
### Testing

#### End-to-End Tests (Playwright)
@@ -705,10 +644,26 @@
Join the **Agentic Jumpstart** Discord to connect with other builders exploring

👉 [Agentic Jumpstart Discord](https://discord.gg/jjem7aEDKU)
## Project Status
**This project is no longer actively maintained.** The codebase is provided as-is for those who wish to use, study, or fork it. No bug fixes, security updates, or new features are being developed. Community contributions may still be accepted, but there is no guarantee of review or merge.
## License

-This project is licensed under the **MIT License**. See [LICENSE](LICENSE) for the full text.
+This project is licensed under the **Automaker License Agreement**. See [LICENSE](LICENSE) for the full text.
**Summary of Terms:**
- **Allowed:**
- **Build Anything:** You can clone and use Automaker locally or in your organization to build ANY product (commercial or free).
- **Internal Use:** You can use it internally within your company (commercial or non-profit) without restriction.
- **Modify:** You can modify the code for internal use within your organization (commercial or non-profit).
- **Restricted (The "No Monetization of the Tool" Rule):**
- **No Resale:** You cannot resell Automaker itself.
- **No SaaS:** You cannot host Automaker as a service for others.
- **No Monetizing Mods:** You cannot distribute modified versions of Automaker for money.
- **Liability:**
- **Use at Own Risk:** This tool uses AI. We are **NOT** responsible if it breaks your computer, deletes your files, or generates bad code. You assume all risk.
- **Contributing:**
- By contributing to this repository, you grant the Core Contributors full, irrevocable rights to your code (copyright assignment).
**Core Contributors** (Cody Seibert (webdevcody), SuperComboGamer (SCG), Kacper Lachowicz (Shironex, Shirone), and Ben Scott (trueheads)) are granted perpetual, royalty-free licenses for any use, including monetization.

View File

@@ -52,12 +52,6 @@ HOST=0.0.0.0
# Port to run the server on
PORT=3008
# Port to run the server on for testing
TEST_SERVER_PORT=3108
# Port to run the UI on for testing
TEST_PORT=3107
# Data directory for sessions and metadata
DATA_DIR=./data

View File

@@ -1,74 +0,0 @@
import { defineConfig, globalIgnores } from 'eslint/config';
import js from '@eslint/js';
import ts from '@typescript-eslint/eslint-plugin';
import tsParser from '@typescript-eslint/parser';
const eslintConfig = defineConfig([
js.configs.recommended,
{
files: ['**/*.ts'],
languageOptions: {
parser: tsParser,
parserOptions: {
ecmaVersion: 'latest',
sourceType: 'module',
},
globals: {
// Node.js globals
console: 'readonly',
process: 'readonly',
Buffer: 'readonly',
__dirname: 'readonly',
__filename: 'readonly',
URL: 'readonly',
URLSearchParams: 'readonly',
AbortController: 'readonly',
AbortSignal: 'readonly',
fetch: 'readonly',
Response: 'readonly',
Request: 'readonly',
Headers: 'readonly',
FormData: 'readonly',
RequestInit: 'readonly',
// Timers
setTimeout: 'readonly',
setInterval: 'readonly',
clearTimeout: 'readonly',
clearInterval: 'readonly',
setImmediate: 'readonly',
clearImmediate: 'readonly',
queueMicrotask: 'readonly',
// Node.js types
NodeJS: 'readonly',
},
},
plugins: {
'@typescript-eslint': ts,
},
rules: {
...ts.configs.recommended.rules,
'@typescript-eslint/no-unused-vars': [
'warn',
{
argsIgnorePattern: '^_',
varsIgnorePattern: '^_',
caughtErrorsIgnorePattern: '^_',
ignoreRestSiblings: true,
},
],
'@typescript-eslint/no-explicit-any': 'warn',
// Server code frequently works with terminal output containing ANSI escape codes
'no-control-regex': 'off',
'@typescript-eslint/ban-ts-comment': [
'error',
{
'ts-nocheck': 'allow-with-description',
minimumDescriptionLength: 10,
},
],
},
},
globalIgnores(['dist/**', 'node_modules/**']),
]);
export default eslintConfig;

View File

@@ -1,6 +1,6 @@
{
  "name": "@automaker/server",
-  "version": "1.0.0",
+  "version": "0.13.0",
  "description": "Backend server for Automaker - provides API for both web and Electron modes",
  "author": "AutoMaker Team",
  "license": "SEE LICENSE IN LICENSE",
@@ -24,7 +24,7 @@
    "test:unit": "vitest run tests/unit"
  },
  "dependencies": {
-    "@anthropic-ai/claude-agent-sdk": "0.2.32",
+    "@anthropic-ai/claude-agent-sdk": "0.1.76",
    "@automaker/dependency-resolver": "1.0.0",
    "@automaker/git-utils": "1.0.0",
    "@automaker/model-resolver": "1.0.0",
@@ -34,7 +34,7 @@
    "@automaker/utils": "1.0.0",
    "@github/copilot-sdk": "^0.1.16",
    "@modelcontextprotocol/sdk": "1.25.2",
-    "@openai/codex-sdk": "^0.98.0",
+    "@openai/codex-sdk": "^0.77.0",
    "cookie-parser": "1.4.7",
    "cors": "2.8.5",
    "dotenv": "17.2.3",
@@ -45,7 +45,6 @@
    "yaml": "2.7.0"
  },
  "devDependencies": {
-    "@playwright/test": "1.57.0",
    "@types/cookie": "0.6.0",
    "@types/cookie-parser": "1.4.10",
    "@types/cors": "2.8.19",

View File

@@ -66,10 +66,6 @@ import { createCodexRoutes } from './routes/codex/index.js';
import { CodexUsageService } from './services/codex-usage-service.js';
import { CodexAppServerService } from './services/codex-app-server-service.js';
import { CodexModelCacheService } from './services/codex-model-cache-service.js';
-import { createZaiRoutes } from './routes/zai/index.js';
-import { ZaiUsageService } from './services/zai-usage-service.js';
-import { createGeminiRoutes } from './routes/gemini/index.js';
-import { GeminiUsageService } from './services/gemini-usage-service.js';
import { createGitHubRoutes } from './routes/github/index.js';
import { createContextRoutes } from './routes/context/index.js';
import { createBacklogPlanRoutes } from './routes/backlog-plan/index.js';
@@ -125,57 +121,21 @@ const BOX_CONTENT_WIDTH = 67;
// The Claude Agent SDK can use either ANTHROPIC_API_KEY or Claude Code CLI authentication
(async () => {
  const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
-  const hasEnvOAuthToken = !!process.env.CLAUDE_CODE_OAUTH_TOKEN;
-  logger.debug('[CREDENTIAL_CHECK] Starting credential detection...');
-  logger.debug('[CREDENTIAL_CHECK] Environment variables:', {
-    hasAnthropicKey,
-    hasEnvOAuthToken,
-  });
  if (hasAnthropicKey) {
    logger.info('✓ ANTHROPIC_API_KEY detected');
    return;
  }
-  if (hasEnvOAuthToken) {
-    logger.info('✓ CLAUDE_CODE_OAUTH_TOKEN detected');
-    return;
-  }
  // Check for Claude Code CLI authentication
-  // Store indicators outside the try block so we can use them in the warning message
-  let cliAuthIndicators: Awaited<ReturnType<typeof getClaudeAuthIndicators>> | null = null;
  try {
-    cliAuthIndicators = await getClaudeAuthIndicators();
-    const indicators = cliAuthIndicators;
-    // Log detailed credential detection results
-    const { checks, ...indicatorSummary } = indicators;
-    logger.debug('[CREDENTIAL_CHECK] Claude CLI auth indicators:', indicatorSummary);
-    logger.debug('[CREDENTIAL_CHECK] File check details:', checks);
+    const indicators = await getClaudeAuthIndicators();
    const hasCliAuth =
      indicators.hasStatsCacheWithActivity ||
      (indicators.hasSettingsFile && indicators.hasProjectsSessions) ||
      (indicators.hasCredentialsFile &&
        (indicators.credentials?.hasOAuthToken || indicators.credentials?.hasApiKey));
-    logger.debug('[CREDENTIAL_CHECK] Auth determination:', {
-      hasCliAuth,
-      reason: hasCliAuth
-        ? indicators.hasStatsCacheWithActivity
-          ? 'stats cache with activity'
-          : indicators.hasSettingsFile && indicators.hasProjectsSessions
-            ? 'settings file + project sessions'
-            : indicators.credentials?.hasOAuthToken
-              ? 'credentials file with OAuth token'
-              : 'credentials file with API key'
-        : 'no valid credentials found',
-    });
    if (hasCliAuth) {
      logger.info('✓ Claude Code CLI authentication detected');
      return;
@@ -185,7 +145,7 @@ const BOX_CONTENT_WIDTH = 67;
    logger.warn('Error checking for Claude Code CLI authentication:', error);
  }
-  // No authentication found - show warning with paths that were checked
+  // No authentication found - show warning
  const wHeader = '⚠️ WARNING: No Claude authentication configured'.padEnd(BOX_CONTENT_WIDTH);
  const w1 = 'The Claude Agent SDK requires authentication to function.'.padEnd(BOX_CONTENT_WIDTH);
  const w2 = 'Options:'.padEnd(BOX_CONTENT_WIDTH);
@@ -198,33 +158,6 @@
    BOX_CONTENT_WIDTH
  );
-  // Build paths checked summary from the indicators (if available)
-  let pathsCheckedInfo = '';
-  if (cliAuthIndicators) {
-    const pathsChecked: string[] = [];
-    // Collect paths that were checked (paths are always populated strings)
-    pathsChecked.push(`Settings: ${cliAuthIndicators.checks.settingsFile.path}`);
-    pathsChecked.push(`Stats cache: ${cliAuthIndicators.checks.statsCache.path}`);
-    pathsChecked.push(`Projects dir: ${cliAuthIndicators.checks.projectsDir.path}`);
-    for (const credFile of cliAuthIndicators.checks.credentialFiles) {
-      pathsChecked.push(`Credentials: ${credFile.path}`);
-    }
-    if (pathsChecked.length > 0) {
-      pathsCheckedInfo = `
-║ ║
-${'Paths checked:'.padEnd(BOX_CONTENT_WIDTH)}
-${pathsChecked
-  .map((p) => {
-    const maxLen = BOX_CONTENT_WIDTH - 4;
-    const display = p.length > maxLen ? '...' + p.slice(-(maxLen - 3)) : p;
-    return `${display.padEnd(maxLen)}`;
-  })
-  .join('\n')}`;
-    }
-  }
  logger.warn(`
╔═════════════════════════════════════════════════════════════════════╗
${wHeader}
@@ -236,7 +169,7 @@ ${pathsChecked
${w3}
${w4}
${w5}
-${w6}${pathsCheckedInfo}
+${w6}
║ ║
╚═════════════════════════════════════════════════════════════════════╝
`);
@@ -261,35 +194,12 @@ morgan.token('status-colored', (_req, res) => {
app.use(
  morgan(':method :url :status-colored', {
    // Skip when request logging is disabled or for health check endpoints
-    skip: (req) =>
-      !requestLoggingEnabled ||
-      req.url === '/api/health' ||
-      req.url === '/api/auto-mode/context-exists',
+    skip: (req) => !requestLoggingEnabled || req.url === '/api/health',
  })
);
// CORS configuration
// When using credentials (cookies), origin cannot be '*'
// We dynamically allow the requesting origin for local development
-// Check if origin is a local/private network address
-function isLocalOrigin(origin: string): boolean {
-  try {
-    const url = new URL(origin);
-    const hostname = url.hostname;
-    return (
-      hostname === 'localhost' ||
-      hostname === '127.0.0.1' ||
-      hostname === '[::1]' ||
-      hostname === '0.0.0.0' ||
-      hostname.startsWith('192.168.') ||
-      hostname.startsWith('10.') ||
-      /^172\.(1[6-9]|2[0-9]|3[0-1])\./.test(hostname)
-    );
-  } catch {
-    return false;
-  }
-}
app.use(
  cors({
    origin: (origin, callback) => {
@@ -300,25 +210,35 @@ app.use(
      }
      // If CORS_ORIGIN is set, use it (can be comma-separated list)
-      const allowedOrigins = process.env.CORS_ORIGIN?.split(',')
-        .map((o) => o.trim())
-        .filter(Boolean);
-      if (allowedOrigins && allowedOrigins.length > 0) {
-        if (allowedOrigins.includes('*')) {
-          callback(null, true);
-          return;
-        }
+      const allowedOrigins = process.env.CORS_ORIGIN?.split(',').map((o) => o.trim());
+      if (allowedOrigins && allowedOrigins.length > 0 && allowedOrigins[0] !== '*') {
        if (allowedOrigins.includes(origin)) {
          callback(null, origin);
+        } else {
+          callback(new Error('Not allowed by CORS'));
+        }
        return;
      }
-      // Fall through to local network check below
-      }
-      // Allow all localhost/loopback/private network origins (any port)
-      if (isLocalOrigin(origin)) {
-        callback(null, origin);
-        return;
-      }
+      // For local development, allow all localhost/loopback origins (any port)
+      try {
+        const url = new URL(origin);
+        const hostname = url.hostname;
+        if (
+          hostname === 'localhost' ||
+          hostname === '127.0.0.1' ||
+          hostname === '::1' ||
+          hostname === '0.0.0.0' ||
+          hostname.startsWith('192.168.') ||
+          hostname.startsWith('10.') ||
+          hostname.startsWith('172.')
+        ) {
+          callback(null, origin);
+          return;
+        }
+      } catch (err) {
+        // Ignore URL parsing errors
+      }
      // Reject other origins by default for security
@@ -345,16 +265,12 @@ const claudeUsageService = new ClaudeUsageService();
const codexAppServerService = new CodexAppServerService();
const codexModelCacheService = new CodexModelCacheService(DATA_DIR, codexAppServerService);
const codexUsageService = new CodexUsageService(codexAppServerService);
-const zaiUsageService = new ZaiUsageService();
-const geminiUsageService = new GeminiUsageService();
const mcpTestService = new MCPTestService(settingsService);
const ideationService = new IdeationService(events, settingsService, featureLoader);
// Initialize DevServerService with event emitter for real-time log streaming
const devServerService = getDevServerService();
-devServerService.initialize(DATA_DIR, events).catch((err) => {
-  logger.error('Failed to initialize DevServerService:', err);
-});
+devServerService.setEventEmitter(events);
// Initialize Notification Service with event emitter for real-time updates
const notificationService = getNotificationService();
@@ -389,74 +305,24 @@ eventHookService.initialize(events, settingsService, eventHistoryService, featur
  logger.warn('Failed to check for legacy settings migration:', err);
}
-// Fetch global settings once and reuse for logging config and feature reconciliation
-let globalSettings: Awaited<ReturnType<typeof settingsService.getGlobalSettings>> | null = null;
-try {
-  globalSettings = await settingsService.getGlobalSettings();
-} catch {
-  logger.warn('Failed to load global settings, using defaults');
-}
// Apply logging settings from saved settings
-if (globalSettings) {
-  try {
-    if (
-      globalSettings.serverLogLevel &&
-      LOG_LEVEL_MAP[globalSettings.serverLogLevel] !== undefined
-    ) {
-      setLogLevel(LOG_LEVEL_MAP[globalSettings.serverLogLevel]);
-      logger.info(`Server log level set to: ${globalSettings.serverLogLevel}`);
-    }
-    // Apply request logging setting (default true if not set)
-    const enableRequestLog = globalSettings.enableRequestLogging ?? true;
-    setRequestLoggingEnabled(enableRequestLog);
-    logger.info(`HTTP request logging: ${enableRequestLog ? 'enabled' : 'disabled'}`);
-  } catch {
-    logger.warn('Failed to apply logging settings, using defaults');
-  }
-}
+try {
+  const settings = await settingsService.getGlobalSettings();
+  if (settings.serverLogLevel && LOG_LEVEL_MAP[settings.serverLogLevel] !== undefined) {
+    setLogLevel(LOG_LEVEL_MAP[settings.serverLogLevel]);
+    logger.info(`Server log level set to: ${settings.serverLogLevel}`);
+  }
+  // Apply request logging setting (default true if not set)
+  const enableRequestLog = settings.enableRequestLogging ?? true;
+  setRequestLoggingEnabled(enableRequestLog);
+  logger.info(`HTTP request logging: ${enableRequestLog ? 'enabled' : 'disabled'}`);
+} catch (err) {
+  logger.warn('Failed to load logging settings, using defaults');
+}
await agentService.initialize();
logger.info('Agent service initialized');
-// Reconcile feature states on startup
-// After any type of restart (clean, forced, crash), features may be stuck in
-// transient states (in_progress, interrupted, pipeline_*) that don't match reality.
-// Reconcile them back to resting states before the UI is served.
-if (globalSettings) {
-  try {
-    if (globalSettings.projects && globalSettings.projects.length > 0) {
-      let totalReconciled = 0;
-      for (const project of globalSettings.projects) {
-        const count = await autoModeService.reconcileFeatureStates(project.path);
-        totalReconciled += count;
-      }
-      if (totalReconciled > 0) {
-        logger.info(
-          `[STARTUP] Reconciled ${totalReconciled} feature(s) across ${globalSettings.projects.length} project(s)`
-        );
-      } else {
-        logger.info('[STARTUP] Feature state reconciliation complete - no stale states found');
-      }
-      // Resume interrupted features in the background for all projects.
-      // This handles features stuck in transient states (in_progress, pipeline_*)
-      // or explicitly marked as interrupted. Running in background so it doesn't block startup.
-      for (const project of globalSettings.projects) {
-        autoModeService.resumeInterruptedFeatures(project.path).catch((err) => {
-          logger.warn(
-            `[STARTUP] Failed to resume interrupted features for ${project.path}:`,
-            err
-          );
-        });
-      }
-      logger.info('[STARTUP] Initiated background resume of interrupted features');
-    }
-  } catch (err) {
-    logger.warn('[STARTUP] Failed to reconcile feature states:', err);
-  }
-}
// Bootstrap Codex model cache in background (don't block server startup)
void codexModelCacheService.getModels().catch((err) => {
  logger.error('Failed to bootstrap Codex model cache:', err);
@@ -496,7 +362,7 @@ app.use(
);
app.use('/api/auto-mode', createAutoModeRoutes(autoModeService));
app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService));
-app.use('/api/worktree', createWorktreeRoutes(events, settingsService, featureLoader));
+app.use('/api/worktree', createWorktreeRoutes(events, settingsService));
app.use('/api/git', createGitRoutes());
app.use('/api/models', createModelsRoutes());
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events, settingsService));
@@ -507,8 +373,6 @@ app.use('/api/terminal', createTerminalRoutes());
app.use('/api/settings', createSettingsRoutes(settingsService));
app.use('/api/claude', createClaudeRoutes(claudeUsageService));
app.use('/api/codex', createCodexRoutes(codexUsageService, codexModelCacheService));
-app.use('/api/zai', createZaiRoutes(zaiUsageService, settingsService));
-app.use('/api/gemini', createGeminiRoutes(geminiUsageService, events));
app.use('/api/github', createGitHubRoutes(events, settingsService));
app.use('/api/context', createContextRoutes(settingsService));
app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService));
@@ -528,7 +392,7 @@ const server = createServer(app);
// WebSocket servers using noServer mode for proper multi-path support
const wss = new WebSocketServer({ noServer: true });
const terminalWss = new WebSocketServer({ noServer: true });
-const terminalService = getTerminalService(settingsService);
+const terminalService = getTerminalService();

/**
 * Authenticate WebSocket upgrade requests
@@ -598,23 +462,24 @@ wss.on('connection', (ws: WebSocket) => {
  // Subscribe to all events and forward to this client
  const unsubscribe = events.subscribe((type, payload) => {
-    // Use debug level for high-frequency events to avoid log spam
-    // that causes progressive memory growth and server slowdown
-    const isHighFrequency =
-      type === 'dev-server:output' || type === 'test-runner:output' || type === 'feature:progress';
-    const log = isHighFrequency ? logger.debug.bind(logger) : logger.info.bind(logger);
-    log('Event received:', {
+    logger.info('Event received:', {
      type,
      hasPayload: !!payload,
+      payloadKeys: payload ? Object.keys(payload) : [],
      wsReadyState: ws.readyState,
+      wsOpen: ws.readyState === WebSocket.OPEN,
    });
    if (ws.readyState === WebSocket.OPEN) {
      const message = JSON.stringify({ type, payload });
+      logger.info('Sending event to client:', {
+        type,
+        messageLength: message.length,
+        sessionId: (payload as any)?.sessionId,
+      });
      ws.send(message);
    } else {
-      logger.warn('Cannot send event, WebSocket not open. ReadyState:', ws.readyState);
+      logger.info('WARNING: Cannot send event, WebSocket not open. ReadyState:', ws.readyState);
    }
  });
@@ -676,15 +541,8 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
  // Check if session exists
  const session = terminalService.getSession(sessionId);
  if (!session) {
-    logger.warn(
-      `Terminal session ${sessionId} not found. ` +
-        `The session may have exited, been deleted, or was never created. ` +
-        `Active terminal sessions: ${terminalService.getSessionCount()}`
-    );
-    ws.close(
-      4004,
-      'Session not found. The terminal session may have expired or been closed. Please create a new terminal.'
-    );
+    logger.info(`Session ${sessionId} not found`);
+    ws.close(4004, 'Session not found');
    return;
  }

View File

@@ -8,6 +8,9 @@ import { spawn, execSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
+import { createLogger } from '@automaker/utils';
+
+const logger = createLogger('CliDetection');

export interface CliInfo {
  name: string;
@@ -83,7 +86,7 @@ export async function detectCli(
  options: CliDetectionOptions = {}
): Promise<CliDetectionResult> {
  const config = CLI_CONFIGS[provider];
-  const { timeout = 5000 } = options;
+  const { timeout = 5000, includeWsl = false, wslDistribution } = options;
  const issues: string[] = [];
  const cliInfo: CliInfo = {

View File

@@ -40,7 +40,7 @@ export interface ErrorClassification {
  suggestedAction?: string;
  retryable: boolean;
  provider?: string;
-  context?: Record<string, unknown>;
+  context?: Record<string, any>;
}

export interface ErrorPattern {
@@ -180,7 +180,7 @@ const ERROR_PATTERNS: ErrorPattern[] = [
export function classifyError(
  error: unknown,
  provider?: string,
-  context?: Record<string, unknown>
+  context?: Record<string, any>
): ErrorClassification {
  const errorText = getErrorText(error);
@@ -281,19 +281,18 @@ function getErrorText(error: unknown): string {
  if (typeof error === 'object' && error !== null) {
    // Handle structured error objects
-    const errorObj = error as Record<string, unknown>;
-    if (typeof errorObj.message === 'string') {
+    const errorObj = error as any;
+    if (errorObj.message) {
      return errorObj.message;
    }
-    const nestedError = errorObj.error;
-    if (typeof nestedError === 'object' && nestedError !== null && 'message' in nestedError) {
-      return String((nestedError as Record<string, unknown>).message);
+    if (errorObj.error?.message) {
+      return errorObj.error.message;
    }
-    if (nestedError) {
-      return typeof nestedError === 'string' ? nestedError : JSON.stringify(nestedError);
+    if (errorObj.error) {
+      return typeof errorObj.error === 'string' ? errorObj.error : JSON.stringify(errorObj.error);
    }
    return JSON.stringify(error);
@@ -308,7 +307,7 @@
export function createErrorResponse(
  error: unknown,
  provider?: string,
-  context?: Record<string, unknown>
+  context?: Record<string, any>
): {
  success: false;
  error: string;
@@ -336,7 +335,7 @@ export function logError(
  error: unknown,
  provider?: string,
  operation?: string,
-  additionalContext?: Record<string, unknown>
+  additionalContext?: Record<string, any>
): void {
  const classification = classifyError(error, provider, {
    operation,

View File

@@ -1,37 +0,0 @@
/**
* Shared execution utilities
*
* Common helpers for spawning child processes with the correct environment.
* Used by both route handlers and service layers.
*/
import { createLogger } from '@automaker/utils';
const logger = createLogger('ExecUtils');
// Extended PATH to include common tool installation locations
export const extendedPath = [
process.env.PATH,
'/opt/homebrew/bin',
'/usr/local/bin',
'/home/linuxbrew/.linuxbrew/bin',
`${process.env.HOME}/.local/bin`,
]
.filter(Boolean)
.join(':');
export const execEnv = {
...process.env,
PATH: extendedPath,
};
export function getErrorMessage(error: unknown): string {
if (error instanceof Error) {
return error.message;
}
return String(error);
}
export function logError(error: unknown, context: string): void {
logger.error(`${context}:`, error);
}

View File

@@ -1,62 +0,0 @@
export interface CommitFields {
hash: string;
shortHash: string;
author: string;
authorEmail: string;
date: string;
subject: string;
body: string;
}
export function parseGitLogOutput(output: string): CommitFields[] {
const commits: CommitFields[] = [];
// Split by NUL character to separate commits
const commitBlocks = output.split('\0').filter((block) => block.trim());
for (const block of commitBlocks) {
const allLines = block.split('\n');
// Skip leading empty lines that may appear at block boundaries
let startIndex = 0;
while (startIndex < allLines.length && allLines[startIndex].trim() === '') {
startIndex++;
}
const fields = allLines.slice(startIndex);
// Validate we have all expected fields (at least hash, shortHash, author, authorEmail, date, subject)
if (fields.length < 6) {
continue; // Skip malformed blocks
}
const commit: CommitFields = {
hash: fields[0].trim(),
shortHash: fields[1].trim(),
author: fields[2].trim(),
authorEmail: fields[3].trim(),
date: fields[4].trim(),
subject: fields[5].trim(),
body: fields.slice(6).join('\n').trim(),
};
commits.push(commit);
}
return commits;
}
/**
* Creates a commit object from parsed fields, matching the expected API response format
*/
export function createCommitFromFields(fields: CommitFields, files?: string[]) {
return {
hash: fields.hash,
shortHash: fields.shortHash,
author: fields.author,
authorEmail: fields.authorEmail,
date: fields.date,
subject: fields.subject,
body: fields.body,
files: files || [],
};
}
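A `git log` invocation matching the layout `parseGitLogOutput` assumes (seven newline-separated fields per commit, each record terminated by a NUL byte) would look roughly like the sketch below. The format string is reconstructed from the parser's field order, not taken from this repository's actual callers, so treat it as an assumption:

```shell
# Sketch: emit each commit as 7 newline-separated fields (%H hash, %h short
# hash, %an author, %ae email, %aI date, %s subject, %b body), terminated by
# a NUL byte (%x00) so the output can be split on '\0'.
# Demonstrated against a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=Alice -c user.email=alice@example.com \
  commit -q --allow-empty -m "feat: demo subject" -m "demo body"
# Strip the trailing NUL so the shell can hold the output in a variable
fields=$(git log --format='%H%n%h%n%an%n%ae%n%aI%n%s%n%b%x00' | tr -d '\0')
author=$(printf '%s\n' "$fields" | sed -n '3p')
subject=$(printf '%s\n' "$fields" | sed -n '6p')
echo "author=$author subject=$subject"
```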

View File

@@ -1,236 +0,0 @@
/**
* Shared git command execution utilities.
*
* This module provides the canonical `execGitCommand` helper and common
* git utilities used across services and routes. All consumers should
* import from here rather than defining their own copy.
*/
import fs from 'fs/promises';
import path from 'path';
import { spawnProcess } from '@automaker/platform';
import { createLogger } from '@automaker/utils';
const logger = createLogger('GitLib');
// Extended PATH so git is found when the process does not inherit a full shell PATH
// (e.g. Electron, some CI, or IDE-launched processes).
const pathSeparator = process.platform === 'win32' ? ';' : ':';
const extraPaths: string[] =
process.platform === 'win32'
? ([
process.env.LOCALAPPDATA && `${process.env.LOCALAPPDATA}\\Programs\\Git\\cmd`,
process.env.PROGRAMFILES && `${process.env.PROGRAMFILES}\\Git\\cmd`,
process.env['ProgramFiles(x86)'] && `${process.env['ProgramFiles(x86)']}\\Git\\cmd`,
].filter(Boolean) as string[])
: [
'/opt/homebrew/bin',
'/usr/local/bin',
'/usr/bin',
'/home/linuxbrew/.linuxbrew/bin',
process.env.HOME ? `${process.env.HOME}/.local/bin` : '',
].filter(Boolean);
const extendedPath = [process.env.PATH, ...extraPaths].filter(Boolean).join(pathSeparator);
const gitEnv = { ...process.env, PATH: extendedPath };
// ============================================================================
// Secure Command Execution
// ============================================================================
/**
* Execute git command with array arguments to prevent command injection.
* Uses spawnProcess from @automaker/platform for secure, cross-platform execution.
*
* @param args - Array of git command arguments (e.g., ['worktree', 'add', path])
* @param cwd - Working directory to execute the command in
* @param env - Optional additional environment variables to pass to the git process.
* These are merged on top of the current process environment. Pass
* `{ LC_ALL: 'C' }` to force git to emit English output regardless of the
* system locale so that text-based output parsing remains reliable.
* @param abortController - Optional AbortController to cancel the git process.
* When the controller is aborted the underlying process is sent SIGTERM and
* the returned promise rejects with an Error whose message is 'Process aborted'.
* @returns Promise resolving to stdout output
* @throws Error with stderr/stdout message if command fails. The thrown error
* also has `stdout` and `stderr` string properties for structured access.
*
* @example
* ```typescript
* // Safe: no injection possible
* await execGitCommand(['branch', '-D', branchName], projectPath);
*
* // Force English output for reliable text parsing:
* await execGitCommand(['rebase', '--', 'main'], worktreePath, { LC_ALL: 'C' });
*
* // With a process-level timeout:
* const controller = new AbortController();
* const timerId = setTimeout(() => controller.abort(), 30_000);
* try {
* await execGitCommand(['fetch', '--all', '--quiet'], cwd, undefined, controller);
* } finally {
* clearTimeout(timerId);
* }
*
* // Instead of unsafe:
* // await execAsync(`git branch -D ${branchName}`, { cwd });
* ```
*/
export async function execGitCommand(
args: string[],
cwd: string,
env?: Record<string, string>,
abortController?: AbortController
): Promise<string> {
const result = await spawnProcess({
command: 'git',
args,
cwd,
env:
env !== undefined
? {
...gitEnv,
...env,
PATH: [gitEnv.PATH, env.PATH].filter(Boolean).join(pathSeparator),
}
: gitEnv,
...(abortController !== undefined ? { abortController } : {}),
});
// spawnProcess returns { stdout, stderr, exitCode }
if (result.exitCode === 0) {
return result.stdout;
} else {
const errorMessage =
result.stderr || result.stdout || `Git command failed with code ${result.exitCode}`;
throw Object.assign(new Error(errorMessage), {
stdout: result.stdout,
stderr: result.stderr,
});
}
}
// ============================================================================
// Common Git Utilities
// ============================================================================
/**
* Get the current branch name for the given worktree.
*
* This is the canonical implementation shared across services. Services
* should import this rather than duplicating the logic locally.
*
* @param worktreePath - Path to the git worktree
* @returns The current branch name (trimmed)
*/
export async function getCurrentBranch(worktreePath: string): Promise<string> {
const branchOutput = await execGitCommand(['rev-parse', '--abbrev-ref', 'HEAD'], worktreePath);
return branchOutput.trim();
}
// ============================================================================
// Index Lock Recovery
// ============================================================================
/**
* Check whether an error message indicates a stale git index lock file.
*
* Git operations that write to the index (e.g. `git stash push`) will fail
* with "could not write index" or "Unable to create ... .lock" when a
* `.git/index.lock` file exists from a previously interrupted operation.
*
* @param errorMessage - The error string from a failed git command
* @returns true if the error looks like a stale index lock issue
*/
export function isIndexLockError(errorMessage: string): boolean {
const lower = errorMessage.toLowerCase();
return (
lower.includes('could not write index') ||
(lower.includes('unable to create') && lower.includes('index.lock')) ||
lower.includes('index.lock')
);
}
/**
* Attempt to remove a stale `.git/index.lock` file for the given worktree.
*
* Uses `git rev-parse --git-dir` to locate the correct `.git` directory,
* which works for both regular repositories and linked worktrees.
*
* @param worktreePath - Path to the git worktree (or main repo)
* @returns true if a lock file was found and removed, false otherwise
*/
export async function removeStaleIndexLock(worktreePath: string): Promise<boolean> {
try {
// Resolve the .git directory (handles worktrees correctly)
const gitDirRaw = await execGitCommand(['rev-parse', '--git-dir'], worktreePath);
const gitDir = path.resolve(worktreePath, gitDirRaw.trim());
const lockFilePath = path.join(gitDir, 'index.lock');
// Check if the lock file exists
try {
await fs.access(lockFilePath);
} catch {
// Lock file does not exist — nothing to remove
return false;
}
// Remove the stale lock file
await fs.unlink(lockFilePath);
logger.info('Removed stale index.lock file', { worktreePath, lockFilePath });
return true;
} catch (err) {
logger.warn('Failed to remove stale index.lock file', {
worktreePath,
error: err instanceof Error ? err.message : String(err),
});
return false;
}
}
/**
* Execute a git command with automatic retry when a stale index.lock is detected.
*
* If the command fails with an error indicating a locked index file, this
* helper will attempt to remove the stale `.git/index.lock` and retry the
* command exactly once.
*
* This is particularly useful for `git stash push` which writes to the
* index and commonly fails when a previous git operation was interrupted.
*
* @param args - Array of git command arguments
* @param cwd - Working directory to execute the command in
* @param env - Optional additional environment variables
* @returns Promise resolving to stdout output
* @throws The original error if retry also fails, or a non-lock error
*/
export async function execGitCommandWithLockRetry(
args: string[],
cwd: string,
env?: Record<string, string>
): Promise<string> {
try {
return await execGitCommand(args, cwd, env);
} catch (error: unknown) {
const err = error as { message?: string; stderr?: string };
const errorMessage = err.stderr || err.message || '';
if (!isIndexLockError(errorMessage)) {
throw error;
}
logger.info('Git command failed due to index lock, attempting cleanup and retry', {
cwd,
args: args.join(' '),
});
const removed = await removeStaleIndexLock(cwd);
if (!removed) {
// Could not remove the lock file — re-throw the original error
throw error;
}
// Retry the command once after removing the lock file
return await execGitCommand(args, cwd, env);
}
}
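The retry flow above (detect a lock error, remove the stale lock, retry exactly once) can be sketched generically, independent of git and `spawnProcess`. All names in this sketch are illustrative stand-ins for the real helpers, which shell out to git and delete `.git/index.lock`:

```typescript
// Generic "retry once after lock cleanup" sketch of the pattern implemented
// by execGitCommandWithLockRetry. Names here are hypothetical.
function looksLikeIndexLockError(message: string): boolean {
  const lower = message.toLowerCase();
  return lower.includes('index.lock') || lower.includes('could not write index');
}

async function withLockRetry<T>(
  run: () => Promise<T>,
  removeStaleLock: () => Promise<boolean>
): Promise<T> {
  try {
    return await run();
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    // Non-lock failures propagate unchanged
    if (!looksLikeIndexLockError(message)) throw error;
    // If the lock could not be removed, re-throw the original error
    if (!(await removeStaleLock())) throw error;
    // Retry exactly once after cleanup
    return await run();
  }
}

// Simulated usage: the first attempt fails with a stale-lock error,
// cleanup succeeds, and the single retry returns normally.
let attempts = 0;
const result = await withLockRetry(
  async () => {
    attempts += 1;
    if (attempts === 1) {
      throw new Error("fatal: Unable to create '.git/index.lock': File exists.");
    }
    return 'stashed';
  },
  async () => true
);
```

The single-retry design is deliberate: if the lock reappears immediately, another process is actively using the index, and looping would only mask a real conflict.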


@@ -12,18 +12,11 @@ export interface PermissionCheckResult {
   reason?: string;
 }
-/** Minimal shape of a Cursor tool call used for permission checking */
-interface CursorToolCall {
-  shellToolCall?: { args?: { command: string } };
-  readToolCall?: { args?: { path: string } };
-  writeToolCall?: { args?: { path: string } };
-}
 /**
  * Check if a tool call is allowed based on permissions
  */
 export function checkToolCallPermission(
-  toolCall: CursorToolCall,
+  toolCall: any,
   permissions: CursorCliConfigFile | null
 ): PermissionCheckResult {
   if (!permissions || !permissions.permissions) {
@@ -159,11 +152,7 @@ function matchesRule(toolName: string, rule: string): boolean {
 /**
  * Log permission violations
  */
-export function logPermissionViolation(
-  toolCall: CursorToolCall,
-  reason: string,
-  sessionId?: string
-): void {
+export function logPermissionViolation(toolCall: any, reason: string, sessionId?: string): void {
   const sessionIdStr = sessionId ? ` [${sessionId}]` : '';
   if (toolCall.shellToolCall?.args?.command) {


@@ -133,16 +133,12 @@ export const TOOL_PRESETS = {
     'Read',
     'Write',
     'Edit',
-    'MultiEdit',
     'Glob',
     'Grep',
-    'LS',
     'Bash',
     'WebSearch',
     'WebFetch',
     'TodoWrite',
-    'Task',
-    'Skill',
   ] as const,
   /** Tools for chat/interactive mode */
@@ -150,16 +146,12 @@
     'Read',
     'Write',
     'Edit',
-    'MultiEdit',
     'Glob',
     'Grep',
-    'LS',
     'Bash',
     'WebSearch',
     'WebFetch',
     'TodoWrite',
-    'Task',
-    'Skill',
   ] as const,
 } as const;
@@ -261,27 +253,11 @@ function buildMcpOptions(config: CreateSdkOptionsConfig): McpOptions {
 /**
  * Build thinking options for SDK configuration.
  * Converts ThinkingLevel to maxThinkingTokens for the Claude SDK.
- * For adaptive thinking (Opus 4.6), omits maxThinkingTokens to let the model
- * decide its own reasoning depth.
  *
  * @param thinkingLevel - The thinking level to convert
- * @returns Object with maxThinkingTokens if thinking is enabled with a budget
+ * @returns Object with maxThinkingTokens if thinking is enabled
  */
 function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
-  if (!thinkingLevel || thinkingLevel === 'none') {
-    return {};
-  }
-  // Adaptive thinking (Opus 4.6): don't set maxThinkingTokens
-  // The model will use adaptive thinking by default
-  if (thinkingLevel === 'adaptive') {
-    logger.debug(
-      `buildThinkingOptions: thinkingLevel="adaptive" -> no maxThinkingTokens (model decides)`
-    );
-    return {};
-  }
-  // Manual budget-based thinking for Haiku/Sonnet
   const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
   logger.debug(
     `buildThinkingOptions: thinkingLevel="${thinkingLevel}" -> maxThinkingTokens=${maxThinkingTokens}`
@@ -290,15 +266,11 @@ function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
 }
 /**
- * Build system prompt and settingSources based on two independent settings:
- * - useClaudeCodeSystemPrompt: controls whether to use the 'claude_code' preset as the base prompt
- * - autoLoadClaudeMd: controls whether to add settingSources for SDK to load CLAUDE.md files
- *
- * These combine independently (4 possible states):
- * 1. Both ON: preset + settingSources (full Claude Code experience)
- * 2. useClaudeCodeSystemPrompt ON, autoLoadClaudeMd OFF: preset only (no CLAUDE.md auto-loading)
- * 3. useClaudeCodeSystemPrompt OFF, autoLoadClaudeMd ON: plain string + settingSources
- * 4. Both OFF: plain string only
+ * Build system prompt configuration based on autoLoadClaudeMd setting.
+ * When autoLoadClaudeMd is true:
+ * - Uses preset mode with 'claude_code' to enable CLAUDE.md auto-loading
+ * - If there's a custom systemPrompt, appends it to the preset
+ * - Sets settingSources to ['project'] for SDK to load CLAUDE.md files
  *
  * @param config - The SDK options config
  * @returns Object with systemPrompt and settingSources for SDK options
@@ -307,34 +279,27 @@ function buildClaudeMdOptions(config: CreateSdkOptionsConfig): {
   systemPrompt?: string | SystemPromptConfig;
   settingSources?: Array<'user' | 'project' | 'local'>;
 } {
-  const result: {
-    systemPrompt?: string | SystemPromptConfig;
-    settingSources?: Array<'user' | 'project' | 'local'>;
-  } = {};
-  // Determine system prompt format based on useClaudeCodeSystemPrompt
-  if (config.useClaudeCodeSystemPrompt) {
-    // Use Claude Code's built-in system prompt as the base
-    const presetConfig: SystemPromptConfig = {
-      type: 'preset',
-      preset: 'claude_code',
-    };
-    // If there's a custom system prompt, append it to the preset
-    if (config.systemPrompt) {
-      presetConfig.append = config.systemPrompt;
-    }
-    result.systemPrompt = presetConfig;
-  } else {
+  if (!config.autoLoadClaudeMd) {
     // Standard mode - just pass through the system prompt as-is
-    if (config.systemPrompt) {
-      result.systemPrompt = config.systemPrompt;
-    }
+    return config.systemPrompt ? { systemPrompt: config.systemPrompt } : {};
   }
-  // Determine settingSources based on autoLoadClaudeMd
-  if (config.autoLoadClaudeMd) {
-    // Load both user (~/.claude/CLAUDE.md) and project (.claude/CLAUDE.md) settings
-    result.settingSources = ['user', 'project'];
+  // Auto-load CLAUDE.md mode - use preset with settingSources
+  const result: {
+    systemPrompt: SystemPromptConfig;
+    settingSources: Array<'user' | 'project' | 'local'>;
+  } = {
+    systemPrompt: {
+      type: 'preset',
+      preset: 'claude_code',
+    },
+    // Load both user (~/.claude/CLAUDE.md) and project (.claude/CLAUDE.md) settings
+    settingSources: ['user', 'project'],
+  };
+  // If there's a custom system prompt, append it to the preset
+  if (config.systemPrompt) {
+    result.systemPrompt.append = config.systemPrompt;
   }
   return result;
@@ -342,14 +307,12 @@ function buildClaudeMdOptions(config: CreateSdkOptionsConfig): {
 /**
  * System prompt configuration for SDK options
- * The 'claude_code' preset provides the system prompt only — it does NOT auto-load
- * CLAUDE.md files. CLAUDE.md auto-loading is controlled independently by
- * settingSources (set via autoLoadClaudeMd). These two settings are orthogonal.
+ * When using preset mode with claude_code, CLAUDE.md files are automatically loaded
  */
 export interface SystemPromptConfig {
-  /** Use preset mode to select the base system prompt */
+  /** Use preset mode with claude_code to enable CLAUDE.md auto-loading */
   type: 'preset';
-  /** The preset to use - 'claude_code' uses the Claude Code system prompt */
+  /** The preset to use - 'claude_code' enables CLAUDE.md loading */
   preset: 'claude_code';
   /** Optional additional prompt to append to the preset */
   append?: string;
@@ -383,19 +346,11 @@ export interface CreateSdkOptionsConfig {
   /** Enable auto-loading of CLAUDE.md files via SDK's settingSources */
   autoLoadClaudeMd?: boolean;
-  /** Use Claude Code's built-in system prompt (claude_code preset) as the base prompt */
-  useClaudeCodeSystemPrompt?: boolean;
   /** MCP servers to make available to the agent */
   mcpServers?: Record<string, McpServerConfig>;
   /** Extended thinking level for Claude models */
   thinkingLevel?: ThinkingLevel;
-  /** Optional user-configured max turns override (from settings).
-   * When provided, overrides the preset MAX_TURNS for the use case.
-   * Range: 1-2000. */
-  maxTurns?: number;
 }
 // Re-export MCP types from @automaker/types for convenience
@@ -432,7 +387,7 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
     // See: https://github.com/AutoMaker-Org/automaker/issues/149
     permissionMode: 'default',
     model: getModelForUseCase('spec', config.model),
-    maxTurns: config.maxTurns ?? MAX_TURNS.maximum,
+    maxTurns: MAX_TURNS.maximum,
     cwd: config.cwd,
     allowedTools: [...TOOL_PRESETS.specGeneration],
     ...claudeMdOptions,
@@ -466,7 +421,7 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
     // Override permissionMode - feature generation only needs read-only tools
     permissionMode: 'default',
     model: getModelForUseCase('features', config.model),
-    maxTurns: config.maxTurns ?? MAX_TURNS.quick,
+    maxTurns: MAX_TURNS.quick,
     cwd: config.cwd,
     allowedTools: [...TOOL_PRESETS.readOnly],
     ...claudeMdOptions,
@@ -497,7 +452,7 @@ export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Option
   return {
     ...getBaseOptions(),
     model: getModelForUseCase('suggestions', config.model),
-    maxTurns: config.maxTurns ?? MAX_TURNS.extended,
+    maxTurns: MAX_TURNS.extended,
     cwd: config.cwd,
     allowedTools: [...TOOL_PRESETS.readOnly],
     ...claudeMdOptions,
@@ -535,7 +490,7 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
   return {
     ...getBaseOptions(),
     model: getModelForUseCase('chat', effectiveModel),
-    maxTurns: config.maxTurns ?? MAX_TURNS.standard,
+    maxTurns: MAX_TURNS.standard,
     cwd: config.cwd,
     allowedTools: [...TOOL_PRESETS.chat],
     ...claudeMdOptions,
@@ -570,7 +525,7 @@ export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
   return {
     ...getBaseOptions(),
     model: getModelForUseCase('auto', config.model),
-    maxTurns: config.maxTurns ?? MAX_TURNS.maximum,
+    maxTurns: MAX_TURNS.maximum,
     cwd: config.cwd,
     allowedTools: [...TOOL_PRESETS.fullAccess],
     ...claudeMdOptions,


@@ -33,16 +33,9 @@ import {
 const logger = createLogger('SettingsHelper');
-/** Default number of agent turns used when no value is configured. */
-export const DEFAULT_MAX_TURNS = 10000;
-/** Upper bound for the max-turns clamp; values above this are capped here. */
-export const MAX_ALLOWED_TURNS = 10000;
 /**
  * Get the autoLoadClaudeMd setting, with project settings taking precedence over global.
- * Falls back to global settings and defaults to true when unset.
- * Returns true if settings service is not available.
+ * Returns false if settings service is not available.
  *
  * @param projectPath - Path to the project
  * @param settingsService - Optional settings service instance
@@ -55,8 +48,8 @@ export async function getAutoLoadClaudeMdSetting(
   logPrefix = '[SettingsHelper]'
 ): Promise<boolean> {
   if (!settingsService) {
-    logger.info(`${logPrefix} SettingsService not available, autoLoadClaudeMd defaulting to true`);
-    return true;
+    logger.info(`${logPrefix} SettingsService not available, autoLoadClaudeMd disabled`);
+    return false;
   }
   try {
@@ -71,7 +64,7 @@ export async function getAutoLoadClaudeMdSetting(
     // Fall back to global settings
     const globalSettings = await settingsService.getGlobalSettings();
-    const result = globalSettings.autoLoadClaudeMd ?? true;
+    const result = globalSettings.autoLoadClaudeMd ?? false;
     logger.info(`${logPrefix} autoLoadClaudeMd from global settings: ${result}`);
     return result;
   } catch (error) {
@@ -80,84 +73,6 @@ export async function getAutoLoadClaudeMdSetting(
   }
 }
-/**
- * Get the useClaudeCodeSystemPrompt setting, with project settings taking precedence over global.
- * Falls back to global settings and defaults to true when unset.
- * Returns true if settings service is not available.
- *
- * @param projectPath - Path to the project
- * @param settingsService - Optional settings service instance
- * @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
- * @returns Promise resolving to the useClaudeCodeSystemPrompt setting value
- */
-export async function getUseClaudeCodeSystemPromptSetting(
-  projectPath: string,
-  settingsService?: SettingsService | null,
-  logPrefix = '[SettingsHelper]'
-): Promise<boolean> {
-  if (!settingsService) {
-    logger.info(
-      `${logPrefix} SettingsService not available, useClaudeCodeSystemPrompt defaulting to true`
-    );
-    return true;
-  }
-  try {
-    // Check project settings first (takes precedence)
-    const projectSettings = await settingsService.getProjectSettings(projectPath);
-    if (projectSettings.useClaudeCodeSystemPrompt !== undefined) {
-      logger.info(
-        `${logPrefix} useClaudeCodeSystemPrompt from project settings: ${projectSettings.useClaudeCodeSystemPrompt}`
-      );
-      return projectSettings.useClaudeCodeSystemPrompt;
-    }
-    // Fall back to global settings
-    const globalSettings = await settingsService.getGlobalSettings();
-    const result = globalSettings.useClaudeCodeSystemPrompt ?? true;
-    logger.info(`${logPrefix} useClaudeCodeSystemPrompt from global settings: ${result}`);
-    return result;
-  } catch (error) {
-    logger.error(`${logPrefix} Failed to load useClaudeCodeSystemPrompt setting:`, error);
-    throw error;
-  }
-}
-/**
- * Get the default max turns setting from global settings.
- *
- * Reads the user's configured `defaultMaxTurns` setting, which controls the maximum
- * number of agent turns (tool-call round-trips) for feature execution.
- *
- * @param settingsService - Settings service instance (may be null)
- * @param logPrefix - Logging prefix for debugging
- * @returns The user's configured max turns, or {@link DEFAULT_MAX_TURNS} as default
- */
-export async function getDefaultMaxTurnsSetting(
-  settingsService?: SettingsService | null,
-  logPrefix = '[SettingsHelper]'
-): Promise<number> {
-  if (!settingsService) {
-    logger.info(
-      `${logPrefix} SettingsService not available, using default maxTurns=${DEFAULT_MAX_TURNS}`
-    );
-    return DEFAULT_MAX_TURNS;
-  }
-  try {
-    const globalSettings = await settingsService.getGlobalSettings();
-    const raw = globalSettings.defaultMaxTurns;
-    const result = Number.isFinite(raw) ? (raw as number) : DEFAULT_MAX_TURNS;
-    // Clamp to valid range
-    const clamped = Math.max(1, Math.min(MAX_ALLOWED_TURNS, Math.floor(result)));
-    logger.debug(`${logPrefix} defaultMaxTurns from global settings: ${clamped}`);
-    return clamped;
-  } catch (error) {
-    logger.error(`${logPrefix} Failed to load defaultMaxTurns setting:`, error);
-    return DEFAULT_MAX_TURNS;
-  }
-}
 /**
  * Filters out CLAUDE.md from context files when autoLoadClaudeMd is enabled
  * and rebuilds the formatted prompt without it.
@@ -689,145 +604,6 @@ export interface ProviderByModelIdResult {
   resolvedModel: string | undefined;
 }
-/** Result from resolveProviderContext */
-export interface ProviderContextResult {
-  /** The provider configuration */
-  provider: ClaudeCompatibleProvider | undefined;
-  /** Credentials for API key resolution */
-  credentials: Credentials | undefined;
-  /** The resolved Claude model ID for SDK configuration */
-  resolvedModel: string | undefined;
-  /** The original model config from the provider if found */
-  modelConfig: import('@automaker/types').ProviderModel | undefined;
-}
-/**
- * Checks if a provider is enabled.
- * Providers with enabled: undefined are treated as enabled (default state).
- * Only explicitly set enabled: false means the provider is disabled.
- */
-function isProviderEnabled(provider: ClaudeCompatibleProvider): boolean {
-  return provider.enabled !== false;
-}
-/**
- * Finds a model config in a provider's models array by ID (case-insensitive).
- */
-function findModelInProvider(
-  provider: ClaudeCompatibleProvider,
-  modelId: string
-): import('@automaker/types').ProviderModel | undefined {
-  return provider.models?.find(
-    (m) => m.id === modelId || m.id.toLowerCase() === modelId.toLowerCase()
-  );
-}
-/**
- * Resolves the provider and Claude-compatible model configuration.
- *
- * This is the central logic for resolving provider context, supporting:
- * 1. Explicit lookup by providerId (most reliable for persistence)
- * 2. Fallback lookup by modelId across all enabled providers
- * 3. Resolution of mapsToClaudeModel for SDK configuration
- *
- * @param settingsService - Settings service instance
- * @param modelId - The model ID to resolve
- * @param providerId - Optional explicit provider ID
- * @param logPrefix - Prefix for log messages
- * @returns Promise resolving to the provider context
- */
-export async function resolveProviderContext(
-  settingsService: SettingsService,
-  modelId: string,
-  providerId?: string,
-  logPrefix = '[SettingsHelper]'
-): Promise<ProviderContextResult> {
-  try {
-    const globalSettings = await settingsService.getGlobalSettings();
-    const credentials = await settingsService.getCredentials();
-    const providers = globalSettings.claudeCompatibleProviders || [];
-    logger.debug(
-      `${logPrefix} Resolving provider context: modelId="${modelId}", providerId="${providerId ?? 'none'}", providers count=${providers.length}`
-    );
-    let provider: ClaudeCompatibleProvider | undefined;
-    let modelConfig: import('@automaker/types').ProviderModel | undefined;
-    // 1. Try resolving by explicit providerId first (most reliable)
-    if (providerId) {
-      provider = providers.find((p) => p.id === providerId);
-      if (provider) {
-        if (!isProviderEnabled(provider)) {
-          logger.warn(
-            `${logPrefix} Explicitly requested provider "${provider.name}" (${providerId}) is disabled (enabled=${provider.enabled})`
-          );
-        } else {
-          logger.debug(
-            `${logPrefix} Found provider "${provider.name}" (${providerId}), enabled=${provider.enabled ?? 'undefined (treated as enabled)'}`
-          );
-          // Find the model config within this provider to check for mappings
-          modelConfig = findModelInProvider(provider, modelId);
-          if (!modelConfig && provider.models && provider.models.length > 0) {
-            logger.debug(
-              `${logPrefix} Model "${modelId}" not found in provider "${provider.name}". Available models: ${provider.models.map((m) => m.id).join(', ')}`
-            );
-          }
-        }
-      } else {
-        logger.warn(
-          `${logPrefix} Explicitly requested provider "${providerId}" not found. Available providers: ${providers.map((p) => p.id).join(', ')}`
-        );
-      }
-    }
-    // 2. Fallback to model-based lookup across all providers if modelConfig not found
-    // Note: We still search even if provider was found, to get the modelConfig for mapping
-    if (!modelConfig) {
-      for (const p of providers) {
-        if (!isProviderEnabled(p) || p.id === providerId) continue; // Skip disabled or already checked
-        const config = findModelInProvider(p, modelId);
-        if (config) {
-          // Only override provider if we didn't find one by explicit ID
-          if (!provider) {
-            provider = p;
-          }
-          modelConfig = config;
-          logger.debug(`${logPrefix} Found model "${modelId}" in provider "${p.name}" (fallback)`);
-          break;
-        }
-      }
-    }
-    // 3. Resolve the mapped Claude model if specified
-    let resolvedModel: string | undefined;
-    if (modelConfig?.mapsToClaudeModel) {
-      const { resolveModelString } = await import('@automaker/model-resolver');
-      resolvedModel = resolveModelString(modelConfig.mapsToClaudeModel);
-      logger.debug(
-        `${logPrefix} Model "${modelId}" maps to Claude model "${modelConfig.mapsToClaudeModel}" -> "${resolvedModel}"`
-      );
-    }
-    // Log final result for debugging
-    logger.debug(
-      `${logPrefix} Provider context resolved: provider=${provider?.name ?? 'none'}, modelConfig=${modelConfig ? 'found' : 'not found'}, resolvedModel=${resolvedModel ?? modelId}`
-    );
-    return { provider, credentials, resolvedModel, modelConfig };
-  } catch (error) {
-    logger.error(`${logPrefix} Failed to resolve provider context:`, error);
-    return {
-      provider: undefined,
-      credentials: undefined,
-      resolvedModel: undefined,
-      modelConfig: undefined,
-    };
-  }
-}
 /**
  * Find a ClaudeCompatibleProvider by one of its model IDs.
  * Searches through all enabled providers to find one that contains the specified model.


@@ -1,25 +0,0 @@
/**
* Terminal Theme Data - Re-export terminal themes from platform package
*
* This module re-exports terminal theme data for use in the server.
*/
import { terminalThemeColors, getTerminalThemeColors as getThemeColors } from '@automaker/platform';
import type { ThemeMode } from '@automaker/types';
import type { TerminalTheme } from '@automaker/platform';
/**
* Get terminal theme colors for a given theme mode
*/
export function getTerminalThemeColors(theme: ThemeMode): TerminalTheme {
return getThemeColors(theme);
}
/**
* Get all terminal themes
*/
export function getAllTerminalThemes(): Record<ThemeMode, TerminalTheme> {
return terminalThemeColors;
}
export default terminalThemeColors;


@@ -78,7 +78,7 @@ export async function readWorktreeMetadata(
     const metadataPath = getWorktreeMetadataPath(projectPath, branch);
     const content = (await secureFs.readFile(metadataPath, 'utf-8')) as string;
     return JSON.parse(content) as WorktreeMetadata;
-  } catch (_error) {
+  } catch (error) {
     // File doesn't exist or can't be read
     return null;
   }


@@ -5,10 +5,11 @@
* with the provider architecture. * with the provider architecture.
*/ */
import { query, type Options, type SDKUserMessage } from '@anthropic-ai/claude-agent-sdk'; import { query, type Options } from '@anthropic-ai/claude-agent-sdk';
import { BaseProvider } from './base-provider.js'; import { BaseProvider } from './base-provider.js';
import { classifyError, getUserFriendlyErrorMessage, createLogger } from '@automaker/utils'; import { classifyError, getUserFriendlyErrorMessage, createLogger } from '@automaker/utils';
import { getClaudeAuthIndicators } from '@automaker/platform';
const logger = createLogger('ClaudeProvider');
import { import {
getThinkingTokenBudget, getThinkingTokenBudget,
validateBareModelId, validateBareModelId,
@@ -16,14 +17,6 @@ import {
type ClaudeCompatibleProvider, type ClaudeCompatibleProvider,
type Credentials, type Credentials,
} from '@automaker/types'; } from '@automaker/types';
import type {
ExecuteOptions,
ProviderMessage,
InstallationStatus,
ModelDefinition,
} from './types.js';
const logger = createLogger('ClaudeProvider');
/** /**
* ProviderConfig - Union type for provider configuration * ProviderConfig - Union type for provider configuration
@@ -32,11 +25,29 @@ const logger = createLogger('ClaudeProvider');
  * Both share the same connection settings structure.
  */
 type ProviderConfig = ClaudeApiProfile | ClaudeCompatibleProvider;
+import type {
+  ExecuteOptions,
+  ProviderMessage,
+  InstallationStatus,
+  ModelDefinition,
+} from './types.js';
-// System vars are always passed from process.env regardless of profile.
-// Includes filesystem, locale, and temp directory vars that the Claude CLI
-// needs internally for config resolution and temp file creation.
-const SYSTEM_ENV_VARS = [
+// Explicit allowlist of environment variables to pass to the SDK.
+// Only these vars are passed - nothing else from process.env leaks through.
+const ALLOWED_ENV_VARS = [
+  // Authentication
+  'ANTHROPIC_API_KEY',
+  'ANTHROPIC_AUTH_TOKEN',
+  // Endpoint configuration
+  'ANTHROPIC_BASE_URL',
+  'API_TIMEOUT_MS',
+  // Model mappings
+  'ANTHROPIC_DEFAULT_HAIKU_MODEL',
+  'ANTHROPIC_DEFAULT_SONNET_MODEL',
+  'ANTHROPIC_DEFAULT_OPUS_MODEL',
+  // Traffic control
+  'CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC',
+  // System vars (always from process.env)
   'PATH',
   'HOME',
   'SHELL',
@@ -44,13 +55,11 @@ const SYSTEM_ENV_VARS = [
   'USER',
   'LANG',
   'LC_ALL',
-  'TMPDIR',
-  'XDG_CONFIG_HOME',
-  'XDG_DATA_HOME',
-  'XDG_CACHE_HOME',
-  'XDG_STATE_HOME',
 ];
+// System vars are always passed from process.env regardless of profile
+const SYSTEM_ENV_VARS = ['PATH', 'HOME', 'SHELL', 'TERM', 'USER', 'LANG', 'LC_ALL'];
 /**
  * Check if the config is a ClaudeCompatibleProvider (new system)
  * by checking for the 'models' array property
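The allowlist change above can be sketched as a small filtering helper. This is a minimal illustration, not the real `buildEnv` (whose name the later hunk references): `pickSystemEnv` is a hypothetical name, and the actual implementation also layers provider config and credentials on top of the system vars.

```typescript
// Hypothetical sketch of the env allowlist idea: copy only named variables
// from the process environment so nothing else leaks through to the SDK.
const SYSTEM_ENV_VARS = ['PATH', 'HOME', 'SHELL', 'TERM', 'USER', 'LANG', 'LC_ALL'] as const;

function pickSystemEnv(source: Record<string, string | undefined>): Record<string, string> {
  const env: Record<string, string> = {};
  for (const key of SYSTEM_ENV_VARS) {
    const value = source[key];
    // Copy only allowlisted variables; everything else stays behind.
    if (value !== undefined) env[key] = value;
  }
  return env;
}
```

The same pattern extends naturally to the `ALLOWED_ENV_VARS` list by swapping the array being iterated.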
@@ -188,7 +197,6 @@ export class ClaudeProvider extends BaseProvider {
   async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
     // Validate that model doesn't have a provider prefix
     // AgentService should strip prefixes before passing to providers
-    // Claude doesn't use a provider prefix, so we don't need to specify an expected provider
     validateBareModelId(options.model, 'ClaudeProvider');
     const {
@@ -196,7 +204,7 @@ export class ClaudeProvider extends BaseProvider {
       model,
       cwd,
       systemPrompt,
-      maxTurns = 1000,
+      maxTurns = 20,
       allowedTools,
       abortController,
       conversationHistory,
@@ -211,11 +219,8 @@ export class ClaudeProvider extends BaseProvider {
     // claudeCompatibleProvider takes precedence over claudeApiProfile
     const providerConfig = claudeCompatibleProvider || claudeApiProfile;
-    // Build thinking configuration
-    // Adaptive thinking (Opus 4.6): don't set maxThinkingTokens, model uses adaptive by default
-    // Manual thinking (Haiku/Sonnet): use budget_tokens
-    const maxThinkingTokens =
-      thinkingLevel === 'adaptive' ? undefined : getThinkingTokenBudget(thinkingLevel);
+    // Convert thinking level to token budget
+    const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
     // Build Claude SDK options
     const sdkOptions: Options = {
@@ -229,8 +234,6 @@ export class ClaudeProvider extends BaseProvider {
       env: buildEnv(providerConfig, credentials),
       // Pass through allowedTools if provided by caller (decided by sdk-options.ts)
       ...(allowedTools && { allowedTools }),
-      // Restrict available built-in tools if specified (tools: [] disables all tools)
-      ...(options.tools && { tools: options.tools }),
       // AUTONOMOUS MODE: Always bypass permissions for fully autonomous operation
       permissionMode: 'bypassPermissions',
       allowDangerouslySkipPermissions: true,
@@ -252,14 +255,14 @@ export class ClaudeProvider extends BaseProvider {
     };
     // Build prompt payload
-    let promptPayload: string | AsyncIterable<SDKUserMessage>;
+    let promptPayload: string | AsyncIterable<any>;
     if (Array.isArray(prompt)) {
       // Multi-part prompt (with images)
       promptPayload = (async function* () {
-        const multiPartPrompt: SDKUserMessage = {
+        const multiPartPrompt = {
           type: 'user' as const,
-          session_id: sdkSessionId || '',
+          session_id: '',
           message: {
             role: 'user' as const,
             content: prompt,
@@ -311,16 +314,12 @@ export class ClaudeProvider extends BaseProvider {
         ? `${userMessage}\n\nTip: If you're running multiple features in auto-mode, consider reducing concurrency (maxConcurrency setting) to avoid hitting rate limits.`
         : userMessage;
-      const enhancedError = new Error(message) as Error & {
-        originalError: unknown;
-        type: string;
-        retryAfter?: number;
-      };
-      enhancedError.originalError = error;
-      enhancedError.type = errorInfo.type;
+      const enhancedError = new Error(message);
+      (enhancedError as any).originalError = error;
+      (enhancedError as any).type = errorInfo.type;
       if (errorInfo.isRateLimit) {
-        enhancedError.retryAfter = errorInfo.retryAfter;
+        (enhancedError as any).retryAfter = errorInfo.retryAfter;
       }
       throw enhancedError;
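The intersection-type cast in this hunk can be isolated into a small helper. A minimal sketch of the pattern (`enhanceError` and `EnhancedError` are illustrative names, not identifiers from the codebase):

```typescript
// Attach structured metadata to an Error without `any` casts: declare the
// extra fields once as an intersection type, then cast the fresh Error to it.
type EnhancedError = Error & {
  originalError: unknown;
  type: string;
  retryAfter?: number;
};

function enhanceError(
  message: string,
  original: unknown,
  type: string,
  retryAfter?: number
): EnhancedError {
  const err = new Error(message) as EnhancedError;
  err.originalError = original; // keep the raw error for callers that inspect it
  err.type = type;
  if (retryAfter !== undefined) err.retryAfter = retryAfter;
  return err;
}
```

The benefit over scattered `(err as any).x = …` assignments is that the field names and their types are checked at every assignment and every read site.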
@@ -332,37 +331,13 @@ export class ClaudeProvider extends BaseProvider {
    */
   async detectInstallation(): Promise<InstallationStatus> {
     // Claude SDK is always available since it's a dependency
-    // Check all four supported auth methods, mirroring the logic in buildEnv():
-    // 1. ANTHROPIC_API_KEY environment variable
-    // 2. ANTHROPIC_AUTH_TOKEN environment variable
-    // 3. credentials?.apiKeys?.anthropic (credentials file, checked via platform indicators)
-    // 4. Claude Max CLI OAuth (SDK handles this automatically; detected via getClaudeAuthIndicators)
-    const hasEnvApiKey = !!process.env.ANTHROPIC_API_KEY;
-    const hasEnvAuthToken = !!process.env.ANTHROPIC_AUTH_TOKEN;
-    // Check credentials file and CLI OAuth indicators (same sources used by buildEnv)
-    let hasCredentialsApiKey = false;
-    let hasCliOAuth = false;
-    try {
-      const indicators = await getClaudeAuthIndicators();
-      hasCredentialsApiKey = !!indicators.credentials?.hasApiKey;
-      hasCliOAuth = !!(
-        indicators.credentials?.hasOAuthToken ||
-        indicators.hasStatsCacheWithActivity ||
-        (indicators.hasSettingsFile && indicators.hasProjectsSessions)
-      );
-    } catch {
-      // If we can't check indicators, fall back to env vars only
-    }
-    const hasApiKey = hasEnvApiKey || hasCredentialsApiKey;
-    const authenticated = hasEnvApiKey || hasEnvAuthToken || hasCredentialsApiKey || hasCliOAuth;
+    const hasApiKey = !!process.env.ANTHROPIC_API_KEY;
     const status: InstallationStatus = {
       installed: true,
       method: 'sdk',
       hasApiKey,
-      authenticated,
+      authenticated: hasApiKey,
     };
     return status;
@@ -374,30 +349,18 @@ export class ClaudeProvider extends BaseProvider {
   getAvailableModels(): ModelDefinition[] {
     const models = [
       {
-        id: 'claude-opus-4-6',
-        name: 'Claude Opus 4.6',
-        modelString: 'claude-opus-4-6',
+        id: 'claude-opus-4-5-20251101',
+        name: 'Claude Opus 4.5',
+        modelString: 'claude-opus-4-5-20251101',
         provider: 'anthropic',
-        description: 'Most capable Claude model with adaptive thinking',
+        description: 'Most capable Claude model',
         contextWindow: 200000,
-        maxOutputTokens: 128000,
+        maxOutputTokens: 16000,
         supportsVision: true,
         supportsTools: true,
         tier: 'premium' as const,
         default: true,
       },
-      {
-        id: 'claude-sonnet-4-6',
-        name: 'Claude Sonnet 4.6',
-        modelString: 'claude-sonnet-4-6',
-        provider: 'anthropic',
-        description: 'Balanced performance and cost with enhanced reasoning',
-        contextWindow: 200000,
-        maxOutputTokens: 64000,
-        supportsVision: true,
-        supportsTools: true,
-        tier: 'standard' as const,
-      },
       {
         id: 'claude-sonnet-4-20250514',
         name: 'Claude Sonnet 4',

View File

@@ -19,11 +19,12 @@ const MAX_OUTPUT_16K = 16000;
 export const CODEX_MODELS: ModelDefinition[] = [
   // ========== Recommended Codex Models ==========
   {
-    id: CODEX_MODEL_MAP.gpt53Codex,
-    name: 'GPT-5.3-Codex',
-    modelString: CODEX_MODEL_MAP.gpt53Codex,
+    id: CODEX_MODEL_MAP.gpt52Codex,
+    name: 'GPT-5.2-Codex',
+    modelString: CODEX_MODEL_MAP.gpt52Codex,
     provider: 'openai',
-    description: 'Latest frontier agentic coding model.',
+    description:
+      'Most advanced agentic coding model for complex software engineering (default for ChatGPT users).',
     contextWindow: CONTEXT_WINDOW_256K,
     maxOutputTokens: MAX_OUTPUT_32K,
     supportsVision: true,
@@ -32,38 +33,12 @@ export const CODEX_MODELS: ModelDefinition[] = [
     default: true,
     hasReasoning: true,
   },
-  {
-    id: CODEX_MODEL_MAP.gpt53CodexSpark,
-    name: 'GPT-5.3-Codex-Spark',
-    modelString: CODEX_MODEL_MAP.gpt53CodexSpark,
-    provider: 'openai',
-    description: 'Near-instant real-time coding model, 1000+ tokens/sec.',
-    contextWindow: CONTEXT_WINDOW_256K,
-    maxOutputTokens: MAX_OUTPUT_32K,
-    supportsVision: true,
-    supportsTools: true,
-    tier: 'premium' as const,
-    hasReasoning: true,
-  },
-  {
-    id: CODEX_MODEL_MAP.gpt52Codex,
-    name: 'GPT-5.2-Codex',
-    modelString: CODEX_MODEL_MAP.gpt52Codex,
-    provider: 'openai',
-    description: 'Frontier agentic coding model.',
-    contextWindow: CONTEXT_WINDOW_256K,
-    maxOutputTokens: MAX_OUTPUT_32K,
-    supportsVision: true,
-    supportsTools: true,
-    tier: 'premium' as const,
-    hasReasoning: true,
-  },
   {
     id: CODEX_MODEL_MAP.gpt51CodexMax,
     name: 'GPT-5.1-Codex-Max',
     modelString: CODEX_MODEL_MAP.gpt51CodexMax,
     provider: 'openai',
-    description: 'Codex-optimized flagship for deep and fast reasoning.',
+    description: 'Optimized for long-horizon, agentic coding tasks in Codex.',
     contextWindow: CONTEXT_WINDOW_256K,
     maxOutputTokens: MAX_OUTPUT_32K,
     supportsVision: true,
@@ -76,46 +51,7 @@ export const CODEX_MODELS: ModelDefinition[] = [
     name: 'GPT-5.1-Codex-Mini',
     modelString: CODEX_MODEL_MAP.gpt51CodexMini,
     provider: 'openai',
-    description: 'Optimized for codex. Cheaper, faster, but less capable.',
-    contextWindow: CONTEXT_WINDOW_128K,
-    maxOutputTokens: MAX_OUTPUT_16K,
-    supportsVision: true,
-    supportsTools: true,
-    tier: 'basic' as const,
-    hasReasoning: false,
-  },
-  {
-    id: CODEX_MODEL_MAP.gpt51Codex,
-    name: 'GPT-5.1-Codex',
-    modelString: CODEX_MODEL_MAP.gpt51Codex,
-    provider: 'openai',
-    description: 'Original GPT-5.1 Codex agentic coding model.',
-    contextWindow: CONTEXT_WINDOW_256K,
-    maxOutputTokens: MAX_OUTPUT_32K,
-    supportsVision: true,
-    supportsTools: true,
-    tier: 'standard' as const,
-    hasReasoning: true,
-  },
-  {
-    id: CODEX_MODEL_MAP.gpt5Codex,
-    name: 'GPT-5-Codex',
-    modelString: CODEX_MODEL_MAP.gpt5Codex,
-    provider: 'openai',
-    description: 'Original GPT-5 Codex model.',
-    contextWindow: CONTEXT_WINDOW_128K,
-    maxOutputTokens: MAX_OUTPUT_16K,
-    supportsVision: true,
-    supportsTools: true,
-    tier: 'standard' as const,
-    hasReasoning: true,
-  },
-  {
-    id: CODEX_MODEL_MAP.gpt5CodexMini,
-    name: 'GPT-5-Codex-Mini',
-    modelString: CODEX_MODEL_MAP.gpt5CodexMini,
-    provider: 'openai',
-    description: 'Smaller, cheaper GPT-5 Codex variant.',
+    description: 'Smaller, more cost-effective version for faster workflows.',
     contextWindow: CONTEXT_WINDOW_128K,
     maxOutputTokens: MAX_OUTPUT_16K,
     supportsVision: true,
@@ -130,7 +66,7 @@ export const CODEX_MODELS: ModelDefinition[] = [
     name: 'GPT-5.2',
     modelString: CODEX_MODEL_MAP.gpt52,
     provider: 'openai',
-    description: 'Latest frontier model with improvements across knowledge, reasoning and coding.',
+    description: 'Best general agentic model for tasks across industries and domains.',
     contextWindow: CONTEXT_WINDOW_256K,
     maxOutputTokens: MAX_OUTPUT_32K,
     supportsVision: true,
@@ -151,19 +87,6 @@ export const CODEX_MODELS: ModelDefinition[] = [
     tier: 'standard' as const,
     hasReasoning: true,
   },
-  {
-    id: CODEX_MODEL_MAP.gpt5,
-    name: 'GPT-5',
-    modelString: CODEX_MODEL_MAP.gpt5,
-    provider: 'openai',
-    description: 'Base GPT-5 model.',
-    contextWindow: CONTEXT_WINDOW_128K,
-    maxOutputTokens: MAX_OUTPUT_16K,
-    supportsVision: true,
-    supportsTools: true,
-    tier: 'standard' as const,
-    hasReasoning: true,
-  },
 ];
 /**

View File

@@ -30,9 +30,11 @@ import type {
   ModelDefinition,
 } from './types.js';
 import {
+  CODEX_MODEL_MAP,
   supportsReasoningEffort,
   validateBareModelId,
   calculateReasoningTimeout,
+  DEFAULT_TIMEOUT_MS,
   type CodexApprovalPolicy,
   type CodexSandboxMode,
   type CodexAuthStatus,
@@ -51,14 +53,18 @@ import { CODEX_MODELS } from './codex-models.js';
 const CODEX_COMMAND = 'codex';
 const CODEX_EXEC_SUBCOMMAND = 'exec';
-const CODEX_RESUME_SUBCOMMAND = 'resume';
 const CODEX_JSON_FLAG = '--json';
 const CODEX_MODEL_FLAG = '--model';
 const CODEX_VERSION_FLAG = '--version';
-const CODEX_CONFIG_FLAG = '--config';
-const CODEX_ADD_DIR_FLAG = '--add-dir';
-const CODEX_SEARCH_FLAG = '--search';
+const CODEX_SANDBOX_FLAG = '--sandbox';
+const CODEX_APPROVAL_FLAG = '--ask-for-approval';
 const CODEX_OUTPUT_SCHEMA_FLAG = '--output-schema';
+const CODEX_CONFIG_FLAG = '--config';
+const CODEX_IMAGE_FLAG = '--image';
+const CODEX_ADD_DIR_FLAG = '--add-dir';
 const CODEX_SKIP_GIT_REPO_CHECK_FLAG = '--skip-git-repo-check';
+const CODEX_RESUME_FLAG = 'resume';
 const CODEX_REASONING_EFFORT_KEY = 'reasoning_effort';
 const CODEX_YOLO_FLAG = '--dangerously-bypass-approvals-and-sandbox';
 const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
@@ -98,8 +104,11 @@ const TEXT_ENCODING = 'utf-8';
  *
  * @see calculateReasoningTimeout from @automaker/types
  */
-const CODEX_CLI_TIMEOUT_MS = 120000; // 2 minutes — matches CLI provider base timeout
+const CODEX_CLI_TIMEOUT_MS = DEFAULT_TIMEOUT_MS;
 const CODEX_FEATURE_GENERATION_BASE_TIMEOUT_MS = 300000; // 5 minutes for feature generation
+const CONTEXT_WINDOW_256K = 256000;
+const MAX_OUTPUT_32K = 32000;
+const MAX_OUTPUT_16K = 16000;
 const SYSTEM_PROMPT_SEPARATOR = '\n\n';
 const CODEX_INSTRUCTIONS_DIR = '.codex';
 const CODEX_INSTRUCTIONS_SECTION = 'Codex Project Instructions';
@@ -127,16 +136,11 @@ const DEFAULT_ALLOWED_TOOLS = [
   'Read',
   'Write',
   'Edit',
-  'MultiEdit',
   'Glob',
   'Grep',
-  'LS',
   'Bash',
   'WebSearch',
   'WebFetch',
-  'TodoWrite',
-  'Task',
-  'Skill',
 ] as const;
 const SEARCH_TOOL_NAMES = new Set(['WebSearch', 'WebFetch']);
 const MIN_MAX_TURNS = 1;
@@ -206,42 +210,16 @@ function isSdkEligible(options: ExecuteOptions): boolean {
   return isNoToolsRequested(options) && !hasMcpServersConfigured(options);
 }
-function isSdkEligibleWithApiKey(options: ExecuteOptions): boolean {
-  // When using an API key (not CLI OAuth), prefer SDK over CLI to avoid OAuth issues.
-  // SDK mode is used when MCP servers are not configured (MCP requires CLI).
-  // Tool requests are handled by the SDK, so we allow SDK mode even with tools.
-  return !hasMcpServersConfigured(options);
-}
 async function resolveCodexExecutionPlan(options: ExecuteOptions): Promise<CodexExecutionPlan> {
   const cliPath = await findCodexCliPath();
   const authIndicators = await getCodexAuthIndicators();
   const openAiApiKey = await resolveOpenAiApiKey();
   const hasApiKey = Boolean(openAiApiKey);
-  const cliAvailable = Boolean(cliPath);
-  // CLI OAuth login takes priority: if the user has logged in via `codex login`,
-  // use the CLI regardless of whether an API key is also stored.
-  // hasOAuthToken = OAuth session from `codex login`
-  // authIndicators.hasApiKey = API key stored in Codex's own auth file (via `codex login --api-key`)
-  // Both are "CLI-native" auth — distinct from an API key stored in Automaker's credentials.
-  const hasCliNativeAuth = authIndicators.hasOAuthToken || authIndicators.hasApiKey;
+  const cliAuthenticated = authIndicators.hasOAuthToken || authIndicators.hasApiKey || hasApiKey;
   const sdkEligible = isSdkEligible(options);
+  const cliAvailable = Boolean(cliPath);
-  // If CLI is available and the user authenticated via the CLI (`codex login`),
-  // prefer CLI mode over SDK. This ensures `codex login` sessions take priority
-  // over API keys stored in Automaker's credentials.
-  if (cliAvailable && hasCliNativeAuth) {
-    return {
-      mode: CODEX_EXECUTION_MODE_CLI,
-      cliPath,
-      openAiApiKey,
-    };
-  }
-  // No CLI-native auth — prefer SDK when an API key is available.
-  // Using SDK with an API key avoids OAuth issues that can arise with the CLI.
-  // MCP servers still require CLI mode since the SDK doesn't support MCP.
-  if (hasApiKey && isSdkEligibleWithApiKey(options)) {
+  if (hasApiKey) {
     return {
       mode: CODEX_EXECUTION_MODE_SDK,
       cliPath,
@@ -249,16 +227,6 @@ async function resolveCodexExecutionPlan(options: ExecuteOptions): Promise<Codex
     };
   }
-  // MCP servers are requested with an API key but no CLI-native auth — use CLI mode
-  // with the API key passed as an environment variable.
-  if (hasApiKey && cliAvailable) {
-    return {
-      mode: CODEX_EXECUTION_MODE_CLI,
-      cliPath,
-      openAiApiKey,
-    };
-  }
   if (sdkEligible) {
     if (!cliAvailable) {
       throw new Error(ERROR_CODEX_SDK_AUTH_REQUIRED);
@@ -269,9 +237,15 @@ async function resolveCodexExecutionPlan(options: ExecuteOptions): Promise<Codex
     throw new Error(ERROR_CODEX_CLI_REQUIRED);
   }
-  // At this point, neither hasCliNativeAuth nor hasApiKey is true,
-  // so authentication is required regardless.
-  throw new Error(ERROR_CODEX_AUTH_REQUIRED);
+  if (!cliAuthenticated) {
+    throw new Error(ERROR_CODEX_AUTH_REQUIRED);
+  }
+  return {
+    mode: CODEX_EXECUTION_MODE_CLI,
+    cliPath,
+    openAiApiKey,
+  };
 }
 function getEventType(event: Record<string, unknown>): string | null {
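The auth-precedence branching in `resolveCodexExecutionPlan` can be reduced to a small decision table. This is a hypothetical, simplified sketch with invented names; the real function also weighs MCP servers, SDK eligibility, and CLI availability before throwing its specific error constants.

```typescript
// Simplified precedence: CLI-native auth (a `codex login` session or a key
// stored by the CLI itself) wins over an API key held in the app's own
// credentials, which in turn prefers the SDK execution path.
type ExecutionMode = 'cli' | 'sdk';

function pickMode(auth: {
  cliAvailable: boolean;
  hasCliNativeAuth: boolean; // `codex login` session or CLI-stored API key
  hasApiKey: boolean; // API key from the app's own credentials
}): ExecutionMode {
  if (auth.cliAvailable && auth.hasCliNativeAuth) return 'cli';
  if (auth.hasApiKey) return 'sdk';
  throw new Error('authentication required');
}
```

Keeping the precedence in one pure function like this makes it cheap to unit-test every combination of auth indicators.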
@@ -361,14 +335,9 @@ function resolveSystemPrompt(systemPrompt?: unknown): string | null {
   return null;
 }
-function buildPromptText(options: ExecuteOptions): string {
-  return typeof options.prompt === 'string'
-    ? options.prompt
-    : extractTextFromContent(options.prompt);
-}
 function buildCombinedPrompt(options: ExecuteOptions, systemPromptText?: string | null): string {
-  const promptText = buildPromptText(options);
+  const promptText =
+    typeof options.prompt === 'string' ? options.prompt : extractTextFromContent(options.prompt);
   const historyText = options.conversationHistory
     ? formatHistoryAsText(options.conversationHistory)
     : '';
@@ -381,11 +350,6 @@ function buildCombinedPrompt(options: ExecuteOptions, systemPromptText?: string
   return `${historyText}${systemSection}${HISTORY_HEADER}${promptText}`;
 }
-function buildResumePrompt(options: ExecuteOptions): string {
-  const promptText = buildPromptText(options);
-  return `${HISTORY_HEADER}${promptText}`;
-}
 function formatConfigValue(value: string | number | boolean): string {
   return String(value);
 }
@@ -739,9 +703,9 @@ export class CodexProvider extends BaseProvider {
   }
   async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
-    // Validate that model doesn't have a provider prefix (except codex- which should already be stripped)
+    // Validate that model doesn't have a provider prefix
     // AgentService should strip prefixes before passing to providers
-    validateBareModelId(options.model, 'CodexProvider', 'codex');
+    validateBareModelId(options.model, 'CodexProvider');
     try {
       const mcpServers = options.mcpServers ?? {};
@@ -753,16 +717,6 @@ export class CodexProvider extends BaseProvider {
       );
       const baseSystemPrompt = resolveSystemPrompt(options.systemPrompt);
       const resolvedMaxTurns = resolveMaxTurns(options.maxTurns);
-      if (resolvedMaxTurns === null && options.maxTurns === undefined) {
-        logger.warn(
-          `[executeQuery] maxTurns not provided — Codex CLI will use its internal default. ` +
-            `This may cause premature completion. Model: ${options.model}`
-        );
-      } else {
-        logger.info(
-          `[executeQuery] maxTurns: requested=${options.maxTurns}, resolved=${resolvedMaxTurns}, model=${options.model}`
-        );
-      }
       const resolvedAllowedTools = options.allowedTools ?? Array.from(DEFAULT_ALLOWED_TOOLS);
       const restrictTools = !hasMcpServers || options.mcpUnrestrictedTools === false;
       const wantsOutputSchema = Boolean(
@@ -804,27 +758,24 @@ export class CodexProvider extends BaseProvider {
         options.cwd,
         codexSettings.sandboxMode !== 'danger-full-access'
       );
-      const resolvedSandboxMode = sandboxCheck.enabled
-        ? codexSettings.sandboxMode
-        : 'danger-full-access';
       if (!sandboxCheck.enabled && sandboxCheck.message) {
         console.warn(`[CodexProvider] ${sandboxCheck.message}`);
       }
       const searchEnabled =
         codexSettings.enableWebSearch || resolveSearchEnabled(resolvedAllowedTools, restrictTools);
-      const isResumeQuery = Boolean(options.sdkSessionId);
-      const schemaPath = isResumeQuery
-        ? null
-        : await writeOutputSchemaFile(options.cwd, options.outputFormat);
-      const imageBlocks =
-        !isResumeQuery && codexSettings.enableImages ? extractImageBlocks(options.prompt) : [];
-      const imagePaths = isResumeQuery ? [] : await writeImageFiles(options.cwd, imageBlocks);
+      const outputSchemaPath = await writeOutputSchemaFile(options.cwd, options.outputFormat);
+      const imageBlocks = codexSettings.enableImages ? extractImageBlocks(options.prompt) : [];
+      const imagePaths = await writeImageFiles(options.cwd, imageBlocks);
       const approvalPolicy =
         hasMcpServers && options.mcpAutoApproveTools !== undefined
           ? options.mcpAutoApproveTools
             ? 'never'
             : 'on-request'
           : codexSettings.approvalPolicy;
-      const promptText = isResumeQuery
-        ? buildResumePrompt(options)
-        : buildCombinedPrompt(options, combinedSystemPrompt);
+      const promptText = buildCombinedPrompt(options, combinedSystemPrompt);
       const commandPath = executionPlan.cliPath || CODEX_COMMAND;
       // Build config overrides for max turns and reasoning effort
@@ -850,43 +801,25 @@ export class CodexProvider extends BaseProvider {
         overrides.push({ key: 'features.web_search_request', value: true });
       }
-      const configOverrideArgs = buildConfigOverrides(overrides);
+      const configOverrides = buildConfigOverrides(overrides);
       const preExecArgs: string[] = [];
       // Add additional directories with write access
-      if (
-        !isResumeQuery &&
-        codexSettings.additionalDirs &&
-        codexSettings.additionalDirs.length > 0
-      ) {
+      if (codexSettings.additionalDirs && codexSettings.additionalDirs.length > 0) {
         for (const dir of codexSettings.additionalDirs) {
           preExecArgs.push(CODEX_ADD_DIR_FLAG, dir);
         }
       }
-      // If images were written to disk, add the image directory so the CLI can access them.
-      // Note: imagePaths is set to [] when isResumeQuery is true, so this check is sufficient.
-      if (imagePaths.length > 0) {
-        const imageDir = path.join(options.cwd, CODEX_INSTRUCTIONS_DIR, IMAGE_TEMP_DIR);
-        preExecArgs.push(CODEX_ADD_DIR_FLAG, imageDir);
-      }
       // Model is already bare (no prefix) - validated by executeQuery
-      const codexCommand = isResumeQuery
-        ? [CODEX_EXEC_SUBCOMMAND, CODEX_RESUME_SUBCOMMAND]
-        : [CODEX_EXEC_SUBCOMMAND];
       const args = [
-        ...codexCommand,
+        CODEX_EXEC_SUBCOMMAND,
         CODEX_YOLO_FLAG,
         CODEX_SKIP_GIT_REPO_CHECK_FLAG,
         ...preExecArgs,
         CODEX_MODEL_FLAG,
         options.model,
         CODEX_JSON_FLAG,
-        ...configOverrideArgs,
-        ...(schemaPath ? [CODEX_OUTPUT_SCHEMA_FLAG, schemaPath] : []),
-        ...(options.sdkSessionId ? [options.sdkSessionId] : []),
         '-', // Read prompt from stdin to avoid shell escaping issues
       ];
@@ -933,36 +866,16 @@ export class CodexProvider extends BaseProvider {
       // Enhance error message with helpful context
       let enhancedError = errorText;
-      const errorLower = errorText.toLowerCase();
-      if (errorLower.includes('rate limit')) {
+      if (errorText.toLowerCase().includes('rate limit')) {
         enhancedError = `${errorText}\n\nTip: You're being rate limited. Try reducing concurrent tasks or waiting a few minutes before retrying.`;
-      } else if (errorLower.includes('authentication') || errorLower.includes('unauthorized')) {
-        enhancedError = `${errorText}\n\nTip: Check that your OPENAI_API_KEY is set correctly or run 'codex login' to authenticate.`;
       } else if (
-        errorLower.includes('model does not exist') ||
-        errorLower.includes('requested model does not exist') ||
-        errorLower.includes('do not have access') ||
-        errorLower.includes('model_not_found') ||
-        errorLower.includes('invalid_model')
+        errorText.toLowerCase().includes('authentication') ||
+        errorText.toLowerCase().includes('unauthorized')
       ) {
-        enhancedError =
-          `${errorText}\n\nTip: The model '${options.model}' may not be available on your OpenAI plan. ` +
-          `See https://platform.openai.com/docs/models for available models. ` +
-          `Some models require a ChatGPT Pro/Plus subscription—authenticate with 'codex login' instead of an API key.`;
+        enhancedError = `${errorText}\n\nTip: Check that your OPENAI_API_KEY is set correctly or run 'codex auth login' to authenticate.`;
       } else if (
-        errorLower.includes('stream disconnected') ||
-        errorLower.includes('stream ended') ||
-        errorLower.includes('connection reset')
-      ) {
-        enhancedError =
-          `${errorText}\n\nTip: The connection to OpenAI was interrupted. This can happen due to:\n` +
-          `- Network instability\n` +
-          `- The model not being available on your plan\n` +
-          `- Server-side timeouts for long-running requests\n` +
-          `Try again, or switch to a different model.`;
-      } else if (
-        errorLower.includes('command not found') ||
-        errorLower.includes('is not recognized as an internal or external command')
+        errorText.toLowerCase().includes('not found') ||
+        errorText.toLowerCase().includes('command not found')
       ) {
         enhancedError = `${errorText}\n\nTip: Make sure the Codex CLI is installed. Run 'npm install -g @openai/codex-cli' to install.`;
       }
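The keyword matching in this hunk can be sketched as a small lookup. `tipFor` is a hypothetical helper name and the patterns are abbreviated; both sides of the diff match more phrases per category than shown here.

```typescript
// Lowercase the error text once, then match substrings in priority order
// to decide which remediation tip (if any) to append.
function tipFor(errorText: string): string | null {
  const lower = errorText.toLowerCase();
  if (lower.includes('rate limit')) {
    return 'Try reducing concurrent tasks or waiting before retrying.';
  }
  if (lower.includes('authentication') || lower.includes('unauthorized')) {
    return "Check your OPENAI_API_KEY or run 'codex login' to authenticate.";
  }
  if (lower.includes('command not found')) {
    return 'Make sure the Codex CLI is installed.';
  }
  return null; // no tip for unrecognized errors
}
```

Note that lowering the text once (the left-hand side's `errorLower`) avoids repeating `toLowerCase()` in every branch.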
@@ -1120,6 +1033,7 @@ export class CodexProvider extends BaseProvider {
   async detectInstallation(): Promise<InstallationStatus> {
     const cliPath = await findCodexCliPath();
     const hasApiKey = Boolean(await resolveOpenAiApiKey());
+    const authIndicators = await getCodexAuthIndicators();
     const installed = !!cliPath;
     let version = '';
@@ -1131,7 +1045,7 @@ export class CodexProvider extends BaseProvider {
         cwd: process.cwd(),
       });
       version = result.stdout.trim();
-    } catch {
+    } catch (error) {
       version = '';
     }
   }

View File

@@ -15,9 +15,6 @@ const SDK_HISTORY_HEADER = 'Current request:\n';
 const DEFAULT_RESPONSE_TEXT = '';
 const SDK_ERROR_DETAILS_LABEL = 'Details:';
-type SdkReasoningEffort = 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';
-const SDK_REASONING_EFFORTS = new Set<string>(['minimal', 'low', 'medium', 'high', 'xhigh']);
 type PromptBlock = {
   type: string;
   text?: string;
@@ -102,52 +99,38 @@ export async function* executeCodexSdkQuery(
   const apiKey = resolveApiKey();
   const codex = new Codex({ apiKey });
-  // Build thread options with model
-  // The model must be passed to startThread/resumeThread so the SDK
-  // knows which model to use for the conversation. Without this,
-  // the SDK may use a default model that the user doesn't have access to.
-  const threadOptions: {
-    model?: string;
-    modelReasoningEffort?: SdkReasoningEffort;
-  } = {};
-  if (options.model) {
-    threadOptions.model = options.model;
-  }
-  // Add reasoning effort to thread options if model supports it
-  if (
-    options.reasoningEffort &&
-    options.model &&
-    supportsReasoningEffort(options.model) &&
-    options.reasoningEffort !== 'none' &&
-    SDK_REASONING_EFFORTS.has(options.reasoningEffort)
-  ) {
-    threadOptions.modelReasoningEffort = options.reasoningEffort as SdkReasoningEffort;
-  }
   // Resume existing thread or start new one
   let thread;
   if (options.sdkSessionId) {
     try {
-      thread = codex.resumeThread(options.sdkSessionId, threadOptions);
+      thread = codex.resumeThread(options.sdkSessionId);
     } catch {
       // If resume fails, start a new thread
-      thread = codex.startThread(threadOptions);
+      thread = codex.startThread();
     }
   } else {
-    thread = codex.startThread(threadOptions);
+    thread = codex.startThread();
   }
   const promptText = buildPromptText(options, systemPrompt);
-  // Build run options
+  // Build run options with reasoning effort if supported
   const runOptions: {
     signal?: AbortSignal;
+    reasoning?: { effort: string };
   } = {
     signal: options.abortController?.signal,
   };
+  // Add reasoning effort if model supports it and reasoningEffort is specified
+  if (
+    options.reasoningEffort &&
+    supportsReasoningEffort(options.model) &&
+    options.reasoningEffort !== 'none'
+  ) {
+    runOptions.reasoning = { effort: options.reasoningEffort };
+  }
   // Run the query
   const result = await thread.run(promptText, runOptions);
@@ -177,42 +160,10 @@ export async function* executeCodexSdkQuery(
} catch (error) { } catch (error) {
const errorInfo = classifyError(error); const errorInfo = classifyError(error);
const userMessage = getUserFriendlyErrorMessage(error); const userMessage = getUserFriendlyErrorMessage(error);
let combinedMessage = buildSdkErrorMessage(errorInfo.message, userMessage); const combinedMessage = buildSdkErrorMessage(errorInfo.message, userMessage);
// Enhance error messages with actionable tips for common Codex issues
// Normalize inputs to avoid crashes from nullish values
const errorLower = (errorInfo?.message ?? '').toLowerCase();
const modelLabel = options?.model ?? '<unknown model>';
if (
errorLower.includes('does not exist') ||
errorLower.includes('model_not_found') ||
errorLower.includes('invalid_model')
) {
// Model not found - provide helpful guidance
combinedMessage +=
`\n\nTip: The model '${modelLabel}' may not be available on your OpenAI plan. ` +
`Some models (like gpt-5.3-codex) require a ChatGPT Pro/Plus subscription and OAuth login via 'codex login'. ` +
`Try using a different model (e.g., gpt-5.1 or gpt-5.2), or authenticate with 'codex login' instead of an API key.`;
} else if (
errorLower.includes('stream disconnected') ||
errorLower.includes('stream ended') ||
errorLower.includes('connection reset') ||
errorLower.includes('socket hang up')
) {
// Stream disconnection - provide helpful guidance
combinedMessage +=
`\n\nTip: The connection to OpenAI was interrupted. This can happen due to:\n` +
`- Network instability\n` +
`- The model not being available on your plan (try 'codex login' for OAuth authentication)\n` +
`- Server-side timeouts for long-running requests\n` +
`Try again, or switch to a different model.`;
}
console.error('[CodexSDK] executeQuery() error during execution:', { console.error('[CodexSDK] executeQuery() error during execution:', {
type: errorInfo.type, type: errorInfo.type,
message: errorInfo.message, message: errorInfo.message,
model: options.model,
isRateLimit: errorInfo.isRateLimit, isRateLimit: errorInfo.isRateLimit,
retryAfter: errorInfo.retryAfter, retryAfter: errorInfo.retryAfter,
stack: error instanceof Error ? error.stack : undefined, stack: error instanceof Error ? error.stack : undefined,
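The reasoning-effort handling in this file boils down to gating a user-supplied effort value through an allow-list and a per-model capability check before attaching it to the run options. A self-contained sketch of that gating (here `supportsReasoningEffort` is a stand-in predicate, not the repo's real implementation):

```typescript
// Allow-list of effort values, mirroring the set used in the left-hand version.
const ALLOWED_EFFORTS = new Set<string>(['minimal', 'low', 'medium', 'high', 'xhigh']);

// Stand-in capability check (assumption for illustration only;
// the repo's supportsReasoningEffort has its own model table).
function supportsReasoningEffort(model: string): boolean {
  return model.includes('codex') || model.startsWith('gpt-5');
}

// Only attach reasoning options when the value is present, not 'none',
// allow-listed, and the model actually supports it.
function buildRunOptions(model: string, effort?: string): { reasoning?: { effort: string } } {
  const runOptions: { reasoning?: { effort: string } } = {};
  if (effort && effort !== 'none' && ALLOWED_EFFORTS.has(effort) && supportsReasoningEffort(model)) {
    runOptions.reasoning = { effort };
  }
  return runOptions;
}
```

The diff's two versions differ mainly in *where* this gate lives (thread options vs. run options), not in the gate itself.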

View File

@@ -30,7 +30,6 @@ import {
   type CopilotRuntimeModel,
 } from '@automaker/types';
 import { createLogger, isAbortError } from '@automaker/utils';
-import { resolveModelString } from '@automaker/model-resolver';
 import { CopilotClient, type PermissionRequest } from '@github/copilot-sdk';
 import {
   normalizeTodos,
@@ -43,7 +42,7 @@ import {
 const logger = createLogger('CopilotProvider');
 // Default bare model (without copilot- prefix) for SDK calls
-const DEFAULT_BARE_MODEL = 'claude-sonnet-4.6';
+const DEFAULT_BARE_MODEL = 'claude-sonnet-4.5';
 // =============================================================================
 // SDK Event Types (from @github/copilot-sdk)
@@ -76,21 +75,20 @@ interface SdkToolExecutionStartEvent extends SdkEvent {
   };
 }
-interface SdkToolExecutionCompleteEvent extends SdkEvent {
-  type: 'tool.execution_complete';
+interface SdkToolExecutionEndEvent extends SdkEvent {
+  type: 'tool.execution_end';
   data: {
+    toolName: string;
     toolCallId: string;
-    success: boolean;
-    result?: {
-      content: string;
-    };
-    error?: {
-      message: string;
-      code?: string;
-    };
+    result?: string;
+    error?: string;
   };
 }
+interface SdkSessionIdleEvent extends SdkEvent {
+  type: 'session.idle';
+}
 interface SdkSessionErrorEvent extends SdkEvent {
   type: 'session.error';
   data: {
@@ -99,16 +97,6 @@ interface SdkSessionErrorEvent extends SdkEvent {
   };
 }
-// =============================================================================
-// Constants
-// =============================================================================
-/**
- * Prefix for error messages in tool results
- * Consistent with GeminiProvider's error formatting
- */
-const TOOL_ERROR_PREFIX = '[ERROR]' as const;
 // =============================================================================
 // Error Codes
 // =============================================================================
@@ -132,12 +120,6 @@ export interface CopilotError extends Error {
   suggestion?: string;
 }
-type CopilotSession = Awaited<ReturnType<CopilotClient['createSession']>>;
-type CopilotSessionOptions = Parameters<CopilotClient['createSession']>[0];
-type ResumableCopilotClient = CopilotClient & {
-  resumeSession?: (sessionId: string, options: CopilotSessionOptions) => Promise<CopilotSession>;
-};
 // =============================================================================
 // Tool Name Normalization
 // =============================================================================
@@ -372,19 +354,12 @@ export class CopilotProvider extends CliProvider {
         };
       }
-      /**
-       * Tool execution completed event
-       * Handles both successful results and errors from tool executions
-       * Error messages optionally include error codes for better debugging
-       */
-      case 'tool.execution_complete': {
-        const toolResultEvent = sdkEvent as SdkToolExecutionCompleteEvent;
-        const error = toolResultEvent.data.error;
-        // Format error message with optional code for better debugging
-        const content = error
-          ? `${TOOL_ERROR_PREFIX} ${error.message}${error.code ? ` (${error.code})` : ''}`
-          : toolResultEvent.data.result?.content || '';
+      case 'tool.execution_end': {
+        const toolResultEvent = sdkEvent as SdkToolExecutionEndEvent;
+        const isError = !!toolResultEvent.data.error;
+        const content = isError
+          ? `[ERROR] ${toolResultEvent.data.error}`
+          : toolResultEvent.data.result || '';
         return {
           type: 'assistant',
@@ -411,14 +386,9 @@ export class CopilotProvider extends CliProvider {
       case 'session.error': {
         const errorEvent = sdkEvent as SdkSessionErrorEvent;
-        const enrichedError =
-          errorEvent.data.message ||
-          (errorEvent.data.code
-            ? `Copilot agent error (code: ${errorEvent.data.code})`
-            : 'Copilot agent error');
         return {
           type: 'error',
-          error: enrichedError,
+          error: errorEvent.data.message || 'Unknown error',
         };
       }
@@ -550,11 +520,7 @@ export class CopilotProvider extends CliProvider {
     }
     const promptText = this.extractPromptText(options);
-    // resolveModelString may return dash-separated canonical names (e.g. "claude-sonnet-4-6"),
-    // but the Copilot SDK expects dot-separated version suffixes (e.g. "claude-sonnet-4.6").
-    // Normalize by converting the last dash-separated numeric pair to dot notation.
-    const resolvedModel = resolveModelString(options.model || DEFAULT_BARE_MODEL);
-    const bareModel = resolvedModel.replace(/-(\d+)-(\d+)$/, '-$1.$2');
+    const bareModel = options.model || DEFAULT_BARE_MODEL;
     const workingDirectory = options.cwd || process.cwd();
     logger.debug(
@@ -592,14 +558,12 @@ export class CopilotProvider extends CliProvider {
       });
     };
-    // Declare session outside try so it's accessible in the catch block for cleanup.
-    let session: CopilotSession | undefined;
     try {
       await client.start();
       logger.debug(`CopilotClient started with cwd: ${workingDirectory}`);
-      const sessionOptions: CopilotSessionOptions = {
+      // Create session with streaming enabled for real-time events
+      const session = await client.createSession({
         model: bareModel,
         streaming: true,
         // AUTONOMOUS MODE: Auto-approve all permission requests.
@@ -612,33 +576,13 @@
           logger.debug(`Permission request: ${request.kind}`);
           return { kind: 'approved' };
         },
-      };
-      // Resume the previous Copilot session when possible; otherwise create a fresh one.
-      const resumableClient = client as ResumableCopilotClient;
-      let sessionResumed = false;
-      if (options.sdkSessionId && typeof resumableClient.resumeSession === 'function') {
-        try {
-          session = await resumableClient.resumeSession(options.sdkSessionId, sessionOptions);
-          sessionResumed = true;
-          logger.debug(`Resumed Copilot session: ${session.sessionId}`);
-        } catch (resumeError) {
-          logger.warn(
-            `Failed to resume Copilot session "${options.sdkSessionId}", creating a new session: ${resumeError}`
-          );
-          session = await client.createSession(sessionOptions);
-        }
-      } else {
-        session = await client.createSession(sessionOptions);
-      }
-      // session is always assigned by this point (both branches above assign it)
-      const activeSession = session!;
-      const sessionId = activeSession.sessionId;
-      logger.debug(`Session ${sessionResumed ? 'resumed' : 'created'}: ${sessionId}`);
+      });
+      const sessionId = session.sessionId;
+      logger.debug(`Session created: ${sessionId}`);
       // Set up event handler to push events to queue
-      activeSession.on((event: SdkEvent) => {
+      session.on((event: SdkEvent) => {
         logger.debug(`SDK event: ${event.type}`);
         if (event.type === 'session.idle') {
@@ -650,13 +594,13 @@
           sessionComplete = true;
           pushEvent(event);
         } else {
-          // Push all other events (tool.execution_start, tool.execution_complete, assistant.message, etc.)
+          // Push all other events (tool.execution_start, tool.execution_end, assistant.message, etc.)
           pushEvent(event);
         }
       });
       // Send the prompt (non-blocking)
-      await activeSession.send({ prompt: promptText });
+      await session.send({ prompt: promptText });
       // Process events as they arrive
       while (!sessionComplete || eventQueue.length > 0) {
@@ -664,7 +608,7 @@
         // Check for errors first (before processing events to avoid race condition)
         if (sessionError) {
-          await activeSession.destroy();
+          await session.destroy();
           await client.stop();
           throw sessionError;
         }
@@ -684,19 +628,11 @@
       }
       // Cleanup
-      await activeSession.destroy();
+      await session.destroy();
       await client.stop();
       logger.debug('CopilotClient stopped successfully');
     } catch (error) {
-      // Ensure session is destroyed and client is stopped on error to prevent leaks.
-      // The session may have been created/resumed before the error occurred.
-      if (session) {
-        try {
-          await session.destroy();
-        } catch (sessionCleanupError) {
-          logger.debug(`Failed to destroy session during cleanup: ${sessionCleanupError}`);
-        }
-      }
+      // Ensure client is stopped on error
       try {
         await client.stop();
       } catch (cleanupError) {
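One piece of the removed code worth exercising in isolation is the model-name normalization: `resolveModelString` can return dash-separated canonical names (`claude-sonnet-4-6`) while the Copilot SDK expects a dot-separated version suffix (`claude-sonnet-4.6`). The regex does exactly that conversion and nothing else:

```typescript
// Convert a trailing dash-separated numeric version pair to dot notation,
// as the removed normalization line did. Names that already use a dot,
// or that have only a single trailing number, pass through unchanged.
function toSdkModelName(model: string): string {
  return model.replace(/-(\d+)-(\d+)$/, '-$1.$2');
}
```

Because the regex is anchored at the end of the string and requires two numeric groups, it cannot mangle names like `gpt-4` or already-normalized names.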

View File

@@ -14,7 +14,6 @@ import { execSync } from 'child_process';
 import * as fs from 'fs';
 import * as path from 'path';
 import * as os from 'os';
-import { findCliInWsl, isWslAvailable } from '@automaker/platform';
 import {
   CliProvider,
   type CliSpawnConfig,
@@ -31,7 +30,7 @@ import type {
 } from './types.js';
 import { validateBareModelId } from '@automaker/types';
 import { validateApiKey } from '../lib/auth-utils.js';
-import { getEffectivePermissions, detectProfile } from '../services/cursor-config-service.js';
+import { getEffectivePermissions } from '../services/cursor-config-service.js';
 import {
   type CursorStreamEvent,
   type CursorSystemEvent,
@@ -69,7 +68,6 @@ interface CursorToolHandler<TArgs = unknown, TResult = unknown> {
  * Registry of Cursor tool handlers
  * Each handler knows how to normalize its specific tool call type
  */
-// eslint-disable-next-line @typescript-eslint/no-explicit-any -- handler registry stores heterogeneous tool type parameters
 const CURSOR_TOOL_HANDLERS: Record<string, CursorToolHandler<any, any>> = {
   readToolCall: {
     name: 'Read',
@@ -288,113 +286,15 @@ export class CursorProvider extends CliProvider {
   getSpawnConfig(): CliSpawnConfig {
     return {
-      windowsStrategy: 'direct',
+      windowsStrategy: 'wsl', // cursor-agent requires WSL on Windows
       commonPaths: {
         linux: [
           path.join(os.homedir(), '.local/bin/cursor-agent'), // Primary symlink location
           '/usr/local/bin/cursor-agent',
         ],
         darwin: [path.join(os.homedir(), '.local/bin/cursor-agent'), '/usr/local/bin/cursor-agent'],
-        win32: [
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'Cursor',
-            'resources',
-            'app',
-            'bin',
-            'cursor-agent.exe'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'Cursor',
-            'resources',
-            'app',
-            'bin',
-            'cursor-agent.cmd'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'Cursor',
-            'resources',
-            'app',
-            'bin',
-            'cursor.exe'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'Cursor',
-            'cursor.exe'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'cursor',
-            'resources',
-            'app',
-            'bin',
-            'cursor-agent.exe'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'cursor',
-            'resources',
-            'app',
-            'bin',
-            'cursor-agent.cmd'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'cursor',
-            'resources',
-            'app',
-            'bin',
-            'cursor.exe'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'Programs',
-            'cursor',
-            'cursor.exe'
-          ),
-          path.join(
-            process.env.APPDATA || path.join(os.homedir(), 'AppData', 'Roaming'),
-            'npm',
-            'cursor-agent.cmd'
-          ),
-          path.join(
-            process.env.APPDATA || path.join(os.homedir(), 'AppData', 'Roaming'),
-            'npm',
-            'cursor.cmd'
-          ),
-          path.join(
-            process.env.APPDATA || path.join(os.homedir(), 'AppData', 'Roaming'),
-            '.npm-global',
-            'bin',
-            'cursor-agent.cmd'
-          ),
-          path.join(
-            process.env.APPDATA || path.join(os.homedir(), 'AppData', 'Roaming'),
-            '.npm-global',
-            'bin',
-            'cursor.cmd'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'pnpm',
-            'cursor-agent.cmd'
-          ),
-          path.join(
-            process.env.LOCALAPPDATA || path.join(os.homedir(), 'AppData', 'Local'),
-            'pnpm',
-            'cursor.cmd'
-          ),
-        ],
+        // Windows paths are not used - we check for WSL installation instead
+        win32: [],
       },
     };
   }
@@ -450,11 +350,6 @@ export class CursorProvider extends CliProvider {
       cliArgs.push('--model', model);
     }
-    // Resume an existing chat when a provider session ID is available
-    if (options.sdkSessionId) {
-      cliArgs.push('--resume', options.sdkSessionId);
-    }
     // Use '-' to indicate reading prompt from stdin
     cliArgs.push('-');
@@ -562,14 +457,10 @@ export class CursorProvider extends CliProvider {
       const resultEvent = cursorEvent as CursorResultEvent;
       if (resultEvent.is_error) {
-        const errorText = resultEvent.error || resultEvent.result || '';
-        const enrichedError =
-          errorText ||
-          `Cursor agent failed (duration: ${resultEvent.duration_ms}ms, subtype: ${resultEvent.subtype}, session: ${resultEvent.session_id ?? 'none'})`;
         return {
           type: 'error',
           session_id: resultEvent.session_id,
-          error: enrichedError,
+          error: resultEvent.error || resultEvent.result || 'Unknown error',
         };
       }
@@ -596,92 +487,6 @@ export class CursorProvider extends CliProvider {
   * 2. Cursor IDE with 'cursor agent' subcommand support
   */
  protected detectCli(): CliDetectionResult {
-    if (process.platform === 'win32') {
-      const findInPath = (command: string): string | null => {
-        try {
-          const result = execSync(`where ${command}`, {
-            encoding: 'utf8',
-            timeout: 5000,
-            stdio: ['pipe', 'pipe', 'pipe'],
-            windowsHide: true,
-          })
-            .trim()
-            .split(/\r?\n/)[0];
-          if (result && fs.existsSync(result)) {
-            return result;
-          }
-        } catch {
-          // Not in PATH
-        }
-        return null;
-      };
-      const isCursorAgentBinary = (cliPath: string) =>
-        cliPath.toLowerCase().includes('cursor-agent');
-      const supportsCursorAgentSubcommand = (cliPath: string) => {
-        try {
-          execSync(`"${cliPath}" agent --version`, {
-            encoding: 'utf8',
-            timeout: 5000,
-            stdio: 'pipe',
-            windowsHide: true,
-          });
-          return true;
-        } catch {
-          return false;
-        }
-      };
-      const pathResult = findInPath('cursor-agent') || findInPath('cursor');
-      if (pathResult) {
-        if (isCursorAgentBinary(pathResult) || supportsCursorAgentSubcommand(pathResult)) {
-          return {
-            cliPath: pathResult,
-            useWsl: false,
-            strategy: pathResult.toLowerCase().endsWith('.cmd') ? 'cmd' : 'direct',
-          };
-        }
-      }
-      const config = this.getSpawnConfig();
-      for (const candidate of config.commonPaths.win32 || []) {
-        const resolved = candidate;
-        if (!fs.existsSync(resolved)) {
-          continue;
-        }
-        if (isCursorAgentBinary(resolved) || supportsCursorAgentSubcommand(resolved)) {
-          return {
-            cliPath: resolved,
-            useWsl: false,
-            strategy: resolved.toLowerCase().endsWith('.cmd') ? 'cmd' : 'direct',
-          };
-        }
-      }
-      const wslLogger = (msg: string) => logger.debug(msg);
-      if (isWslAvailable({ logger: wslLogger })) {
-        const wslResult = findCliInWsl('cursor-agent', { logger: wslLogger });
-        if (wslResult) {
-          logger.debug(
-            `Using cursor-agent via WSL (${wslResult.distribution || 'default'}): ${wslResult.wslPath}`
-          );
-          return {
-            cliPath: 'wsl.exe',
-            useWsl: true,
-            wslCliPath: wslResult.wslPath,
-            wslDistribution: wslResult.distribution,
-            strategy: 'wsl',
-          };
-        }
-      }
-      logger.debug('cursor-agent not found on Windows');
-      return { cliPath: null, useWsl: false, strategy: 'direct' };
-    }
     // First try standard detection (PATH, common paths, WSL)
     const result = super.detectCli();
     if (result.cliPath) {
@@ -690,7 +495,7 @@ export class CursorProvider extends CliProvider {
     // Cursor-specific: Check versions directory for any installed version
     // This handles cases where cursor-agent is installed but not in PATH
-    if (fs.existsSync(CursorProvider.VERSIONS_DIR)) {
+    if (process.platform !== 'win32' && fs.existsSync(CursorProvider.VERSIONS_DIR)) {
       try {
         const versions = fs
           .readdirSync(CursorProvider.VERSIONS_DIR)
@@ -716,31 +521,33 @@ export class CursorProvider extends CliProvider {
     // If cursor-agent not found, try to find 'cursor' IDE and use 'cursor agent' subcommand
     // The Cursor IDE includes the agent as a subcommand: cursor agent
-    const cursorPaths = [
-      '/usr/bin/cursor',
-      '/usr/local/bin/cursor',
-      path.join(os.homedir(), '.local/bin/cursor'),
-      '/opt/cursor/cursor',
-    ];
-    for (const cursorPath of cursorPaths) {
-      if (fs.existsSync(cursorPath)) {
-        // Verify cursor agent subcommand works
-        try {
-          execSync(`"${cursorPath}" agent --version`, {
-            encoding: 'utf8',
-            timeout: 5000,
-            stdio: 'pipe',
-          });
-          logger.debug(`Using cursor agent via Cursor IDE: ${cursorPath}`);
-          // Return cursor path but we'll use 'cursor agent' subcommand
-          return {
-            cliPath: cursorPath,
-            useWsl: false,
-            strategy: 'native',
-          };
-        } catch {
-          // cursor agent subcommand doesn't work, try next path
-        }
-      }
-    }
+    if (process.platform !== 'win32') {
+      const cursorPaths = [
+        '/usr/bin/cursor',
+        '/usr/local/bin/cursor',
+        path.join(os.homedir(), '.local/bin/cursor'),
+        '/opt/cursor/cursor',
+      ];
+      for (const cursorPath of cursorPaths) {
+        if (fs.existsSync(cursorPath)) {
+          // Verify cursor agent subcommand works
+          try {
+            execSync(`"${cursorPath}" agent --version`, {
+              encoding: 'utf8',
+              timeout: 5000,
+              stdio: 'pipe',
+            });
+            logger.debug(`Using cursor agent via Cursor IDE: ${cursorPath}`);
+            // Return cursor path but we'll use 'cursor agent' subcommand
+            return {
+              cliPath: cursorPath,
+              useWsl: false,
+              strategy: 'native',
+            };
+          } catch {
+            // cursor agent subcommand doesn't work, try next path
          }
        }
      }
    }
@@ -843,10 +650,9 @@ export class CursorProvider extends CliProvider {
  async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
    this.ensureCliDetected();
-    // Validate that model doesn't have a provider prefix (except cursor- which should already be stripped)
+    // Validate that model doesn't have a provider prefix
    // AgentService should strip prefixes before passing to providers
-    // Note: Cursor's Gemini models (e.g., "gemini-3-pro") legitimately start with "gemini-"
-    validateBareModelId(options.model, 'CursorProvider', 'cursor');
+    validateBareModelId(options.model, 'CursorProvider');
    if (!this.cliPath) {
      throw this.createError(
@@ -888,12 +694,8 @@ export class CursorProvider extends CliProvider {
    logger.debug(`CursorProvider.executeQuery called with model: "${options.model}"`);
-    // Get effective permissions for this project and detect the active profile
+    // Get effective permissions for this project
    const effectivePermissions = await getEffectivePermissions(options.cwd || process.cwd());
-    const activeProfile = detectProfile(effectivePermissions);
-    logger.debug(
-      `Active permission profile: ${activeProfile ?? 'none'}, permissions: ${JSON.stringify(effectivePermissions)}`
-    );
    // Debug: log raw events when AUTOMAKER_DEBUG_RAW_OUTPUT is enabled
    const debugRawEvents =

View File

@@ -20,11 +20,12 @@ import type {
ProviderMessage, ProviderMessage,
InstallationStatus, InstallationStatus,
ModelDefinition, ModelDefinition,
ContentBlock,
} from './types.js'; } from './types.js';
import { validateBareModelId } from '@automaker/types'; import { validateBareModelId } from '@automaker/types';
import { GEMINI_MODEL_MAP, type GeminiAuthStatus } from '@automaker/types'; import { GEMINI_MODEL_MAP, type GeminiAuthStatus } from '@automaker/types';
import { createLogger, isAbortError } from '@automaker/utils'; import { createLogger, isAbortError } from '@automaker/utils';
import { spawnJSONLProcess, type SubprocessOptions } from '@automaker/platform'; import { spawnJSONLProcess } from '@automaker/platform';
import { normalizeTodos } from './tool-normalization.js'; import { normalizeTodos } from './tool-normalization.js';
// Create logger for this module // Create logger for this module
@@ -263,14 +264,6 @@ export class GeminiProvider extends CliProvider {
// Use explicit approval-mode for clearer semantics // Use explicit approval-mode for clearer semantics
cliArgs.push('--approval-mode', 'yolo'); cliArgs.push('--approval-mode', 'yolo');
// Force headless (non-interactive) mode with --prompt flag.
// The actual prompt content is passed via stdin (see buildSubprocessOptions()),
// but we MUST include -p to trigger headless mode. Without it, Gemini CLI
// starts in interactive mode which adds significant startup overhead
// (interactive REPL setup, extra context loading, etc.).
// Per Gemini CLI docs: stdin content is "appended to" the -p value.
cliArgs.push('--prompt', '');
// Explicitly include the working directory in allowed workspace directories // Explicitly include the working directory in allowed workspace directories
// This ensures Gemini CLI allows file operations in the project directory, // This ensures Gemini CLI allows file operations in the project directory,
// even if it has a different workspace cached from a previous session // even if it has a different workspace cached from a previous session
@@ -278,15 +271,13 @@ export class GeminiProvider extends CliProvider {
cliArgs.push('--include-directories', options.cwd); cliArgs.push('--include-directories', options.cwd);
} }
// Resume an existing Gemini session when one is available
if (options.sdkSessionId) {
cliArgs.push('--resume', options.sdkSessionId);
}
// Note: Gemini CLI doesn't have a --thinking-level flag. // Note: Gemini CLI doesn't have a --thinking-level flag.
// Thinking capabilities are determined by the model selection (e.g., gemini-2.5-pro). // Thinking capabilities are determined by the model selection (e.g., gemini-2.5-pro).
// The model handles thinking internally based on the task complexity. // The model handles thinking internally based on the task complexity.
// The prompt will be passed as the last positional argument
// We'll append it in executeQuery after extracting the text
return cliArgs; return cliArgs;
} }
@@ -381,13 +372,10 @@ export class GeminiProvider extends CliProvider {
const resultEvent = geminiEvent as GeminiResultEvent; const resultEvent = geminiEvent as GeminiResultEvent;
if (resultEvent.status === 'error') { if (resultEvent.status === 'error') {
const enrichedError =
resultEvent.error ||
`Gemini agent failed (duration: ${resultEvent.stats?.duration_ms ?? 'unknown'}ms, session: ${resultEvent.session_id ?? 'none'})`;
return { return {
type: 'error', type: 'error',
session_id: resultEvent.session_id, session_id: resultEvent.session_id,
error: enrichedError, error: resultEvent.error || 'Unknown error',
}; };
} }
@@ -404,12 +392,10 @@ export class GeminiProvider extends CliProvider {
case 'error': { case 'error': {
const errorEvent = geminiEvent as GeminiResultEvent; const errorEvent = geminiEvent as GeminiResultEvent;
const enrichedError =
errorEvent.error || `Gemini agent failed (session: ${errorEvent.session_id ?? 'none'})`;
return { return {
type: 'error', type: 'error',
session_id: errorEvent.session_id, session_id: errorEvent.session_id,
error: enrichedError, error: errorEvent.error || 'Unknown error',
}; };
} }
@@ -423,32 +409,6 @@ export class GeminiProvider extends CliProvider {
// CliProvider Overrides // CliProvider Overrides
// ========================================================================== // ==========================================================================
/**
* Build subprocess options with stdin data for prompt and speed-optimized env vars.
*
* Passes the prompt via stdin instead of --prompt CLI arg to:
* - Avoid shell argument size limits with large prompts (system prompt + context)
* - Avoid shell escaping issues with special characters in prompts
* - Match the pattern used by Cursor, OpenCode, and Codex providers
*
* Also injects environment variables to reduce Gemini CLI startup overhead:
* - GEMINI_TELEMETRY_ENABLED=false: Disables OpenTelemetry collection
*/
protected buildSubprocessOptions(options: ExecuteOptions, cliArgs: string[]): SubprocessOptions {
const subprocessOptions = super.buildSubprocessOptions(options, cliArgs);
// Pass prompt via stdin to avoid shell interpretation of special characters
// and shell argument size limits with large system prompts + context files
subprocessOptions.stdinData = this.extractPromptText(options);
// Disable telemetry to reduce startup overhead
if (subprocessOptions.env) {
subprocessOptions.env['GEMINI_TELEMETRY_ENABLED'] = 'false';
}
return subprocessOptions;
}
/** /**
* Override error mapping for Gemini-specific error codes * Override error mapping for Gemini-specific error codes
*/ */
@@ -546,8 +506,8 @@ export class GeminiProvider extends CliProvider {
async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
this.ensureCliDetected();
- // Validate that model doesn't have a provider prefix (except gemini- which should already be stripped)
- validateBareModelId(options.model, 'GeminiProvider', 'gemini');
+ // Validate that model doesn't have a provider prefix
+ validateBareModelId(options.model, 'GeminiProvider');
if (!this.cliPath) {
throw this.createError(
@@ -558,21 +518,14 @@ export class GeminiProvider extends CliProvider {
);
}
- // Ensure .geminiignore exists in the working directory to prevent Gemini CLI
- // from scanning .git and node_modules directories during startup. This reduces
- // startup time significantly (reported: 35s → 11s) by skipping large directories
- // that Gemini CLI would otherwise traverse for context discovery.
- await this.ensureGeminiIgnore(options.cwd || process.cwd());
- // Embed system prompt into the user prompt so Gemini CLI receives
- // project context (CLAUDE.md, CODE_QUALITY.md, etc.) that would
- // otherwise be silently dropped since Gemini CLI has no --system-prompt flag.
- const effectiveOptions = this.embedSystemPromptIntoPrompt(options);
- // Build CLI args for headless execution.
- const cliArgs = this.buildCliArgs(effectiveOptions);
- const subprocessOptions = this.buildSubprocessOptions(effectiveOptions, cliArgs);
+ // Extract prompt text to pass as positional argument
+ const promptText = this.extractPromptText(options);
+ // Build CLI args and append the prompt as the last positional argument
+ const cliArgs = this.buildCliArgs(options);
+ cliArgs.push(promptText); // Gemini CLI uses positional args for the prompt
+ const subprocessOptions = this.buildSubprocessOptions(options, cliArgs);
let sessionId: string | undefined;
@@ -625,49 +578,6 @@ export class GeminiProvider extends CliProvider {
// Gemini-Specific Methods
// ==========================================================================
/**
* Ensure a .geminiignore file exists in the working directory.
*
* Gemini CLI scans the working directory for context discovery during startup.
* Excluding .git and node_modules dramatically reduces startup time by preventing
* traversal of large directories (reported improvement: 35s → 11s).
*
* Only creates the file if it doesn't already exist to avoid overwriting user config.
*/
private async ensureGeminiIgnore(cwd: string): Promise<void> {
const ignorePath = path.join(cwd, '.geminiignore');
const content = [
'# Auto-generated by Automaker to speed up Gemini CLI startup',
'# Prevents Gemini CLI from scanning large directories during context discovery',
'.git',
'node_modules',
'dist',
'build',
'.next',
'.nuxt',
'coverage',
'.automaker',
'.worktrees',
'.vscode',
'.idea',
'*.lock',
'',
].join('\n');
try {
// Use 'wx' flag for atomic creation - fails if file exists (EEXIST)
await fs.writeFile(ignorePath, content, { encoding: 'utf-8', flag: 'wx' });
logger.debug(`Created .geminiignore at ${ignorePath}`);
} catch (writeError) {
// EEXIST means file already exists - that's fine, preserve user's file
if ((writeError as NodeJS.ErrnoException).code === 'EEXIST') {
logger.debug(`.geminiignore already exists at ${ignorePath}, preserving existing file`);
return;
}
// Non-fatal: startup will just be slower without the ignore file
logger.debug(`Failed to create .geminiignore: ${writeError}`);
}
}
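The atomic create-if-absent pattern above (the `'wx'` flag plus an `EEXIST` check) works anywhere. This sketch mirrors — not reproduces — the provider code, using a temp directory and synchronous fs calls for brevity:

```typescript
import * as fs from 'node:fs';
import * as os from 'node:os';
import * as path from 'node:path';

// Create a file only if it does not already exist ('wx' fails with EEXIST),
// so a user's hand-written ignore file is never overwritten.
function createIfMissing(filePath: string, content: string): boolean {
  try {
    fs.writeFileSync(filePath, content, { encoding: 'utf-8', flag: 'wx' });
    return true; // we created it
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === 'EEXIST') return false; // preserved existing file
    throw err;
  }
}

const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'ignore-demo-'));
const target = path.join(dir, '.geminiignore');
console.log(createIfMissing(target, '.git\nnode_modules\n')); // true
console.log(createIfMissing(target, 'overwrite attempt'));    // false
```

Because the existence check and the write happen in one syscall, there is no check-then-write race between concurrent processes.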
/**
* Create a GeminiError with details
*/

View File

@@ -1,53 +0,0 @@
/**
* Mock Provider - No-op AI provider for E2E and CI testing
*
* When AUTOMAKER_MOCK_AGENT=true, the server uses this provider instead of
* real backends (Claude, Codex, etc.) so tests never call external APIs.
*/
import type { ExecuteOptions } from '@automaker/types';
import { BaseProvider } from './base-provider.js';
import type { ProviderMessage, InstallationStatus, ModelDefinition } from './types.js';
const MOCK_TEXT = 'Mock agent output for testing.';
export class MockProvider extends BaseProvider {
getName(): string {
return 'mock';
}
async *executeQuery(_options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
yield {
type: 'assistant',
message: {
role: 'assistant',
content: [{ type: 'text', text: MOCK_TEXT }],
},
};
yield {
type: 'result',
subtype: 'success',
};
}
async detectInstallation(): Promise<InstallationStatus> {
return {
installed: true,
method: 'sdk',
hasApiKey: true,
authenticated: true,
};
}
getAvailableModels(): ModelDefinition[] {
return [
{
id: 'mock-model',
name: 'Mock Model',
modelString: 'mock-model',
provider: 'mock',
description: 'Mock model for testing',
},
];
}
}

View File

@@ -192,28 +192,6 @@ export interface OpenCodeToolErrorEvent extends OpenCodeBaseEvent {
part?: OpenCodePart & { error: string };
}
/**
* Tool use event - The actual format emitted by OpenCode CLI when a tool is invoked.
* Contains the tool name, call ID, and the complete state (input, output, status).
* Note: OpenCode CLI emits 'tool_use' (not 'tool_call') as the event type.
*/
export interface OpenCodeToolUseEvent extends OpenCodeBaseEvent {
type: 'tool_use';
part: OpenCodePart & {
type: 'tool';
callID?: string;
tool?: string;
state?: {
status?: string;
input?: unknown;
output?: string;
title?: string;
metadata?: unknown;
time?: { start: number; end: number };
};
};
}
/**
* Union type of all OpenCode stream events
*/
@@ -222,7 +200,6 @@ export type OpenCodeStreamEvent =
| OpenCodeStepStartEvent
| OpenCodeStepFinishEvent
| OpenCodeToolCallEvent
- | OpenCodeToolUseEvent
| OpenCodeToolResultEvent
| OpenCodeErrorEvent
| OpenCodeToolErrorEvent;
@@ -334,8 +311,8 @@ export class OpencodeProvider extends CliProvider {
* Arguments built:
* - 'run' subcommand for executing queries
* - '--format', 'json' for JSONL streaming output
- * - '-c', '<cwd>' for working directory (using opencode's -c flag)
* - '--model', '<model>' for model selection (if specified)
- * - '--session', '<id>' for continuing an existing session (if sdkSessionId is set)
*
* The prompt is passed via stdin (piped) to avoid shell escaping issues.
* OpenCode CLI automatically reads from stdin when input is piped.
@@ -349,14 +326,6 @@ export class OpencodeProvider extends CliProvider {
// Add JSON output format for JSONL parsing (not 'stream-json')
args.push('--format', 'json');
// Handle session resumption for conversation continuity.
// The opencode CLI supports `--session <id>` to continue an existing session.
// The sdkSessionId is captured from the sessionID field in previous stream events
// and persisted by AgentService for use in follow-up messages.
if (options.sdkSessionId) {
args.push('--session', options.sdkSessionId);
}
// Handle model selection
// Convert canonical prefix format (opencode-xxx) to CLI slash format (opencode/xxx)
// OpenCode CLI expects provider/model format (e.g., 'opencode/big-model')
@@ -429,225 +398,15 @@ export class OpencodeProvider extends CliProvider {
return subprocessOptions;
}
/**
* Check if an error message indicates a session-not-found condition.
*
* Centralizes the pattern matching for session errors to avoid duplication.
* Strips ANSI escape codes first since opencode CLI uses colored stderr output
* (e.g. "\x1b[91m\x1b[1mError: \x1b[0mSession not found").
*
* IMPORTANT: Patterns must be specific enough to avoid false positives.
* Generic patterns like "notfounderror" or "resource not found" match
* non-session errors (e.g. "ProviderModelNotFoundError") which would
* trigger unnecessary retries that fail identically, producing confusing
* error messages like "OpenCode session could not be created".
*
* @param errorText - Raw error text (may contain ANSI codes)
* @returns true if the error indicates the session was not found
*/
private static isSessionNotFoundError(errorText: string): boolean {
const cleaned = OpencodeProvider.stripAnsiCodes(errorText).toLowerCase();
// Explicit session-related phrases — high confidence
if (
cleaned.includes('session not found') ||
cleaned.includes('session does not exist') ||
cleaned.includes('invalid session') ||
cleaned.includes('session expired') ||
cleaned.includes('no such session')
) {
return true;
}
// Generic "NotFoundError" / "resource not found" are only session errors
// when the message also references a session path or session ID.
// Without this guard, errors like "ProviderModelNotFoundError" or
// "Resource not found: /path/to/config.json" would false-positive.
if (cleaned.includes('notfounderror') || cleaned.includes('resource not found')) {
return cleaned.includes('/session/') || /\bsession\b/.test(cleaned);
}
return false;
}
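A simplified version of this guard (assumed names; the real method checks a few more phrases) shows why the session-path qualifier matters — without it, "ProviderModelNotFoundError" would false-positive:

```typescript
// Strip ANSI color codes so pattern matching sees clean text.
const stripAnsi = (t: string): string => t.replace(/\x1b\[[0-9;]*m/g, '');

// Sketch of the session-error guard described above.
function isSessionNotFoundError(errorText: string): boolean {
  const cleaned = stripAnsi(errorText).toLowerCase();
  // Explicit session phrases — high confidence.
  if (cleaned.includes('session not found') || cleaned.includes('no such session')) {
    return true;
  }
  // Generic not-found errors only count when they reference a session.
  if (cleaned.includes('notfounderror') || cleaned.includes('resource not found')) {
    return cleaned.includes('/session/') || /\bsession\b/.test(cleaned);
  }
  return false;
}

console.log(isSessionNotFoundError('\x1b[91m\x1b[1mError: \x1b[0mSession not found')); // true
console.log(isSessionNotFoundError('ProviderModelNotFoundError: no model "x"'));       // false
console.log(isSessionNotFoundError('NotFoundError: /session/abc123 missing'));         // true
```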
/**
* Strip ANSI escape codes from a string.
*
* The OpenCode CLI uses colored stderr output (e.g. "\x1b[91m\x1b[1mError: \x1b[0m").
* These escape codes render as garbled text like "[91m[1mError: [0m" in the UI
* when passed through as-is. This utility removes them so error messages are
* clean and human-readable.
*/
private static stripAnsiCodes(text: string): string {
return text.replace(/\x1b\[[0-9;]*m/g, '');
}
/**
* Clean a CLI error message for display.
*
* Strips ANSI escape codes AND removes the redundant "Error: " prefix that
* the OpenCode CLI prepends to error messages in its colored stderr output
* (e.g. "\x1b[91m\x1b[1mError: \x1b[0mSession not found" → "Session not found").
*
* Without this, consumers that wrap the message in their own "Error: " prefix
* (like AgentService or AgentExecutor) produce garbled double-prefixed output:
* "Error: Error: Session not found".
*/
private static cleanErrorMessage(text: string): string {
let cleaned = OpencodeProvider.stripAnsiCodes(text).trim();
// Remove leading "Error: " prefix (case-insensitive) if present.
// The CLI formats errors as: \x1b[91m\x1b[1mError: \x1b[0m<actual message>
// After ANSI stripping this becomes: "Error: <actual message>"
cleaned = cleaned.replace(/^Error:\s*/i, '').trim();
return cleaned || text;
}
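The two cleanup helpers can be sketched as plain functions (names mirror the private statics above; this is an illustration, not the module's export surface). Note the double-prefix problem the comment describes:

```typescript
// Remove ANSI color escape sequences (e.g. "\x1b[91m").
const stripAnsiCodes = (t: string): string => t.replace(/\x1b\[[0-9;]*m/g, '');

// Strip ANSI codes, then the CLI's own "Error: " prefix, so consumers that
// wrap the message in their own prefix don't produce "Error: Error: ...".
function cleanErrorMessage(text: string): string {
  let cleaned = stripAnsiCodes(text).trim();
  cleaned = cleaned.replace(/^Error:\s*/i, '').trim();
  return cleaned || text; // fall back to the original if stripping left nothing
}

console.log(cleanErrorMessage('\x1b[91m\x1b[1mError: \x1b[0mSession not found')); // Session not found
console.log(`Error: ${cleanErrorMessage('Error: model unavailable')}`); // single "Error:" prefix
```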
/**
* Execute a query with automatic session resumption fallback.
*
* When a sdkSessionId is provided, the CLI receives `--session <id>`.
* If the session no longer exists on disk the CLI will fail with a
* "NotFoundError" / "Resource not found" / "Session not found" error.
*
* The opencode CLI writes this to **stderr** and exits non-zero.
* `spawnJSONLProcess` collects stderr and **yields** it as
* `{ type: 'error', error: <stderrText> }` — it is NOT thrown.
* After `normalizeEvent`, the error becomes a yielded `ProviderMessage`
* with `type: 'error'`. A simple try/catch therefore cannot intercept it.
*
* This override iterates the parent stream, intercepts yielded error
* messages that match the session-not-found pattern, and retries the
* entire query WITHOUT the `--session` flag so a fresh session is started.
*
* Session-not-found retry is ONLY attempted when `sdkSessionId` is set.
* Without the `--session` flag the CLI always creates a fresh session, so
* retrying without it would be identical to the first attempt and would
* fail the same way — producing a confusing "session could not be created"
* message for what is actually a different error (model not found, auth
* failure, etc.).
*
* All error messages (session or not) are cleaned of ANSI codes and the
* CLI's redundant "Error: " prefix before being yielded to consumers.
*
* After a successful retry, the consumer (AgentService) will receive a new
* session_id from the fresh stream events, which it persists to metadata —
* replacing the stale sdkSessionId and preventing repeated failures.
*/
async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
// When no sdkSessionId is set, there is nothing to "retry without" — just
// stream normally and clean error messages as they pass through.
if (!options.sdkSessionId) {
for await (const msg of super.executeQuery(options)) {
// Clean error messages so consumers don't get ANSI or double "Error:" prefix
if (msg.type === 'error' && msg.error && typeof msg.error === 'string') {
msg.error = OpencodeProvider.cleanErrorMessage(msg.error);
}
yield msg;
}
return;
}
// sdkSessionId IS set — the CLI will receive `--session <id>`.
// If that session no longer exists, intercept the error and retry fresh.
//
// To avoid buffering the entire stream in memory for long-lived sessions,
// we only buffer an initial window of messages until we observe a healthy
// (non-error) message. Once a healthy message is seen, we flush the buffer
// and switch to direct passthrough, while still watching for session errors
// via isSessionNotFoundError on any subsequent error messages.
const buffered: ProviderMessage[] = [];
let sessionError = false;
let seenHealthyMessage = false;
try {
for await (const msg of super.executeQuery(options)) {
if (msg.type === 'error') {
const errorText = msg.error || '';
if (OpencodeProvider.isSessionNotFoundError(errorText)) {
sessionError = true;
opencodeLogger.info(
`OpenCode session error detected (session "${options.sdkSessionId}") ` +
`— retrying without --session to start fresh`
);
break; // stop consuming the failed stream
}
// Non-session error — clean it
if (msg.error && typeof msg.error === 'string') {
msg.error = OpencodeProvider.cleanErrorMessage(msg.error);
}
} else {
// A non-error message is a healthy signal — stop buffering after this
seenHealthyMessage = true;
}
if (seenHealthyMessage && buffered.length > 0) {
// Flush the pre-healthy buffer first, then switch to passthrough
for (const bufferedMsg of buffered) {
yield bufferedMsg;
}
buffered.length = 0;
}
if (seenHealthyMessage) {
// Passthrough mode — yield directly without buffering
yield msg;
} else {
// Still in initial window — buffer until we see a healthy message
buffered.push(msg);
}
}
} catch (error) {
// Also handle thrown exceptions (e.g. from mapError in cli-provider)
const errMsg = error instanceof Error ? error.message : String(error);
if (OpencodeProvider.isSessionNotFoundError(errMsg)) {
sessionError = true;
opencodeLogger.info(
`OpenCode session error detected (thrown, session "${options.sdkSessionId}") ` +
`— retrying without --session to start fresh`
);
} else {
throw error;
}
}
if (sessionError) {
// Retry the entire query without the stale session ID.
const retryOptions = { ...options, sdkSessionId: undefined };
opencodeLogger.info('Retrying OpenCode query without --session flag...');
// Stream the retry directly to the consumer.
// If the retry also fails, it's a genuine error (not session-related)
// and should be surfaced as-is rather than masked with a misleading
// "session could not be created" message.
for await (const retryMsg of super.executeQuery(retryOptions)) {
if (retryMsg.type === 'error' && retryMsg.error && typeof retryMsg.error === 'string') {
retryMsg.error = OpencodeProvider.cleanErrorMessage(retryMsg.error);
}
yield retryMsg;
}
} else if (buffered.length > 0) {
// No session error and still have buffered messages (stream ended before
// any healthy message was observed) — flush them to the consumer
for (const msg of buffered) {
yield msg;
}
}
// If seenHealthyMessage is true, all messages have already been yielded
// directly in passthrough mode — nothing left to flush.
}
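The buffer-until-healthy retry flow condenses into a small generic sketch. `withRetryOnSessionError` and the `run(fresh)` callback are hypothetical stand-ins for `super.executeQuery` with and without `sdkSessionId`; the real method also cleans error text, which is omitted here:

```typescript
type Msg = { type: string; error?: string };

// Buffer messages only until the first healthy (non-error) one; if a session
// error arrives first, replay the whole query with a fresh session instead.
async function* withRetryOnSessionError(
  run: (fresh: boolean) => AsyncGenerator<Msg>,
  isSessionError: (e: string) => boolean
): AsyncGenerator<Msg> {
  const buffered: Msg[] = [];
  let healthy = false;
  for await (const msg of run(false)) {
    if (msg.type === 'error' && isSessionError(msg.error ?? '')) {
      yield* run(true); // stale session — retry once without --session
      return;
    }
    if (msg.type !== 'error') healthy = true;
    if (healthy) {
      yield* buffered.splice(0); // flush the pre-healthy window, then passthrough
      yield msg;
    } else {
      buffered.push(msg);
    }
  }
  yield* buffered; // stream ended before any healthy message
}
```

The key property: memory use is bounded by the initial window, not the full stream, while session errors are still caught before any message escapes to the consumer.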
/**
* Normalize a raw CLI event to ProviderMessage format
*
* Maps OpenCode event types to the standard ProviderMessage structure:
* - text -> type: 'assistant', content with type: 'text'
* - step_start -> null (informational, no message needed)
- * - step_finish with reason 'stop'/'end_turn' -> type: 'result', subtype: 'success'
- * - step_finish with reason 'tool-calls' -> null (intermediate step, not final)
+ * - step_finish with reason 'stop' -> type: 'result', subtype: 'success'
* - step_finish with error -> type: 'error'
- * - tool_use -> type: 'assistant', content with type: 'tool_use' (OpenCode CLI format)
- * - tool_call -> type: 'assistant', content with type: 'tool_use' (legacy format)
+ * - tool_call -> type: 'assistant', content with type: 'tool_use'
* - tool_result -> type: 'assistant', content with type: 'tool_result'
* - error -> type: 'error'
*
@@ -700,7 +459,7 @@ export class OpencodeProvider extends CliProvider {
return {
type: 'error',
session_id: finishEvent.sessionID,
- error: OpencodeProvider.cleanErrorMessage(finishEvent.part.error),
+ error: finishEvent.part.error,
};
}
@@ -709,40 +468,15 @@ export class OpencodeProvider extends CliProvider {
return {
type: 'error',
session_id: finishEvent.sessionID,
- error: OpencodeProvider.cleanErrorMessage('Step execution failed'),
+ error: 'Step execution failed',
};
}
- // Intermediate step completion (reason: 'tool-calls') — the agent loop
- // is continuing because the model requested tool calls. Skip these so
- // consumers don't mistake them for final results.
- if (finishEvent.part?.reason === 'tool-calls') {
- return null;
- }
+ // Successful completion (reason: 'stop' or 'end_turn')
// Only treat an explicit allowlist of reasons as true success.
// Reasons like 'length' (context-window truncation) or 'content-filter'
// indicate the model stopped abnormally and must not be surfaced as
// successful completions.
const SUCCESS_REASONS = new Set(['stop', 'end_turn']);
const reason = finishEvent.part?.reason;
if (reason === undefined || SUCCESS_REASONS.has(reason)) {
// Final completion (reason: 'stop', 'end_turn', or unset)
return {
type: 'result',
subtype: 'success',
session_id: finishEvent.sessionID,
result: (finishEvent.part as OpenCodePart & { result?: string })?.result,
};
}
// Non-success, non-tool-calls reason (e.g. 'length', 'content-filter')
return {
type: 'result',
- subtype: 'error',
+ subtype: 'success',
session_id: finishEvent.sessionID,
- error: `Step finished with non-success reason: ${reason}`,
result: (finishEvent.part as OpenCodePart & { result?: string })?.result,
};
}
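The reason-allowlist logic reduces to a small pure function. This is a sketch with assumed shapes — `session_id` and `result` plumbing are omitted:

```typescript
type Normalized = { type: string; subtype?: string; error?: string } | null;

// Only an explicit allowlist of finish reasons counts as success; 'tool-calls'
// marks an intermediate step and is skipped entirely (null).
function normalizeStepFinish(reason?: string): Normalized {
  if (reason === 'tool-calls') return null;
  const SUCCESS_REASONS = new Set(['stop', 'end_turn']);
  if (reason === undefined || SUCCESS_REASONS.has(reason)) {
    return { type: 'result', subtype: 'success' };
  }
  // e.g. 'length' (context truncation) or 'content-filter' — abnormal stop.
  return {
    type: 'result',
    subtype: 'error',
    error: `Step finished with non-success reason: ${reason}`,
  };
}

console.log(normalizeStepFinish('tool-calls'));          // null
console.log(normalizeStepFinish('stop')?.subtype);       // success
console.log(normalizeStepFinish('length')?.subtype);     // error
```

The allowlist (rather than a denylist) means an unknown future reason fails loudly instead of masquerading as success.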
@@ -750,10 +484,8 @@ export class OpencodeProvider extends CliProvider {
case 'tool_error': {
const toolErrorEvent = openCodeEvent as OpenCodeBaseEvent;
- // Extract error message from part.error and clean ANSI codes
- const errorMessage = OpencodeProvider.cleanErrorMessage(
- toolErrorEvent.part?.error || 'Tool execution failed'
- );
+ // Extract error message from part.error
+ const errorMessage = toolErrorEvent.part?.error || 'Tool execution failed';
return {
type: 'error',
@@ -762,45 +494,6 @@ export class OpencodeProvider extends CliProvider {
};
}
// OpenCode CLI emits 'tool_use' events (not 'tool_call') when the model invokes a tool.
// The event format includes the tool name, call ID, and state with input/output.
// Handle both 'tool_use' (actual CLI format) and 'tool_call' (legacy/alternative) for robustness.
case 'tool_use': {
const toolUseEvent = openCodeEvent as OpenCodeToolUseEvent;
const part = toolUseEvent.part;
// Generate a tool use ID if not provided
const toolUseId = part?.callID || part?.call_id || generateToolUseId();
const toolName = part?.tool || part?.name || 'unknown';
const content: ContentBlock[] = [
{
type: 'tool_use',
name: toolName,
tool_use_id: toolUseId,
input: part?.state?.input || part?.args,
},
];
// If the tool has already completed (state.status === 'completed'), also emit the result
if (part?.state?.status === 'completed' && part?.state?.output) {
content.push({
type: 'tool_result',
tool_use_id: toolUseId,
content: part.state.output,
});
}
return {
type: 'assistant',
session_id: toolUseEvent.sessionID,
message: {
role: 'assistant',
content,
},
};
}
case 'tool_call': {
const toolEvent = openCodeEvent as OpenCodeToolCallEvent;
@@ -867,13 +560,6 @@ export class OpencodeProvider extends CliProvider {
errorMessage = errorEvent.part.error;
}
// Clean error messages: strip ANSI escape codes AND the redundant "Error: "
// prefix the CLI adds. The OpenCode CLI outputs colored stderr like:
// \x1b[91m\x1b[1mError: \x1b[0mSession not found
// Without cleaning, consumers that wrap in their own "Error: " prefix
// produce "Error: Error: Session not found".
errorMessage = OpencodeProvider.cleanErrorMessage(errorMessage);
return {
type: 'error',
session_id: errorEvent.sessionID,
@@ -937,9 +623,9 @@ export class OpencodeProvider extends CliProvider {
default: true,
},
{
- id: 'opencode/glm-5-free',
- name: 'GLM 5 Free',
- modelString: 'opencode/glm-5-free',
+ id: 'opencode/glm-4.7-free',
+ name: 'GLM 4.7 Free',
+ modelString: 'opencode/glm-4.7-free',
provider: 'opencode',
description: 'OpenCode free tier GLM model',
supportsTools: true,
@@ -957,19 +643,19 @@ export class OpencodeProvider extends CliProvider {
tier: 'basic',
},
{
- id: 'opencode/kimi-k2.5-free',
- name: 'Kimi K2.5 Free',
- modelString: 'opencode/kimi-k2.5-free',
+ id: 'opencode/grok-code',
+ name: 'Grok Code (Free)',
+ modelString: 'opencode/grok-code',
provider: 'opencode',
- description: 'OpenCode free tier Kimi model for coding',
+ description: 'OpenCode free tier Grok model for coding',
supportsTools: true,
supportsVision: false,
tier: 'basic',
},
{
- id: 'opencode/minimax-m2.5-free',
- name: 'MiniMax M2.5 Free',
- modelString: 'opencode/minimax-m2.5-free',
+ id: 'opencode/minimax-m2.1-free',
+ name: 'MiniMax M2.1 Free',
+ modelString: 'opencode/minimax-m2.1-free',
provider: 'opencode',
description: 'OpenCode free tier MiniMax model',
supportsTools: true,
@@ -1091,7 +777,7 @@ export class OpencodeProvider extends CliProvider {
*
* OpenCode CLI output format (one model per line):
* opencode/big-pickle
- * opencode/glm-5-free
+ * opencode/glm-4.7-free
* anthropic/claude-3-5-haiku-20241022
* github-copilot/claude-3.5-sonnet
* ...
@@ -1189,26 +875,8 @@ export class OpencodeProvider extends CliProvider {
* Format a display name for a model
*/
private formatModelDisplayName(model: OpenCodeModelInfo): string {
// Extract the last path segment for nested model IDs
// e.g., "arcee-ai/trinity-large-preview:free" → "trinity-large-preview:free"
let rawName = model.name;
if (rawName.includes('/')) {
rawName = rawName.split('/').pop()!;
}
// Strip tier/pricing suffixes like ":free", ":extended"
const colonIdx = rawName.indexOf(':');
let suffix = '';
if (colonIdx !== -1) {
const tierPart = rawName.slice(colonIdx + 1);
if (/^(free|extended|beta|preview)$/i.test(tierPart)) {
suffix = ` (${tierPart.charAt(0).toUpperCase() + tierPart.slice(1)})`;
}
rawName = rawName.slice(0, colonIdx);
}
// Capitalize and format the model name
- const formattedName = rawName
+ const formattedName = model.name
.split('-')
.map((part) => {
// Handle version numbers like "4-5" -> "4.5"
@@ -1236,7 +904,7 @@ export class OpencodeProvider extends CliProvider {
};
const providerDisplay = providerNames[model.provider] || model.provider;
- return `${formattedName}${suffix} (${providerDisplay})`;
+ return `${formattedName} (${providerDisplay})`;
}
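The name-cleanup steps (last path segment, tier-suffix extraction, word capitalization) can be sketched as one standalone function. Version-number handling like "4-5" -> "4.5" and the provider label are omitted here:

```typescript
// Turn a raw model ID like "arcee-ai/trinity-large-preview:free" into a
// human-friendly display name like "Trinity Large Preview (Free)".
function formatDisplayName(name: string): string {
  // Keep only the last path segment for nested model IDs.
  let raw = name.includes('/') ? name.split('/').pop()! : name;
  // Turn a recognized ":tier" suffix into a trailing " (Tier)" label.
  let suffix = '';
  const colonIdx = raw.indexOf(':');
  if (colonIdx !== -1) {
    const tier = raw.slice(colonIdx + 1);
    if (/^(free|extended|beta|preview)$/i.test(tier)) {
      suffix = ` (${tier.charAt(0).toUpperCase()}${tier.slice(1)})`;
    }
    raw = raw.slice(0, colonIdx);
  }
  // Capitalize each hyphen-separated word.
  const formatted = raw
    .split('-')
    .map((p) => p.charAt(0).toUpperCase() + p.slice(1))
    .join(' ');
  return `${formatted}${suffix}`;
}

console.log(formatDisplayName('arcee-ai/trinity-large-preview:free')); // Trinity Large Preview (Free)
console.log(formatDisplayName('big-pickle'));                          // Big Pickle
```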
/**

View File

@@ -67,16 +67,6 @@ export function registerProvider(name: string, registration: ProviderRegistratio
providerRegistry.set(name.toLowerCase(), registration);
}
/** Cached mock provider instance when AUTOMAKER_MOCK_AGENT is set (E2E/CI). */
let mockProviderInstance: BaseProvider | null = null;
function getMockProvider(): BaseProvider {
if (!mockProviderInstance) {
mockProviderInstance = new MockProvider();
}
return mockProviderInstance;
}
export class ProviderFactory {
/**
* Determine which provider to use for a given model
@@ -85,9 +75,6 @@ export class ProviderFactory {
* @returns Provider name (ModelProvider type)
*/
static getProviderNameForModel(model: string): ModelProvider {
if (process.env.AUTOMAKER_MOCK_AGENT === 'true') {
return 'claude' as ModelProvider; // Name only; getProviderForModel returns MockProvider
}
const lowerModel = model.toLowerCase();
// Get all registered providers sorted by priority (descending)
@@ -116,7 +103,7 @@ export class ProviderFactory {
/**
* Get the appropriate provider for a given model ID
*
- * @param modelId Model identifier (e.g., "claude-opus-4-6", "cursor-gpt-4o", "cursor-auto")
+ * @param modelId Model identifier (e.g., "claude-opus-4-5-20251101", "cursor-gpt-4o", "cursor-auto")
* @param options Optional settings
* @param options.throwOnDisconnected Throw error if provider is disconnected (default: true)
* @returns Provider instance for the model
@@ -126,9 +113,6 @@ export class ProviderFactory {
modelId: string,
options: { throwOnDisconnected?: boolean } = {}
): BaseProvider {
if (process.env.AUTOMAKER_MOCK_AGENT === 'true') {
return getMockProvider();
}
const { throwOnDisconnected = true } = options;
const providerName = this.getProviderForModelName(modelId);
@@ -158,9 +142,6 @@ export class ProviderFactory {
* Get the provider name for a given model ID (without creating provider instance)
*/
static getProviderForModelName(modelId: string): string {
if (process.env.AUTOMAKER_MOCK_AGENT === 'true') {
return 'claude';
}
const lowerModel = modelId.toLowerCase();
// Get all registered providers sorted by priority (descending)
@@ -291,7 +272,6 @@ export class ProviderFactory {
// =============================================================================
// Import providers for registration side-effects
import { MockProvider } from './mock-provider.js';
import { ClaudeProvider } from './claude-provider.js';
import { CursorProvider } from './cursor-provider.js';
import { CodexProvider } from './codex-provider.js';

View File

@@ -16,6 +16,8 @@
import { ProviderFactory } from './provider-factory.js';
import type {
+ ProviderMessage,
+ ContentBlock,
ThinkingLevel,
ReasoningEffort,
ClaudeApiProfile,
@@ -94,7 +96,7 @@ export interface StreamingQueryOptions extends SimpleQueryOptions {
/**
* Default model to use when none specified
*/
- const DEFAULT_MODEL = 'claude-sonnet-4-6';
+ const DEFAULT_MODEL = 'claude-sonnet-4-20250514';
/**
* Execute a simple query and return the text result

View File

@@ -16,7 +16,7 @@ export function createHistoryHandler(agentService: AgentService) {
return;
}
- const result = await agentService.getHistory(sessionId);
+ const result = agentService.getHistory(sessionId);
res.json(result);
} catch (error) {
logError(error, 'Get history failed');

View File

@@ -19,7 +19,7 @@ export function createQueueListHandler(agentService: AgentService) {
return;
}
const result = await agentService.getQueue(sessionId);
res.json(result);
} catch (error) {
logError(error, 'List queue failed');

View File

@@ -53,15 +53,7 @@ export function createSendHandler(agentService: AgentService) {
thinkingLevel,
})
.catch((error) => {
- const errorMsg = (error as Error).message || 'Unknown error';
- logger.error(`Background error in sendMessage() for session ${sessionId}:`, errorMsg);
- // Emit error via WebSocket so the UI is notified even though
- // the HTTP response already returned 200. This is critical for
- // session-not-found errors where sendMessage() throws before it
- // can emit its own error event (no in-memory session to emit from).
- agentService.emitSessionError(sessionId, errorMsg);
+ logger.error('Background error in sendMessage():', error);
logError(error, 'Send message failed (background)');
});

View File

@@ -6,7 +6,7 @@ import type { Request, Response } from 'express';
import { AgentService } from '../../../services/agent-service.js';
import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
- const _logger = createLogger('Agent');
+ const logger = createLogger('Agent');
export function createStartHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => {

View File

@@ -128,7 +128,7 @@ export function logAuthStatus(context: string): void {
*/
export function logError(error: unknown, context: string): void {
logger.error(`${context}:`);
- logger.error('Error name:', (error as Error)?.name);
+ logger.error('Error name:', (error as any)?.name);
logger.error('Error message:', (error as Error)?.message);
logger.error('Error stack:', (error as Error)?.stack);
logger.error('Full error object:', JSON.stringify(error, Object.getOwnPropertyNames(error), 2));

View File

@@ -30,7 +30,7 @@ const DEFAULT_MAX_FEATURES = 50;
  * Timeout for Codex models when generating features (5 minutes).
  * Codex models are slower and need more time to generate 50+ features.
  */
-const _CODEX_FEATURE_GENERATION_TIMEOUT_MS = 300000; // 5 minutes
+const CODEX_FEATURE_GENERATION_TIMEOUT_MS = 300000; // 5 minutes

 /**
  * Type for extracted features JSON response
@@ -323,7 +323,7 @@ Your entire response should be valid JSON starting with { and ending with }. No
     }
   }

-  await parseAndCreateFeatures(projectPath, contentForParsing, events, settingsService);
+  await parseAndCreateFeatures(projectPath, contentForParsing, events);

   logger.debug('========== generateFeaturesFromSpec() completed ==========');
 }

View File

@@ -9,16 +9,13 @@ import { createLogger, atomicWriteJson, DEFAULT_BACKUP_COUNT } from '@automaker/
 import { getFeaturesDir } from '@automaker/platform';
 import { extractJsonWithArray } from '../../lib/json-extractor.js';
 import { getNotificationService } from '../../services/notification-service.js';
-import type { SettingsService } from '../../services/settings-service.js';
-import { resolvePhaseModel } from '@automaker/model-resolver';

 const logger = createLogger('SpecRegeneration');

 export async function parseAndCreateFeatures(
   projectPath: string,
   content: string,
-  events: EventEmitter,
-  settingsService?: SettingsService
+  events: EventEmitter
 ): Promise<void> {
   logger.info('========== parseAndCreateFeatures() started ==========');
   logger.info(`Content length: ${content.length} chars`);
@@ -26,37 +23,6 @@ export async function parseAndCreateFeatures(
   logger.info(content);
   logger.info('========== END CONTENT ==========');

-  // Load default model and planning settings from settingsService
-  let defaultModel: string | undefined;
-  let defaultPlanningMode: string = 'skip';
-  let defaultRequirePlanApproval = false;
-  if (settingsService) {
-    try {
-      const globalSettings = await settingsService.getGlobalSettings();
-      const projectSettings = await settingsService.getProjectSettings(projectPath);
-      const defaultModelEntry =
-        projectSettings.defaultFeatureModel ?? globalSettings.defaultFeatureModel;
-      if (defaultModelEntry) {
-        const resolved = resolvePhaseModel(defaultModelEntry);
-        defaultModel = resolved.model;
-      }
-      defaultPlanningMode = globalSettings.defaultPlanningMode ?? 'skip';
-      defaultRequirePlanApproval = globalSettings.defaultRequirePlanApproval ?? false;
-      logger.info(
-        `[parseAndCreateFeatures] Using defaults: model=${defaultModel ?? 'none'}, planningMode=${defaultPlanningMode}, requirePlanApproval=${defaultRequirePlanApproval}`
-      );
-    } catch (settingsError) {
-      logger.warn(
-        '[parseAndCreateFeatures] Failed to load settings, using defaults:',
-        settingsError
-      );
-    }
-  }
-
   try {
     // Extract JSON from response using shared utility
     logger.info('Extracting JSON from response using extractJsonWithArray...');
@@ -95,7 +61,7 @@ export async function parseAndCreateFeatures(
       const featureDir = path.join(featuresDir, feature.id);
       await secureFs.mkdir(featureDir, { recursive: true });

-      const featureData: Record<string, unknown> = {
+      const featureData = {
         id: feature.id,
         category: feature.category || 'Uncategorized',
         title: feature.title,
@@ -104,20 +70,10 @@ export async function parseAndCreateFeatures(
         priority: feature.priority || 2,
         complexity: feature.complexity || 'moderate',
         dependencies: feature.dependencies || [],
-        planningMode: defaultPlanningMode,
-        requirePlanApproval:
-          defaultPlanningMode === 'skip' || defaultPlanningMode === 'lite'
-            ? false
-            : defaultRequirePlanApproval,
         createdAt: new Date().toISOString(),
         updatedAt: new Date().toISOString(),
       };

-      // Apply default model if available from settings
-      if (defaultModel) {
-        featureData.model = defaultModel;
-      }
-
       // Use atomic write with backup support for crash protection
       await atomicWriteJson(path.join(featureDir, 'feature.json'), featureData, {
         backupCount: DEFAULT_BACKUP_COUNT,
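The approval defaulting removed above reduces to a single rule: features under `'skip'` or `'lite'` planning never require plan approval, while other modes fall back to the configured default. A hedged distillation (the mode is a plain string here, since only `'skip'` and `'lite'` are named in the diff; other mode values are illustrative):

```typescript
// Distilled from the removed requirePlanApproval expression in the diff.
// Mode strings other than 'skip'/'lite' are assumptions for illustration.
function effectiveRequirePlanApproval(
  planningMode: string,
  defaultRequirePlanApproval: boolean
): boolean {
  // Lightweight planning modes skip the approval gate entirely;
  // everything else respects the global/project default.
  return planningMode === 'skip' || planningMode === 'lite'
    ? false
    : defaultRequirePlanApproval;
}
```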

View File

@@ -29,6 +29,7 @@ import {
   updateTechnologyStack,
   updateRoadmapPhaseStatus,
   type ImplementedFeature,
+  type RoadmapPhase,
 } from '../../lib/xml-extractor.js';
 import { getNotificationService } from '../../services/notification-service.js';

View File

@@ -21,7 +21,6 @@ import { createFollowUpFeatureHandler } from './routes/follow-up-feature.js';
 import { createCommitFeatureHandler } from './routes/commit-feature.js';
 import { createApprovePlanHandler } from './routes/approve-plan.js';
 import { createResumeInterruptedHandler } from './routes/resume-interrupted.js';
-import { createReconcileHandler } from './routes/reconcile.js';

 /**
  * Create auto-mode routes.
@@ -82,11 +81,6 @@ export function createAutoModeRoutes(autoModeService: AutoModeServiceCompat): Ro
     validatePathParams('projectPath'),
     createResumeInterruptedHandler(autoModeService)
   );
-  router.post(
-    '/reconcile',
-    validatePathParams('projectPath'),
-    createReconcileHandler(autoModeService)
-  );

   return router;
 }

View File

@@ -19,11 +19,10 @@ export function createAnalyzeProjectHandler(autoModeService: AutoModeServiceComp
       return;
     }

-    // Kick off analysis in the background; attach a rejection handler so
-    // unhandled-promise warnings don't surface and errors are at least logged.
-    // Synchronous throws (e.g. "not implemented") still propagate here.
-    const analysisPromise = autoModeService.analyzeProject(projectPath);
-    analysisPromise.catch((err) => logError(err, 'Background analyzeProject failed'));
+    // Start analysis in background
+    autoModeService.analyzeProject(projectPath).catch((error) => {
+      logger.error(`[AutoMode] Project analysis error:`, error);
+    });

     res.json({ success: true, message: 'Project analysis started' });
   } catch (error) {

View File

@@ -17,7 +17,7 @@ export function createApprovePlanHandler(autoModeService: AutoModeServiceCompat)
       approved: boolean;
       editedPlan?: string;
       feedback?: string;
-      projectPath: string;
+      projectPath?: string;
     };

     if (!featureId) {
@@ -36,14 +36,6 @@ export function createApprovePlanHandler(autoModeService: AutoModeServiceCompat)
       return;
     }

-    if (!projectPath) {
-      res.status(400).json({
-        success: false,
-        error: 'projectPath is required',
-      });
-      return;
-    }
-
     // Note: We no longer check hasPendingApproval here because resolvePlanApproval
     // can handle recovery when pending approval is not in Map but feature has planSpec.status='generated'
     // This supports cases where the server restarted while waiting for approval
@@ -56,7 +48,7 @@ export function createApprovePlanHandler(autoModeService: AutoModeServiceCompat)
     // Resolve the pending approval (with recovery support)
     const result = await autoModeService.resolvePlanApproval(
-      projectPath,
+      projectPath || '',
       featureId,
       approved,
       editedPlan,

View File

@@ -1,53 +0,0 @@
-/**
- * Reconcile Feature States Handler
- *
- * On-demand endpoint to reconcile all feature states for a project.
- * Resets features stuck in transient states (in_progress, interrupted, pipeline_*)
- * back to resting states (ready/backlog) and emits events to update the UI.
- *
- * This is useful when:
- * - The UI reconnects after a server restart
- * - A client detects stale feature states
- * - An admin wants to force-reset stuck features
- */
-
-import type { Request, Response } from 'express';
-import { createLogger } from '@automaker/utils';
-import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
-
-const logger = createLogger('ReconcileFeatures');
-
-interface ReconcileRequest {
-  projectPath: string;
-}
-
-export function createReconcileHandler(autoModeService: AutoModeServiceCompat) {
-  return async (req: Request, res: Response): Promise<void> => {
-    const { projectPath } = req.body as ReconcileRequest;
-
-    if (!projectPath) {
-      res.status(400).json({ error: 'Project path is required' });
-      return;
-    }
-
-    logger.info(`Reconciling feature states for ${projectPath}`);
-
-    try {
-      const reconciledCount = await autoModeService.reconcileFeatureStates(projectPath);
-      res.json({
-        success: true,
-        reconciledCount,
-        message:
-          reconciledCount > 0
-            ? `Reconciled ${reconciledCount} feature(s)`
-            : 'No features needed reconciliation',
-      });
-    } catch (error) {
-      logger.error('Error reconciling feature states:', error);
-      res.status(500).json({
-        error: error instanceof Error ? error.message : 'Unknown error',
-      });
-    }
-  };
-}

View File

@@ -26,9 +26,23 @@ export function createRunFeatureHandler(autoModeService: AutoModeServiceCompat)
       return;
     }

-    // Note: No concurrency limit check here. Manual feature starts always run
-    // immediately and bypass the concurrency limit. Their presence IS counted
-    // by the auto-loop coordinator when deciding whether to dispatch new auto-mode tasks.
+    // Check per-worktree capacity before starting
+    const capacity = await autoModeService.checkWorktreeCapacity(projectPath, featureId);
+    if (!capacity.hasCapacity) {
+      const worktreeDesc = capacity.branchName
+        ? `worktree "${capacity.branchName}"`
+        : 'main worktree';
+      res.status(429).json({
+        success: false,
+        error: `Agent limit reached for ${worktreeDesc} (${capacity.currentAgents}/${capacity.maxAgents}). Wait for running tasks to complete or increase the limit.`,
+        details: {
+          currentAgents: capacity.currentAgents,
+          maxAgents: capacity.maxAgents,
+          branchName: capacity.branchName,
+        },
+      });
+      return;
+    }

     // Start execution in background
     // executeFeature derives workDir from feature.branchName
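The capacity gate in this hunk can be sketched as a pure function over the fields the handler reads (a sketch only: the `WorktreeCapacity` shape mirrors the diff, but `capacityError` is an invented name and the real check lives behind `autoModeService.checkWorktreeCapacity`):

```typescript
// Shape mirrors the fields used by the 429 response in the diff.
interface WorktreeCapacity {
  hasCapacity: boolean;
  currentAgents: number;
  maxAgents: number;
  branchName?: string;
}

// Invented helper: returns null when the feature may start, or the
// user-facing error string used in the 429 payload otherwise.
function capacityError(c: WorktreeCapacity): string | null {
  if (c.hasCapacity) return null;
  const worktreeDesc = c.branchName ? `worktree "${c.branchName}"` : 'main worktree';
  return `Agent limit reached for ${worktreeDesc} (${c.currentAgents}/${c.maxAgents}). Wait for running tasks to complete or increase the limit.`;
}
```

Keeping the message construction pure makes the 429 branch trivial to unit-test without an Express request cycle.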

View File

@@ -25,7 +25,7 @@ export function createStatusHandler(autoModeService: AutoModeServiceCompat) {
     // Normalize branchName: undefined becomes null
     const normalizedBranchName = branchName ?? null;

-    const projectStatus = await autoModeService.getStatusForProject(
+    const projectStatus = autoModeService.getStatusForProject(
       projectPath,
       normalizedBranchName
     );

View File

@@ -114,20 +114,9 @@ export function mapBacklogPlanError(rawMessage: string): string {
     return 'Claude CLI could not be launched. Make sure the Claude CLI is installed and available in PATH, or check that Node.js is correctly installed. Try running "which claude" or "claude --version" in your terminal to verify.';
   }

-  // Claude Code process crash - extract exit code for diagnostics
+  // Claude Code process crash
   if (rawMessage.includes('Claude Code process exited')) {
-    const exitCodeMatch = rawMessage.match(/exited with code (\d+)/);
-    const exitCode = exitCodeMatch ? exitCodeMatch[1] : 'unknown';
-    logger.error(`[BacklogPlan] Claude process exit code: ${exitCode}`);
-    return `Claude exited unexpectedly (exit code: ${exitCode}). This is usually a transient issue. Try again. If it keeps happening, re-run \`claude login\` or update your API key in Setup.`;
-  }
-
-  // Claude Code process killed by signal
-  if (rawMessage.includes('Claude Code process terminated by signal')) {
-    const signalMatch = rawMessage.match(/terminated by signal (\w+)/);
-    const signal = signalMatch ? signalMatch[1] : 'unknown';
-    logger.error(`[BacklogPlan] Claude process terminated by signal: ${signal}`);
-    return `Claude was terminated by signal ${signal}. This may indicate a resource issue. Try again.`;
+    return 'Claude exited unexpectedly. Try again. If it keeps happening, re-run `claude login` or update your API key in Setup.';
   }

   // Rate limiting
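The removed diagnostics branch extracts the exit code from the raw CLI message with a regex before building the user-facing string. A minimal sketch of that extraction, with the message format assumed from the diff:

```typescript
// Message format ("...exited with code N") assumed from the removed branch.
function extractExitCode(rawMessage: string): string {
  const match = rawMessage.match(/exited with code (\d+)/);
  return match ? match[1] : 'unknown';
}
```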

View File

@@ -3,22 +3,17 @@
  *
  * Model is configurable via phaseModels.backlogPlanningModel in settings
  * (defaults to Sonnet). Can be overridden per-call via model parameter.
- *
- * Includes automatic retry for transient CLI failures (e.g., "Claude Code
- * process exited unexpectedly") to improve reliability.
  */

 import type { EventEmitter } from '../../lib/events.js';
-import type { Feature, BacklogPlanResult } from '@automaker/types';
+import type { Feature, BacklogPlanResult, BacklogChange, DependencyUpdate } from '@automaker/types';
 import {
   DEFAULT_PHASE_MODELS,
   isCursorModel,
   stripProviderPrefix,
   type ThinkingLevel,
-  type SystemPromptPreset,
 } from '@automaker/types';
 import { resolvePhaseModel } from '@automaker/model-resolver';
-import { getCurrentBranch } from '@automaker/git-utils';
 import { FeatureLoader } from '../../services/feature-loader.js';
 import { ProviderFactory } from '../../providers/provider-factory.js';
 import { extractJsonWithArray } from '../../lib/json-extractor.js';
@@ -32,28 +27,10 @@ import {
 import type { SettingsService } from '../../services/settings-service.js';
 import {
   getAutoLoadClaudeMdSetting,
-  getUseClaudeCodeSystemPromptSetting,
   getPromptCustomization,
   getPhaseModelWithOverrides,
-  getProviderByModelId,
 } from '../../lib/settings-helpers.js';

-/** Maximum number of retry attempts for transient CLI failures */
-const MAX_RETRIES = 2;
-
-/** Delay between retries in milliseconds */
-const RETRY_DELAY_MS = 2000;
-
-/**
- * Check if an error is retryable (transient CLI process failure)
- */
-function isRetryableError(error: unknown): boolean {
-  const message = error instanceof Error ? error.message : String(error);
-  return (
-    message.includes('Claude Code process exited') ||
-    message.includes('Claude Code process terminated by signal')
-  );
-}

 const featureLoader = new FeatureLoader();

 /**
@@ -107,53 +84,6 @@ function parsePlanResponse(response: string): BacklogPlanResult {
   };
 }

-/**
- * Try to parse a valid plan response without fallback behavior.
- * Returns null if parsing fails.
- */
-function tryParsePlanResponse(response: string): BacklogPlanResult | null {
-  if (!response || response.trim().length === 0) {
-    return null;
-  }
-  return extractJsonWithArray<BacklogPlanResult>(response, 'changes', { logger });
-}
-
-/**
- * Choose the most reliable response text between streamed assistant chunks
- * and provider final result payload.
- */
-function selectBestResponseText(accumulatedText: string, providerResultText: string): string {
-  const hasAccumulated = accumulatedText.trim().length > 0;
-  const hasProviderResult = providerResultText.trim().length > 0;
-  if (!hasProviderResult) {
-    return accumulatedText;
-  }
-  if (!hasAccumulated) {
-    return providerResultText;
-  }
-  const accumulatedParsed = tryParsePlanResponse(accumulatedText);
-  const providerParsed = tryParsePlanResponse(providerResultText);
-  if (providerParsed && !accumulatedParsed) {
-    logger.info('[BacklogPlan] Using provider result (parseable JSON)');
-    return providerResultText;
-  }
-  if (accumulatedParsed && !providerParsed) {
-    logger.info('[BacklogPlan] Keeping accumulated text (parseable JSON)');
-    return accumulatedText;
-  }
-  if (providerResultText.length > accumulatedText.length) {
-    logger.info('[BacklogPlan] Using provider result (longer content)');
-    return providerResultText;
-  }
-  logger.info('[BacklogPlan] Keeping accumulated text (longer content)');
-  return accumulatedText;
-}

 /**
  * Generate a backlog modification plan based on user prompt
  */
@@ -163,40 +93,11 @@ export async function generateBacklogPlan(
   events: EventEmitter,
   abortController: AbortController,
   settingsService?: SettingsService,
-  model?: string,
-  branchName?: string
+  model?: string
 ): Promise<BacklogPlanResult> {
   try {
     // Load current features
-    const allFeatures = await featureLoader.getAll(projectPath);
-
-    // Filter features by branch if specified (worktree-scoped backlog)
-    let features: Feature[];
-    if (branchName) {
-      // Determine the primary branch so unassigned features show for the main worktree
-      let primaryBranch: string | null = null;
-      try {
-        primaryBranch = await getCurrentBranch(projectPath);
-      } catch {
-        // If git fails, fall back to 'main' so unassigned features are visible
-        // when branchName matches a common default branch name
-        primaryBranch = 'main';
-      }
-      const isMainBranch = branchName === primaryBranch;
-      features = allFeatures.filter((f) => {
-        if (!f.branchName) {
-          // Unassigned features belong to the main/primary worktree
-          return isMainBranch;
-        }
-        return f.branchName === branchName;
-      });
-      logger.info(
-        `[BacklogPlan] Filtered to ${features.length}/${allFeatures.length} features for branch: ${branchName}`
-      );
-    } else {
-      features = allFeatures;
-    }
+    const features = await featureLoader.getAll(projectPath);

     events.emit('backlog-plan:event', {
       type: 'backlog_plan_progress',
@@ -232,35 +133,6 @@
       effectiveModel = resolved.model;
       thinkingLevel = resolved.thinkingLevel;
       credentials = await settingsService?.getCredentials();

-      // Resolve Claude-compatible provider when client sends a model (e.g. MiniMax, GLM)
-      if (settingsService) {
-        const providerResult = await getProviderByModelId(
-          effectiveModel,
-          settingsService,
-          '[BacklogPlan]'
-        );
-        if (providerResult.provider) {
-          claudeCompatibleProvider = providerResult.provider;
-          if (providerResult.credentials) {
-            credentials = providerResult.credentials;
-          }
-        }
-        // Fallback: use phase settings provider if model lookup found nothing (e.g. model
-        // string format differs from provider's model id, but backlog planning phase has providerId).
-        if (!claudeCompatibleProvider) {
-          const phaseResult = await getPhaseModelWithOverrides(
-            'backlogPlanningModel',
-            settingsService,
-            projectPath,
-            '[BacklogPlan]'
-          );
-          const phaseResolved = resolvePhaseModel(phaseResult.phaseModel);
-          if (phaseResult.provider && phaseResolved.model === effectiveModel) {
-            claudeCompatibleProvider = phaseResult.provider;
-            credentials = phaseResult.credentials ?? credentials;
-          }
-        }
-      }
     } else if (settingsService) {
       // Use settings-based model with provider info
       const phaseResult = await getPhaseModelWithOverrides(
@@ -290,23 +162,17 @@
     // Strip provider prefix - providers expect bare model IDs
     const bareModel = stripProviderPrefix(effectiveModel);

-    // Get autoLoadClaudeMd and useClaudeCodeSystemPrompt settings
+    // Get autoLoadClaudeMd setting
     const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
       projectPath,
       settingsService,
       '[BacklogPlan]'
     );
-    const useClaudeCodeSystemPrompt = await getUseClaudeCodeSystemPromptSetting(
-      projectPath,
-      settingsService,
-      '[BacklogPlan]'
-    );

     // For Cursor models, we need to combine prompts with explicit instructions
     // because Cursor doesn't support systemPrompt separation like Claude SDK
     let finalPrompt = userPrompt;
-    let finalSystemPrompt: string | SystemPromptPreset | undefined = systemPrompt;
-    let finalSettingSources: Array<'user' | 'project' | 'local'> | undefined;
+    let finalSystemPrompt: string | undefined = systemPrompt;

     if (isCursorModel(effectiveModel)) {
       logger.info('[BacklogPlan] Using Cursor model - adding explicit no-file-write instructions');
@@ -321,145 +187,54 @@ CRITICAL INSTRUCTIONS:
 ${userPrompt}`;

       finalSystemPrompt = undefined; // System prompt is now embedded in the user prompt
-    } else if (claudeCompatibleProvider) {
-      // Claude-compatible providers (MiniMax, GLM, etc.) use a plain API; do not use
-      // the claude_code preset (which is for Claude CLI/subprocess and can break the request).
-      finalSystemPrompt = systemPrompt;
-    } else if (useClaudeCodeSystemPrompt) {
-      // Use claude_code preset for native Claude so the SDK subprocess
-      // authenticates via CLI OAuth or API key the same way all other SDK calls do.
-      finalSystemPrompt = {
-        type: 'preset',
-        preset: 'claude_code',
-        append: systemPrompt,
-      };
-    }
-
-    // Include settingSources when autoLoadClaudeMd is enabled
-    if (autoLoadClaudeMd) {
-      finalSettingSources = ['user', 'project'];
     }

-    // Execute the query with retry logic for transient CLI failures
-    const queryOptions = {
+    // Execute the query
+    const stream = provider.executeQuery({
       prompt: finalPrompt,
       model: bareModel,
       cwd: projectPath,
       systemPrompt: finalSystemPrompt,
       maxTurns: 1,
-      tools: [] as string[], // Disable all built-in tools - plan generation only needs text output
+      allowedTools: [], // No tools needed for this
       abortController,
-      settingSources: finalSettingSources,
-      readOnly: true, // Plan generation only generates text, doesn't write files
+      settingSources: autoLoadClaudeMd ? ['user', 'project'] : undefined,
       thinkingLevel, // Pass thinking level for extended thinking
       claudeCompatibleProvider, // Pass provider for alternative endpoint configuration
       credentials, // Pass credentials for resolving 'credentials' apiKeySource
-    };
+    });

     let responseText = '';
-    let bestResponseText = ''; // Preserve best response across all retry attempts
-    let recoveredResult: BacklogPlanResult | null = null;
-    let lastError: unknown = null;
-
-    for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
-      if (abortController.signal.aborted) {
-        throw new Error('Generation aborted');
-      }
-      if (attempt > 0) {
-        logger.info(
-          `[BacklogPlan] Retry attempt ${attempt}/${MAX_RETRIES} after transient failure`
-        );
-        events.emit('backlog-plan:event', {
-          type: 'backlog_plan_progress',
-          content: `Retrying... (attempt ${attempt + 1}/${MAX_RETRIES + 1})`,
-        });
-        await new Promise((resolve) => setTimeout(resolve, RETRY_DELAY_MS));
-      }
-
-      let accumulatedText = '';
-      let providerResultText = '';
-      try {
-        const stream = provider.executeQuery(queryOptions);
-        for await (const msg of stream) {
-          if (abortController.signal.aborted) {
-            throw new Error('Generation aborted');
-          }
-          if (msg.type === 'assistant') {
-            if (msg.message?.content) {
-              for (const block of msg.message.content) {
-                if (block.type === 'text') {
-                  accumulatedText += block.text;
-                }
-              }
-            }
-          } else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
-            providerResultText = msg.result;
-            logger.info(
-              '[BacklogPlan] Received result from provider, length:',
-              providerResultText.length
-            );
-            logger.info('[BacklogPlan] Accumulated response length:', accumulatedText.length);
-          }
-        }
-
-        responseText = selectBestResponseText(accumulatedText, providerResultText);
-
-        // If we got here, the stream completed successfully
-        lastError = null;
-        break;
-      } catch (error) {
-        lastError = error;
-        const errorMessage = error instanceof Error ? error.message : String(error);
-        responseText = selectBestResponseText(accumulatedText, providerResultText);
-        // Preserve the best response text across all attempts so that if a retry
-        // crashes immediately (empty response), we can still recover from an earlier attempt
-        bestResponseText = selectBestResponseText(bestResponseText, responseText);
-        // Claude SDK can occasionally exit non-zero after emitting a complete response.
-        // If we already have valid JSON, recover instead of failing the entire planning flow.
-        if (isRetryableError(error)) {
-          const parsed = tryParsePlanResponse(bestResponseText);
-          if (parsed) {
-            logger.warn(
-              '[BacklogPlan] Recovered from transient CLI exit using accumulated valid response'
-            );
-            recoveredResult = parsed;
-            lastError = null;
-            break;
-          }
-          // On final retryable failure, degrade gracefully if we have text from any attempt.
-          if (attempt >= MAX_RETRIES && bestResponseText.trim().length > 0) {
-            logger.warn(
-              '[BacklogPlan] Final retryable CLI failure with non-empty response, attempting fallback parse'
-            );
-            recoveredResult = parsePlanResponse(bestResponseText);
-            lastError = null;
-            break;
-          }
-        }
-        // Only retry on transient CLI failures, not on user aborts or other errors
-        if (!isRetryableError(error) || attempt >= MAX_RETRIES) {
-          throw error;
-        }
-        logger.warn(
-          `[BacklogPlan] Transient CLI failure (attempt ${attempt + 1}/${MAX_RETRIES + 1}): ${errorMessage}`
-        );
-      }
-    }
-
-    // If we exhausted retries, throw the last error
-    if (lastError) {
-      throw lastError;
-    }
+    for await (const msg of stream) {
+      if (abortController.signal.aborted) {
+        throw new Error('Generation aborted');
+      }
+      if (msg.type === 'assistant') {
+        if (msg.message?.content) {
+          for (const block of msg.message.content) {
+            if (block.type === 'text') {
+              responseText += block.text;
+            }
+          }
+        }
+      } else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
+        // Use result if it's a final accumulated message (from Cursor provider)
+        logger.info('[BacklogPlan] Received result from Cursor, length:', msg.result.length);
+        logger.info('[BacklogPlan] Previous responseText length:', responseText.length);
+        if (msg.result.length > responseText.length) {
+          logger.info('[BacklogPlan] Using Cursor result (longer than accumulated text)');
+          responseText = msg.result;
+        } else {
+          logger.info('[BacklogPlan] Keeping accumulated text (longer than Cursor result)');
+        }
+      }
+    }

     // Parse the response
-    const result = recoveredResult ?? parsePlanResponse(responseText);
+    const result = parsePlanResponse(responseText);

     await saveBacklogPlan(projectPath, {
       savedAt: new Date().toISOString(),
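The `selectBestResponseText` helper in this file's diff picks between streamed assistant text and the provider's final result: prefer whichever parses as JSON, otherwise prefer the longer text. A self-contained sketch of that heuristic, with the real `tryParsePlanResponse` replaced by an injected `parseable` predicate so it can be tested without the extractor:

```typescript
// Sketch of the heuristic from the diff; logging omitted and the JSON
// check injected as a predicate instead of the real tryParsePlanResponse.
function selectBestResponseText(
  accumulated: string,
  providerResult: string,
  parseable: (s: string) => boolean
): string {
  if (providerResult.trim().length === 0) return accumulated;
  if (accumulated.trim().length === 0) return providerResult;
  // Prefer the candidate that is valid JSON when exactly one of them is.
  if (parseable(providerResult) && !parseable(accumulated)) return providerResult;
  if (parseable(accumulated) && !parseable(providerResult)) return accumulated;
  // Tie-break on length: a longer payload is less likely to be truncated.
  return providerResult.length > accumulated.length ? providerResult : accumulated;
}
```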

View File

@@ -25,7 +25,7 @@ export function createBacklogPlanRoutes(
   );
   router.post('/stop', createStopHandler());
   router.get('/status', validatePathParams('projectPath'), createStatusHandler());
-  router.post('/apply', validatePathParams('projectPath'), createApplyHandler(settingsService));
+  router.post('/apply', validatePathParams('projectPath'), createApplyHandler());
   router.post('/clear', validatePathParams('projectPath'), createClearHandler());

   return router;

View File

@@ -3,23 +3,13 @@
  */
 import type { Request, Response } from 'express';
-import { resolvePhaseModel } from '@automaker/model-resolver';
-import type { BacklogPlanResult, PhaseModelEntry, PlanningMode } from '@automaker/types';
+import type { BacklogPlanResult, BacklogChange, Feature } from '@automaker/types';
 import { FeatureLoader } from '../../../services/feature-loader.js';
-import type { SettingsService } from '../../../services/settings-service.js';
 import { clearBacklogPlan, getErrorMessage, logError, logger } from '../common.js';

 const featureLoader = new FeatureLoader();

-function normalizePhaseModelEntry(
-  entry: PhaseModelEntry | string | undefined | null
-): PhaseModelEntry | undefined {
-  if (!entry) return undefined;
-  if (typeof entry === 'string') return { model: entry };
-  return entry;
-}
-
-export function createApplyHandler(settingsService?: SettingsService) {
+export function createApplyHandler() {
   return async (req: Request, res: Response): Promise<void> => {
     try {
       const {
@@ -48,23 +38,6 @@ export function createApplyHandler(settingsService?: SettingsService) {
         return;
       }

-      let defaultPlanningMode: PlanningMode = 'skip';
-      let defaultRequirePlanApproval = false;
-      let defaultModelEntry: PhaseModelEntry | undefined;
-      if (settingsService) {
-        const globalSettings = await settingsService.getGlobalSettings();
-        const projectSettings = await settingsService.getProjectSettings(projectPath);
-        defaultPlanningMode = globalSettings.defaultPlanningMode ?? 'skip';
-        defaultRequirePlanApproval = globalSettings.defaultRequirePlanApproval ?? false;
-        defaultModelEntry = normalizePhaseModelEntry(
-          projectSettings.defaultFeatureModel ?? globalSettings.defaultFeatureModel
-        );
-      }
-      const resolvedDefaultModel = resolvePhaseModel(defaultModelEntry);
-
       const appliedChanges: string[] = [];

       // Load current features for dependency validation
@@ -85,9 +58,6 @@ export function createApplyHandler(settingsService?: SettingsService) {
           if (feature.dependencies?.includes(change.featureId)) {
             const newDeps = feature.dependencies.filter((d) => d !== change.featureId);
             await featureLoader.update(projectPath, feature.id, { dependencies: newDeps });
-            // Mutate the in-memory feature object so subsequent deletions use the updated
-            // dependency list and don't reintroduce already-removed dependency IDs.
-            feature.dependencies = newDeps;
             logger.info(
               `[BacklogPlan] Removed dependency ${change.featureId} from ${feature.id}`
             );
@@ -115,12 +85,6 @@ export function createApplyHandler(settingsService?: SettingsService) {
         if (!change.feature) continue;

         try {
-          const effectivePlanningMode = change.feature.planningMode ?? defaultPlanningMode;
-          const effectiveRequirePlanApproval =
-            effectivePlanningMode === 'skip' || effectivePlanningMode === 'lite'
-              ? false
-              : (change.feature.requirePlanApproval ?? defaultRequirePlanApproval);
-
           // Create the new feature - use the AI-generated ID if provided
           const newFeature = await featureLoader.create(projectPath, {
             id: change.feature.id, // Use descriptive ID from AI if provided
@@ -130,12 +94,6 @@ export function createApplyHandler(settingsService?: SettingsService) {
             dependencies: change.feature.dependencies,
             priority: change.feature.priority,
             status: 'backlog',
-            model: change.feature.model ?? resolvedDefaultModel.model,
-            thinkingLevel: change.feature.thinkingLevel ?? resolvedDefaultModel.thinkingLevel,
-            reasoningEffort: change.feature.reasoningEffort ?? resolvedDefaultModel.reasoningEffort,
-            providerId: change.feature.providerId ?? resolvedDefaultModel.providerId,
-            planningMode: effectivePlanningMode,
-            requirePlanApproval: effectiveRequirePlanApproval,
             branchName,
           });

View File

@@ -17,11 +17,10 @@ import type { SettingsService } from '../../../services/settings-service.js';
 export function createGenerateHandler(events: EventEmitter, settingsService?: SettingsService) {
   return async (req: Request, res: Response): Promise<void> => {
     try {
-      const { projectPath, prompt, model, branchName } = req.body as {
+      const { projectPath, prompt, model } = req.body as {
         projectPath: string;
         prompt: string;
         model?: string;
-        branchName?: string;
       };

       if (!projectPath) {
@@ -43,30 +42,28 @@
         return;
       }

-      const abortController = new AbortController();
-      setRunningState(true, abortController);
+      setRunningState(true);
       setRunningDetails({
         projectPath,
         prompt,
         model,
         startedAt: new Date().toISOString(),
       });

+      const abortController = new AbortController();
+      setRunningState(true, abortController);
+
       // Start generation in background
-      // Note: generateBacklogPlan handles its own error event emission
-      // and state cleanup in its finally block, so we only log here
-      generateBacklogPlan(
-        projectPath,
-        prompt,
-        events,
-        abortController,
-        settingsService,
-        model,
-        branchName
-      ).catch((error) => {
-        // Just log - error event already emitted by generateBacklogPlan
-        logError(error, 'Generate backlog plan failed (background)');
-      });
+      // Note: generateBacklogPlan handles its own error event emission,
+      // so we only log here to avoid duplicate error toasts
+      generateBacklogPlan(projectPath, prompt, events, abortController, settingsService, model)
+        .catch((error) => {
+          // Just log - error event already emitted by generateBacklogPlan
+          logError(error, 'Generate backlog plan failed (background)');
+        })
+        .finally(() => {
+          setRunningState(false, null);
+          setRunningDetails(null);
+        });

       res.json({ success: true });
     } catch (error) {

View File

@@ -142,33 +142,11 @@ function mapDescribeImageError(rawMessage: string | undefined): {
   if (!rawMessage) return baseResponse;

-  if (
-    rawMessage.includes('Claude Code process exited') ||
-    rawMessage.includes('Claude Code process terminated by signal')
-  ) {
-    const exitCodeMatch = rawMessage.match(/exited with code (\d+)/);
-    const signalMatch = rawMessage.match(/terminated by signal (\w+)/);
-    const detail = exitCodeMatch
-      ? ` (exit code: ${exitCodeMatch[1]})`
-      : signalMatch
-        ? ` (signal: ${signalMatch[1]})`
-        : '';
-    // Crash/OS-kill signals suggest a process crash, not an auth failure —
-    // omit auth recovery advice and suggest retry/reporting instead.
-    const crashSignals = ['SIGSEGV', 'SIGABRT', 'SIGKILL', 'SIGBUS', 'SIGTRAP'];
-    const isCrashSignal = signalMatch ? crashSignals.includes(signalMatch[1]) : false;
-    if (isCrashSignal) {
-      return {
-        statusCode: 503,
-        userMessage: `Claude crashed unexpectedly${detail} while describing the image. This may be a transient condition. Please try again. If the problem persists, collect logs and report the issue.`,
-      };
-    }
+  if (rawMessage.includes('Claude Code process exited')) {
     return {
       statusCode: 503,
-      userMessage: `Claude exited unexpectedly${detail} while describing the image. This is usually a transient issue. Try again. If it keeps happening, re-run \`claude login\` or update your API key in Setup.`,
+      userMessage:
+        'Claude exited unexpectedly while describing the image. Try again. If it keeps happening, re-run `claude login` or update your API key in Setup so Claude can restart cleanly.',
     };
   }

View File

@@ -10,23 +10,14 @@ import type { Request, Response } from 'express';
 import { createLogger } from '@automaker/utils';
 import { resolveModelString } from '@automaker/model-resolver';
 import { CLAUDE_MODEL_MAP, type ThinkingLevel } from '@automaker/types';
-import { getAppSpecPath } from '@automaker/platform';
 import { simpleQuery } from '../../../providers/simple-query-service.js';
 import type { SettingsService } from '../../../services/settings-service.js';
 import { getPromptCustomization, getProviderByModelId } from '../../../lib/settings-helpers.js';
-import { FeatureLoader } from '../../../services/feature-loader.js';
-import * as secureFs from '../../../lib/secure-fs.js';
 import {
   buildUserPrompt,
   isValidEnhancementMode,
   type EnhancementMode,
 } from '../../../lib/enhancement-prompts.js';
-import {
-  extractTechnologyStack,
-  extractXmlElements,
-  extractXmlSection,
-  unescapeXml,
-} from '../../../lib/xml-extractor.js';

 const logger = createLogger('EnhancePrompt');

@@ -62,66 +53,6 @@
   error: string;
 }

-async function buildProjectContext(projectPath: string): Promise<string | null> {
-  const contextBlocks: string[] = [];
-
-  try {
-    const appSpecPath = getAppSpecPath(projectPath);
-    const specContent = (await secureFs.readFile(appSpecPath, 'utf-8')) as string;
-    const projectName = extractXmlSection(specContent, 'project_name');
-    const overview = extractXmlSection(specContent, 'overview');
-    const techStack = extractTechnologyStack(specContent);
-    const coreSection = extractXmlSection(specContent, 'core_capabilities');
-    const coreCapabilities = coreSection ? extractXmlElements(coreSection, 'capability') : [];
-
-    const summaryLines: string[] = [];
-    if (projectName) {
-      summaryLines.push(`Name: ${unescapeXml(projectName.trim())}`);
-    }
-    if (overview) {
-      summaryLines.push(`Overview: ${unescapeXml(overview.trim())}`);
-    }
-    if (techStack.length > 0) {
-      summaryLines.push(`Tech Stack: ${techStack.join(', ')}`);
-    }
-    if (coreCapabilities.length > 0) {
-      summaryLines.push(`Core Capabilities: ${coreCapabilities.slice(0, 10).join(', ')}`);
-    }
-    if (summaryLines.length > 0) {
-      contextBlocks.push(`PROJECT CONTEXT:\n${summaryLines.map((line) => `- ${line}`).join('\n')}`);
-    }
-  } catch (error) {
-    logger.debug('No app_spec.txt context available for enhancement', error);
-  }
-
-  try {
-    const featureLoader = new FeatureLoader();
-    const features = await featureLoader.getAll(projectPath);
-    const featureTitles = features
-      .map((feature) => feature.title || feature.name || feature.id)
-      .filter((title) => Boolean(title));
-    if (featureTitles.length > 0) {
-      const listed = featureTitles.slice(0, 30).map((title) => `- ${title}`);
-      contextBlocks.push(
-        `EXISTING FEATURES (avoid duplicates):\n${listed.join('\n')}${
-          featureTitles.length > 30 ? '\n- ...' : ''
-        }`
-      );
-    }
-  } catch (error) {
-    logger.debug('Failed to load existing features for enhancement context', error);
-  }
-
-  if (contextBlocks.length === 0) {
-    return null;
-  }
-  return contextBlocks.join('\n\n');
-}
-
 /**
  * Create the enhance request handler
  *
@@ -191,10 +122,6 @@
       // Build the user prompt with few-shot examples
       const userPrompt = buildUserPrompt(validMode, trimmedText, true);

-      const projectContext = projectPath ? await buildProjectContext(projectPath) : null;
-      if (projectContext) {
-        logger.debug('Including project context in enhancement prompt');
-      }

       // Check if the model is a provider model (like "GLM-4.5-Air")
       // If so, get the provider config and resolved Claude model
@@ -219,21 +146,18 @@
         }
       }

-      // Resolve the model for API call.
-      // CRITICAL: For custom providers (GLM, MiniMax), pass the provider's model ID (e.g. "GLM-4.7")
-      // to the API, NOT the resolved Claude model - otherwise we get "model not found"
-      const modelForApi = claudeCompatibleProvider
-        ? model
-        : providerResolvedModel || resolveModelString(model, CLAUDE_MODEL_MAP.sonnet);
+      // Resolve the model - use provider resolved model, passed model, or default to sonnet
+      const resolvedModel =
+        providerResolvedModel || resolveModelString(model, CLAUDE_MODEL_MAP.sonnet);

-      logger.debug(`Using model: ${modelForApi}`);
+      logger.debug(`Using model: ${resolvedModel}`);

       // Use simpleQuery - provider abstraction handles routing to correct provider
       // The system prompt is combined with user prompt since some providers
       // don't have a separate system prompt concept
       const result = await simpleQuery({
-        prompt: [systemPrompt, projectContext, userPrompt].filter(Boolean).join('\n\n'),
-        model: modelForApi,
+        prompt: `${systemPrompt}\n\n${userPrompt}`,
+        model: resolvedModel,
         cwd: process.cwd(), // Enhancement doesn't need a specific working directory
         maxTurns: 1,
         allowedTools: [],

View File

@@ -19,11 +19,6 @@ import { createAgentOutputHandler, createRawOutputHandler } from './routes/agent
 import { createGenerateTitleHandler } from './routes/generate-title.js';
 import { createExportHandler } from './routes/export.js';
 import { createImportHandler, createConflictCheckHandler } from './routes/import.js';
-import {
-  createOrphanedListHandler,
-  createOrphanedResolveHandler,
-  createOrphanedBulkResolveHandler,
-} from './routes/orphaned.js';

 export function createFeaturesRoutes(
   featureLoader: FeatureLoader,
@@ -38,22 +33,13 @@
     validatePathParams('projectPath'),
     createListHandler(featureLoader, autoModeService)
   );
-  router.get(
-    '/list',
-    validatePathParams('projectPath'),
-    createListHandler(featureLoader, autoModeService)
-  );
   router.post('/get', validatePathParams('projectPath'), createGetHandler(featureLoader));
   router.post(
     '/create',
     validatePathParams('projectPath'),
     createCreateHandler(featureLoader, events)
   );
-  router.post(
-    '/update',
-    validatePathParams('projectPath'),
-    createUpdateHandler(featureLoader, events)
-  );
+  router.post('/update', validatePathParams('projectPath'), createUpdateHandler(featureLoader));
   router.post(
     '/bulk-update',
     validatePathParams('projectPath'),
@@ -75,21 +61,6 @@
     validatePathParams('projectPath'),
     createConflictCheckHandler(featureLoader)
   );
-  router.post(
-    '/orphaned',
-    validatePathParams('projectPath'),
-    createOrphanedListHandler(featureLoader, autoModeService)
-  );
-  router.post(
-    '/orphaned/resolve',
-    validatePathParams('projectPath'),
-    createOrphanedResolveHandler(featureLoader, autoModeService)
-  );
-  router.post(
-    '/orphaned/bulk-resolve',
-    validatePathParams('projectPath'),
-    createOrphanedBulkResolveHandler(featureLoader)
-  );

   return router;
 }

View File

@@ -24,6 +24,19 @@ export function createCreateHandler(featureLoader: FeatureLoader, events?: Event
       return;
     }

+    // Check for duplicate title if title is provided
+    if (feature.title && feature.title.trim()) {
+      const duplicate = await featureLoader.findDuplicateTitle(projectPath, feature.title);
+      if (duplicate) {
+        res.status(409).json({
+          success: false,
+          error: `A feature with title "${feature.title}" already exists`,
+          duplicateFeatureId: duplicate.id,
+        });
+        return;
+      }
+    }
+
     const created = await featureLoader.create(projectPath, feature);

     // Emit feature_created event for hooks

View File

@@ -36,7 +36,7 @@ interface ExportRequest {
   };
 }

-export function createExportHandler(_featureLoader: FeatureLoader) {
+export function createExportHandler(featureLoader: FeatureLoader) {
   const exportService = getFeatureExportService();

   return async (req: Request, res: Response): Promise<void> => {

View File

@@ -34,7 +34,7 @@ export function createGenerateTitleHandler(
 ): (req: Request, res: Response) => Promise<void> {
   return async (req: Request, res: Response): Promise<void> => {
     try {
-      const { description } = req.body as GenerateTitleRequestBody;
+      const { description, projectPath } = req.body as GenerateTitleRequestBody;

       if (!description || typeof description !== 'string') {
         const response: GenerateTitleErrorResponse = {

View File

@@ -33,7 +33,7 @@ interface ConflictInfo {
   hasConflict: boolean;
 }

-export function createImportHandler(_featureLoader: FeatureLoader) {
+export function createImportHandler(featureLoader: FeatureLoader) {
   const exportService = getFeatureExportService();

   return async (req: Request, res: Response): Promise<void> => {

View File

@@ -1,7 +1,5 @@
 /**
- * POST/GET /list endpoint - List all features for a project
- *
- * projectPath may come from req.body (POST) or req.query (GET fallback).
+ * POST /list endpoint - List all features for a project
  *
  * Also performs orphan detection when a project is loaded to identify
  * features whose branches no longer exist. This runs on every project load/switch.
@@ -21,17 +19,7 @@
 ) {
   return async (req: Request, res: Response): Promise<void> => {
     try {
-      const bodyProjectPath =
-        typeof req.body === 'object' && req.body !== null
-          ? (req.body as { projectPath?: unknown }).projectPath
-          : undefined;
-      const queryProjectPath = req.query.projectPath;
-      const projectPath =
-        typeof bodyProjectPath === 'string'
-          ? bodyProjectPath
-          : typeof queryProjectPath === 'string'
-            ? queryProjectPath
-            : undefined;
+      const { projectPath } = req.body as { projectPath: string };

       if (!projectPath) {
         res.status(400).json({ success: false, error: 'projectPath is required' });
@@ -45,23 +33,18 @@
       // We don't await this to keep the list response fast
       // Note: detectOrphanedFeatures handles errors internally and always resolves
       if (autoModeService) {
-        autoModeService
-          .detectOrphanedFeatures(projectPath, features)
-          .then((orphanedFeatures) => {
-            if (orphanedFeatures.length > 0) {
-              logger.info(
-                `[ProjectLoad] Detected ${orphanedFeatures.length} orphaned feature(s) in ${projectPath}`
-              );
-              for (const { feature, missingBranch } of orphanedFeatures) {
-                logger.info(
-                  `[ProjectLoad] Orphaned: ${feature.title || feature.id} - branch "${missingBranch}" no longer exists`
-                );
-              }
-            }
-          })
-          .catch((error) => {
-            logger.warn(`[ProjectLoad] Orphan detection failed for ${projectPath}:`, error);
-          });
+        autoModeService.detectOrphanedFeatures(projectPath).then((orphanedFeatures) => {
+          if (orphanedFeatures.length > 0) {
+            logger.info(
+              `[ProjectLoad] Detected ${orphanedFeatures.length} orphaned feature(s) in ${projectPath}`
+            );
+            for (const { feature, missingBranch } of orphanedFeatures) {
+              logger.info(
+                `[ProjectLoad] Orphaned: ${feature.title || feature.id} - branch "${missingBranch}" no longer exists`
+              );
+            }
+          }
+        });
       }

       res.json({ success: true, features });

View File

@@ -1,287 +0,0 @@
/**
* POST /orphaned endpoint - Detect orphaned features (features with missing branches)
* POST /orphaned/resolve endpoint - Resolve an orphaned feature (delete, create-worktree, or move-to-branch)
* POST /orphaned/bulk-resolve endpoint - Resolve multiple orphaned features at once
*/
import crypto from 'crypto';
import path from 'path';
import type { Request, Response } from 'express';
import { FeatureLoader } from '../../../services/feature-loader.js';
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
import { getErrorMessage, logError } from '../common.js';
import { execGitCommand } from '../../../lib/git.js';
import { deleteWorktreeMetadata } from '../../../lib/worktree-metadata.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('OrphanedFeatures');
type ResolveAction = 'delete' | 'create-worktree' | 'move-to-branch';
const VALID_ACTIONS: ResolveAction[] = ['delete', 'create-worktree', 'move-to-branch'];
export function createOrphanedListHandler(
featureLoader: FeatureLoader,
autoModeService?: AutoModeServiceCompat
) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body as { projectPath: string };
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!autoModeService) {
res.status(500).json({ success: false, error: 'Auto-mode service not available' });
return;
}
const orphanedFeatures = await autoModeService.detectOrphanedFeatures(projectPath);
res.json({ success: true, orphanedFeatures });
} catch (error) {
logError(error, 'Detect orphaned features failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
export function createOrphanedResolveHandler(
featureLoader: FeatureLoader,
_autoModeService?: AutoModeServiceCompat
) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, featureId, action, targetBranch } = req.body as {
projectPath: string;
featureId: string;
action: ResolveAction;
targetBranch?: string | null;
};
if (!projectPath || !featureId || !action) {
res.status(400).json({
success: false,
error: 'projectPath, featureId, and action are required',
});
return;
}
if (!VALID_ACTIONS.includes(action)) {
res.status(400).json({
success: false,
error: `action must be one of: ${VALID_ACTIONS.join(', ')}`,
});
return;
}
const result = await resolveOrphanedFeature(
featureLoader,
projectPath,
featureId,
action,
targetBranch
);
if (!result.success) {
res.status(result.error === 'Feature not found' ? 404 : 500).json(result);
return;
}
res.json(result);
} catch (error) {
logError(error, 'Resolve orphaned feature failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
interface BulkResolveResult {
featureId: string;
success: boolean;
action?: string;
error?: string;
}
async function resolveOrphanedFeature(
featureLoader: FeatureLoader,
projectPath: string,
featureId: string,
action: ResolveAction,
targetBranch?: string | null
): Promise<BulkResolveResult> {
try {
const feature = await featureLoader.get(projectPath, featureId);
if (!feature) {
return { featureId, success: false, error: 'Feature not found' };
}
const missingBranch = feature.branchName;
switch (action) {
case 'delete': {
if (missingBranch) {
try {
await deleteWorktreeMetadata(projectPath, missingBranch);
} catch {
// Non-fatal
}
}
const success = await featureLoader.delete(projectPath, featureId);
if (!success) {
return { featureId, success: false, error: 'Deletion failed' };
}
logger.info(`Deleted orphaned feature ${featureId} (branch: ${missingBranch})`);
return { featureId, success: true, action: 'deleted' };
}
case 'create-worktree': {
if (!missingBranch) {
return { featureId, success: false, error: 'Feature has no branch name to recreate' };
}
const sanitizedName = missingBranch.replace(/[^a-zA-Z0-9_-]/g, '-');
const hash = crypto.createHash('sha1').update(missingBranch).digest('hex').slice(0, 8);
const worktreesDir = path.join(projectPath, '.worktrees');
const worktreePath = path.join(worktreesDir, `${sanitizedName}-${hash}`);
try {
await execGitCommand(['worktree', 'add', '-b', missingBranch, worktreePath], projectPath);
} catch (error) {
const msg = getErrorMessage(error);
if (msg.includes('already exists')) {
try {
await execGitCommand(['worktree', 'add', worktreePath, missingBranch], projectPath);
} catch (innerError) {
return {
featureId,
success: false,
error: `Failed to create worktree: ${getErrorMessage(innerError)}`,
};
}
} else {
return { featureId, success: false, error: `Failed to create worktree: ${msg}` };
}
}
logger.info(
`Created worktree for orphaned feature ${featureId} at ${worktreePath} (branch: ${missingBranch})`
);
return { featureId, success: true, action: 'worktree-created' };
}
case 'move-to-branch': {
// Move the feature to a different branch (or clear branch to use main worktree)
const newBranch = targetBranch || null;
// Validate that the target branch exists if one is specified
if (newBranch) {
try {
await execGitCommand(['rev-parse', '--verify', newBranch], projectPath);
} catch {
return {
featureId,
success: false,
error: `Target branch "${newBranch}" does not exist`,
};
}
}
await featureLoader.update(projectPath, featureId, {
branchName: newBranch,
status: 'pending',
});
// Clean up old worktree metadata
if (missingBranch) {
try {
await deleteWorktreeMetadata(projectPath, missingBranch);
} catch {
// Non-fatal
}
}
const destination = newBranch ?? 'main worktree';
logger.info(
`Moved orphaned feature ${featureId} to ${destination} (was: ${missingBranch})`
);
return { featureId, success: true, action: 'moved' };
}
}
} catch (error) {
return { featureId, success: false, error: getErrorMessage(error) };
}
}
export function createOrphanedBulkResolveHandler(featureLoader: FeatureLoader) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, featureIds, action, targetBranch } = req.body as {
projectPath: string;
featureIds: string[];
action: ResolveAction;
targetBranch?: string | null;
};
if (
!projectPath ||
!featureIds ||
!Array.isArray(featureIds) ||
featureIds.length === 0 ||
!action
) {
res.status(400).json({
success: false,
error: 'projectPath, featureIds (non-empty array), and action are required',
});
return;
}
if (!VALID_ACTIONS.includes(action)) {
res.status(400).json({
success: false,
error: `action must be one of: ${VALID_ACTIONS.join(', ')}`,
});
return;
}
// Process sequentially for worktree creation (git operations shouldn't race),
// in parallel for delete/move-to-branch
const results: BulkResolveResult[] = [];
if (action === 'create-worktree') {
for (const featureId of featureIds) {
const result = await resolveOrphanedFeature(
featureLoader,
projectPath,
featureId,
action,
targetBranch
);
results.push(result);
}
} else {
const batchResults = await Promise.all(
featureIds.map((featureId) =>
resolveOrphanedFeature(featureLoader, projectPath, featureId, action, targetBranch)
)
);
results.push(...batchResults);
}
const successCount = results.filter((r) => r.success).length;
const failedCount = results.length - successCount;
res.json({
success: failedCount === 0,
resolvedCount: successCount,
failedCount,
results,
});
} catch (error) {
logError(error, 'Bulk resolve orphaned features failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -5,7 +5,6 @@
 import type { Request, Response } from 'express';
 import { FeatureLoader } from '../../../services/feature-loader.js';
 import type { Feature, FeatureStatus } from '@automaker/types';
-import type { EventEmitter } from '../../../lib/events.js';
 import { getErrorMessage, logError } from '../common.js';
 import { createLogger } from '@automaker/utils';

@@ -14,7 +13,7 @@ const logger = createLogger('features/update');
 // Statuses that should trigger syncing to app_spec.txt
 const SYNC_TRIGGER_STATUSES: FeatureStatus[] = ['verified', 'completed'];

-export function createUpdateHandler(featureLoader: FeatureLoader, events?: EventEmitter) {
+export function createUpdateHandler(featureLoader: FeatureLoader) {
   return async (req: Request, res: Response): Promise<void> => {
     try {
       const {
@@ -41,13 +40,26 @@
         return;
       }

+      // Check for duplicate title if title is being updated
+      if (updates.title && updates.title.trim()) {
+        const duplicate = await featureLoader.findDuplicateTitle(
+          projectPath,
+          updates.title,
+          featureId // Exclude the current feature from duplicate check
+        );
+        if (duplicate) {
+          res.status(409).json({
+            success: false,
+            error: `A feature with title "${updates.title}" already exists`,
+            duplicateFeatureId: duplicate.id,
+          });
+          return;
+        }
+      }
+
       // Get the current feature to detect status changes
       const currentFeature = await featureLoader.get(projectPath, featureId);
-      if (!currentFeature) {
-        res.status(404).json({ success: false, error: `Feature ${featureId} not found` });
-        return;
-      }
-      const previousStatus = currentFeature.status as FeatureStatus;
+      const previousStatus = currentFeature?.status as FeatureStatus | undefined;
       const newStatus = updates.status as FeatureStatus | undefined;

       const updated = await featureLoader.update(
@@ -59,18 +71,8 @@
         preEnhancementDescription
       );

-      // Emit completion event and sync to app_spec.txt when status transitions to verified/completed
+      // Trigger sync to app_spec.txt when status changes to verified or completed
       if (newStatus && SYNC_TRIGGER_STATUSES.includes(newStatus) && previousStatus !== newStatus) {
-        events?.emit('feature:completed', {
-          featureId,
-          featureName: updated.title,
-          projectPath,
-          passes: true,
-          message:
-            newStatus === 'verified' ? 'Feature verified manually' : 'Feature completed manually',
-          executionMode: 'manual',
-        });
         try {
           const synced = await featureLoader.syncFeatureToAppSpec(projectPath, updated);
           if (synced) {

View File

@@ -19,10 +19,6 @@ import { createBrowseHandler } from './routes/browse.js';
 import { createImageHandler } from './routes/image.js';
 import { createSaveBoardBackgroundHandler } from './routes/save-board-background.js';
 import { createDeleteBoardBackgroundHandler } from './routes/delete-board-background.js';
-import { createBrowseProjectFilesHandler } from './routes/browse-project-files.js';
-import { createCopyHandler } from './routes/copy.js';
-import { createMoveHandler } from './routes/move.js';
-import { createDownloadHandler } from './routes/download.js';

 export function createFsRoutes(_events: EventEmitter): Router {
   const router = Router();
@@ -41,10 +37,6 @@
   router.get('/image', createImageHandler());
   router.post('/save-board-background', createSaveBoardBackgroundHandler());
   router.post('/delete-board-background', createDeleteBoardBackgroundHandler());
-  router.post('/browse-project-files', createBrowseProjectFilesHandler());
-  router.post('/copy', createCopyHandler());
-  router.post('/move', createMoveHandler());
-  router.post('/download', createDownloadHandler());

   return router;
 }


@@ -1,191 +0,0 @@
/**
* POST /browse-project-files endpoint - Browse files and directories within a project
*
* Unlike /browse which only lists directories (for project folder selection),
* this endpoint lists both files and directories relative to a project root.
* Used by the file selector for "Copy files to worktree" settings.
*
* Features:
* - Lists both files and directories
* - Hides .git, .worktrees, node_modules, and other build artifacts
* - Returns entries relative to the project root
* - Supports navigating into subdirectories
* - Security: prevents path traversal outside project root
*/
import type { Request, Response } from 'express';
import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js';
// Directories to hide from the listing (build artifacts, caches, etc.)
const HIDDEN_DIRECTORIES = new Set([
'.git',
'.worktrees',
'node_modules',
'.automaker',
'__pycache__',
'.cache',
'.next',
'.nuxt',
'.svelte-kit',
'.turbo',
'.vercel',
'.output',
'coverage',
'.nyc_output',
'dist',
'build',
'out',
'.tmp',
'tmp',
'.venv',
'venv',
'target',
'vendor',
'.gradle',
'.idea',
'.vscode',
]);
interface ProjectFileEntry {
name: string;
relativePath: string;
isDirectory: boolean;
isFile: boolean;
}
export function createBrowseProjectFilesHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, relativePath } = req.body as {
projectPath: string;
relativePath?: string; // Relative path within the project to browse (empty = project root)
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const resolvedProjectPath = path.resolve(projectPath);
// Determine the target directory to browse
let targetPath = resolvedProjectPath;
let currentRelativePath = '';
if (relativePath) {
// Security: normalize and validate the relative path
const normalized = path.normalize(relativePath);
if (normalized.startsWith('..') || path.isAbsolute(normalized)) {
res.status(400).json({
success: false,
error: 'Invalid relative path - must be within the project directory',
});
return;
}
targetPath = path.join(resolvedProjectPath, normalized);
currentRelativePath = normalized;
// Double-check the resolved path is within the project
// Use a separator-terminated prefix to prevent matching sibling dirs
// that share the same prefix (e.g. /projects/foo vs /projects/foobar).
const resolvedTarget = path.resolve(targetPath);
const projectPrefix = resolvedProjectPath.endsWith(path.sep)
? resolvedProjectPath
: resolvedProjectPath + path.sep;
if (!resolvedTarget.startsWith(projectPrefix) && resolvedTarget !== resolvedProjectPath) {
res.status(400).json({
success: false,
error: 'Path traversal detected',
});
return;
}
}
// Determine parent relative path
let parentRelativePath: string | null = null;
if (currentRelativePath) {
const parent = path.dirname(currentRelativePath);
parentRelativePath = parent === '.' ? '' : parent;
}
try {
const stat = await secureFs.stat(targetPath);
if (!stat.isDirectory()) {
res.status(400).json({ success: false, error: 'Path is not a directory' });
return;
}
// Read directory contents
const dirEntries = await secureFs.readdir(targetPath, { withFileTypes: true });
// Filter and map entries
const entries: ProjectFileEntry[] = dirEntries
.filter((entry) => {
// Skip hidden directories (build artifacts, etc.)
if (entry.isDirectory() && HIDDEN_DIRECTORIES.has(entry.name)) {
return false;
}
// Skip entries starting with . (hidden files) except common config files
// We keep hidden files visible since users often need .env, .eslintrc, etc.
return true;
})
.map((entry) => {
const entryRelativePath = currentRelativePath
? path.posix.join(currentRelativePath.replace(/\\/g, '/'), entry.name)
: entry.name;
return {
name: entry.name,
relativePath: entryRelativePath,
isDirectory: entry.isDirectory(),
isFile: entry.isFile(),
};
})
// Sort: directories first, then files, alphabetically within each group
.sort((a, b) => {
if (a.isDirectory !== b.isDirectory) {
return a.isDirectory ? -1 : 1;
}
return a.name.localeCompare(b.name);
});
res.json({
success: true,
currentRelativePath,
parentRelativePath,
entries,
});
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Failed to read directory';
const isPermissionError = errorMessage.includes('EPERM') || errorMessage.includes('EACCES');
if (isPermissionError) {
res.json({
success: true,
currentRelativePath,
parentRelativePath,
entries: [],
warning: 'Permission denied - unable to read this directory',
});
} else {
res.status(400).json({
success: false,
error: errorMessage,
});
}
}
} catch (error) {
if (error instanceof PathNotAllowedError) {
res.status(403).json({ success: false, error: getErrorMessage(error) });
return;
}
logError(error, 'Browse project files failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
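The double-check above relies on a separator-terminated prefix comparison. That containment test can be sketched as a standalone helper (`isWithinRoot` is a hypothetical name, not part of the handler; behavior shown for POSIX paths):

```typescript
import path from 'path';

// Returns true when `target` resolves to `root` itself or somewhere beneath it.
// Appending path.sep before comparing prevents '/projects/foobar' from being
// treated as a child of '/projects/foo'.
function isWithinRoot(root: string, target: string): boolean {
  const resolvedRoot = path.resolve(root);
  const resolvedTarget = path.resolve(target);
  const prefix = resolvedRoot.endsWith(path.sep) ? resolvedRoot : resolvedRoot + path.sep;
  return resolvedTarget === resolvedRoot || resolvedTarget.startsWith(prefix);
}
```

Because `path.resolve` collapses `..` segments before the prefix check, a `relativePath` that escapes via `foo/../..` fails the test even after the earlier `normalize` guard.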


@@ -1,99 +0,0 @@
/**
* POST /copy endpoint - Copy file or directory to a new location
*/
import type { Request, Response } from 'express';
import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { mkdirSafe } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
/**
* Recursively copy a directory and its contents
*/
async function copyDirectoryRecursive(src: string, dest: string): Promise<void> {
await mkdirSafe(dest);
const entries = await secureFs.readdir(src, { withFileTypes: true });
for (const entry of entries) {
const srcPath = path.join(src, entry.name);
const destPath = path.join(dest, entry.name);
if (entry.isDirectory()) {
await copyDirectoryRecursive(srcPath, destPath);
} else {
await secureFs.copyFile(srcPath, destPath);
}
}
}
export function createCopyHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sourcePath, destinationPath, overwrite } = req.body as {
sourcePath: string;
destinationPath: string;
overwrite?: boolean;
};
if (!sourcePath || !destinationPath) {
res
.status(400)
.json({ success: false, error: 'sourcePath and destinationPath are required' });
return;
}
// Prevent copying a folder into itself or its own descendant (infinite recursion)
const resolvedSrc = path.resolve(sourcePath);
const resolvedDest = path.resolve(destinationPath);
if (resolvedDest === resolvedSrc || resolvedDest.startsWith(resolvedSrc + path.sep)) {
res.status(400).json({
success: false,
error: 'Cannot copy a folder into itself or one of its own descendants',
});
return;
}
// Check if destination already exists
try {
await secureFs.stat(destinationPath);
// Destination exists
if (!overwrite) {
res.status(409).json({
success: false,
error: 'Destination already exists',
exists: true,
});
return;
}
// If overwrite is true, remove the existing destination first to avoid merging
await secureFs.rm(destinationPath, { recursive: true });
} catch {
// Destination doesn't exist - good to proceed
}
// Ensure parent directory exists
await mkdirSafe(path.dirname(path.resolve(destinationPath)));
// Check if source is a directory
const stats = await secureFs.stat(sourcePath);
if (stats.isDirectory()) {
await copyDirectoryRecursive(sourcePath, destinationPath);
} else {
await secureFs.copyFile(sourcePath, destinationPath);
}
res.json({ success: true });
} catch (error) {
if (error instanceof PathNotAllowedError) {
res.status(403).json({ success: false, error: getErrorMessage(error) });
return;
}
logError(error, 'Copy file failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
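The self-copy guard above can be exercised in isolation; a minimal sketch (hypothetical `isSelfOrDescendant` helper mirroring the handler's check):

```typescript
import path from 'path';

// True when dest equals src or lives inside src - copying in that case
// would recurse forever. The path.sep suffix avoids false positives on
// sibling paths that merely share a prefix (/a/b vs /a/bc).
function isSelfOrDescendant(src: string, dest: string): boolean {
  const resolvedSrc = path.resolve(src);
  const resolvedDest = path.resolve(dest);
  return resolvedDest === resolvedSrc || resolvedDest.startsWith(resolvedSrc + path.sep);
}
```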


@@ -1,142 +0,0 @@
/**
* POST /download endpoint - Download a file, or GET /download for streaming
* For folders, creates a zip archive on the fly
*/
import type { Request, Response } from 'express';
import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js';
import { createReadStream } from 'fs';
import { execFile } from 'child_process';
import { promisify } from 'util';
import { tmpdir } from 'os';
const execFileAsync = promisify(execFile);
/**
* Get total size of a directory recursively
*/
async function getDirectorySize(dirPath: string): Promise<number> {
let totalSize = 0;
const entries = await secureFs.readdir(dirPath, { withFileTypes: true });
for (const entry of entries) {
const entryPath = path.join(dirPath, entry.name);
if (entry.isDirectory()) {
totalSize += await getDirectorySize(entryPath);
} else {
const stats = await secureFs.stat(entryPath);
totalSize += Number(stats.size);
}
}
return totalSize;
}
export function createDownloadHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { filePath } = req.body as { filePath: string };
if (!filePath) {
res.status(400).json({ success: false, error: 'filePath is required' });
return;
}
const stats = await secureFs.stat(filePath);
const fileName = path.basename(filePath);
if (stats.isDirectory()) {
// For directories, create a zip archive
const dirSize = await getDirectorySize(filePath);
const MAX_DIR_SIZE = 100 * 1024 * 1024; // 100MB limit
if (dirSize > MAX_DIR_SIZE) {
res.status(413).json({
success: false,
error: `Directory is too large to download (${(dirSize / (1024 * 1024)).toFixed(1)}MB). Maximum size is ${MAX_DIR_SIZE / (1024 * 1024)}MB.`,
size: dirSize,
});
return;
}
// Create a temporary zip file
const zipFileName = `${fileName}.zip`;
const tmpZipPath = path.join(tmpdir(), `automaker-download-${Date.now()}-${zipFileName}`);
try {
// Use system zip command (available on macOS and Linux)
// Use execFile to avoid shell injection via user-provided paths
await execFileAsync('zip', ['-r', tmpZipPath, fileName], {
cwd: path.dirname(filePath),
maxBuffer: 50 * 1024 * 1024,
});
const zipStats = await secureFs.stat(tmpZipPath);
res.setHeader('Content-Type', 'application/zip');
res.setHeader('Content-Disposition', `attachment; filename="${zipFileName}"`);
res.setHeader('Content-Length', zipStats.size.toString());
res.setHeader('X-Directory-Size', dirSize.toString());
const stream = createReadStream(tmpZipPath);
stream.pipe(res);
stream.on('end', async () => {
// Cleanup temp file
try {
await secureFs.rm(tmpZipPath);
} catch {
// Ignore cleanup errors
}
});
stream.on('error', async (err) => {
logError(err, 'Download stream error');
try {
await secureFs.rm(tmpZipPath);
} catch {
// Ignore cleanup errors
}
if (!res.headersSent) {
res.status(500).json({ success: false, error: 'Stream error during download' });
}
});
} catch (zipError) {
// Cleanup on zip failure
try {
await secureFs.rm(tmpZipPath);
} catch {
// Ignore
}
throw zipError;
}
} else {
// For individual files, stream directly
res.setHeader('Content-Type', 'application/octet-stream');
res.setHeader('Content-Disposition', `attachment; filename="${fileName}"`);
res.setHeader('Content-Length', stats.size.toString());
const stream = createReadStream(filePath);
stream.pipe(res);
stream.on('error', (err) => {
logError(err, 'Download stream error');
if (!res.headersSent) {
res.status(500).json({ success: false, error: 'Stream error during download' });
}
});
}
} catch (error) {
if (error instanceof PathNotAllowedError) {
res.status(403).json({ success: false, error: getErrorMessage(error) });
return;
}
logError(error, 'Download failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
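The directory-size gate above can be isolated into a pure helper; a sketch (hypothetical `describeOversize`, reproducing the 100MB cutoff and the 413 message format from the handler):

```typescript
// Mirrors the handler's limit: directories over 100MB are rejected before zipping.
const MAX_DIR_SIZE = 100 * 1024 * 1024;

// Returns null when the directory is small enough, otherwise the 413 error message.
function describeOversize(dirSize: number): string | null {
  if (dirSize <= MAX_DIR_SIZE) return null;
  return `Directory is too large to download (${(dirSize / (1024 * 1024)).toFixed(1)}MB). Maximum size is ${MAX_DIR_SIZE / (1024 * 1024)}MB.`;
}
```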


@@ -35,9 +35,9 @@ export function createMkdirHandler() {
          error: 'Path exists and is not a directory',
        });
        return;
-      } catch (statError: unknown) {
+      } catch (statError: any) {
        // ENOENT means path doesn't exist - we should create it
-        if ((statError as NodeJS.ErrnoException).code !== 'ENOENT') {
+        if (statError.code !== 'ENOENT') {
          // Some other error (could be ELOOP in parent path)
          throw statError;
        }
@@ -47,7 +47,7 @@ export function createMkdirHandler() {
      await secureFs.mkdir(resolvedPath, { recursive: true });
      res.json({ success: true });
-    } catch (error: unknown) {
+    } catch (error: any) {
      // Path not allowed - return 403 Forbidden
      if (error instanceof PathNotAllowedError) {
        res.status(403).json({ success: false, error: getErrorMessage(error) });
@@ -55,7 +55,7 @@ export function createMkdirHandler() {
      }
      // Handle ELOOP specifically
-      if ((error as NodeJS.ErrnoException).code === 'ELOOP') {
+      if (error.code === 'ELOOP') {
        logError(error, 'Create directory failed - symlink loop detected');
        res.status(400).json({
          success: false,


@@ -1,79 +0,0 @@
/**
* POST /move endpoint - Move (rename) file or directory to a new location
*/
import type { Request, Response } from 'express';
import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { mkdirSafe } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
export function createMoveHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sourcePath, destinationPath, overwrite } = req.body as {
sourcePath: string;
destinationPath: string;
overwrite?: boolean;
};
if (!sourcePath || !destinationPath) {
res
.status(400)
.json({ success: false, error: 'sourcePath and destinationPath are required' });
return;
}
// Prevent moving to same location or into its own descendant
const resolvedSrc = path.resolve(sourcePath);
const resolvedDest = path.resolve(destinationPath);
if (resolvedDest === resolvedSrc) {
// No-op: source and destination are the same
res.json({ success: true });
return;
}
if (resolvedDest.startsWith(resolvedSrc + path.sep)) {
res.status(400).json({
success: false,
error: 'Cannot move a folder into one of its own descendants',
});
return;
}
// Check if destination already exists
try {
await secureFs.stat(destinationPath);
// Destination exists
if (!overwrite) {
res.status(409).json({
success: false,
error: 'Destination already exists',
exists: true,
});
return;
}
// If overwrite is true, remove the existing destination first
await secureFs.rm(destinationPath, { recursive: true });
} catch {
// Destination doesn't exist - good to proceed
}
// Ensure parent directory exists
await mkdirSafe(path.dirname(path.resolve(destinationPath)));
// Use rename for the move operation
await secureFs.rename(sourcePath, destinationPath);
res.json({ success: true });
} catch (error) {
if (error instanceof PathNotAllowedError) {
res.status(403).json({ success: false, error: getErrorMessage(error) });
return;
}
logError(error, 'Move file failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}


@@ -3,29 +3,16 @@
 */
import type { Request, Response } from 'express';
-import path from 'path';
import * as secureFs from '../../../lib/secure-fs.js';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js';

// Optional files that are expected to not exist in new projects
// Don't log ENOENT errors for these to reduce noise
-const OPTIONAL_FILES = ['categories.json', 'app_spec.txt', 'context-metadata.json'];
+const OPTIONAL_FILES = ['categories.json', 'app_spec.txt'];

function isOptionalFile(filePath: string): boolean {
-  const basename = path.basename(filePath);
-  if (OPTIONAL_FILES.some((optionalFile) => basename === optionalFile)) {
-    return true;
-  }
-  // Context and memory files may not exist yet during create/delete or test races
-  if (filePath.includes('.automaker/context/') || filePath.includes('.automaker/memory/')) {
-    const name = path.basename(filePath);
-    const lower = name.toLowerCase();
-    if (lower.endsWith('.md') || lower.endsWith('.txt') || lower.endsWith('.markdown')) {
-      return true;
-    }
-  }
-  return false;
+  return OPTIONAL_FILES.some((optionalFile) => filePath.endsWith(optionalFile));
}

function isENOENT(error: unknown): boolean {
@@ -52,14 +39,12 @@ export function createReadHandler() {
        return;
      }
-      const filePath = req.body?.filePath || '';
-      const optionalMissing = isENOENT(error) && isOptionalFile(filePath);
-      if (!optionalMissing) {
+      // Don't log ENOENT errors for optional files (expected to be missing in new projects)
+      const shouldLog = !(isENOENT(error) && isOptionalFile(req.body?.filePath || ''));
+      if (shouldLog) {
        logError(error, 'Read file failed');
      }
-      // Return 404 for missing optional files so clients can handle "not found"
-      const status = optionalMissing ? 404 : 500;
-      res.status(status).json({ success: false, error: getErrorMessage(error) });
+      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
  };
}
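The two `isOptionalFile` variants in this hunk differ in an edge case worth noting: the suffix match also fires for files whose names merely *end with* an optional filename, while the basename match does not. A side-by-side sketch (both helpers are reconstructions of the versions shown above, under hypothetical names):

```typescript
import path from 'path';

const OPTIONAL_FILES = ['categories.json', 'app_spec.txt'];

// Suffix match: any path ending with an optional filename counts.
function isOptionalBySuffix(filePath: string): boolean {
  return OPTIONAL_FILES.some((optionalFile) => filePath.endsWith(optionalFile));
}

// Exact basename match: only a file actually named that way counts.
function isOptionalByBasename(filePath: string): boolean {
  return OPTIONAL_FILES.some((optionalFile) => path.basename(filePath) === optionalFile);
}
```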


@@ -10,11 +10,7 @@ import { getErrorMessage, logError } from '../common.js';
export function createResolveDirectoryHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
-      const {
-        directoryName,
-        sampleFiles,
-        fileCount: _fileCount,
-      } = req.body as {
+      const { directoryName, sampleFiles, fileCount } = req.body as {
        directoryName: string;
        sampleFiles?: string[];
        fileCount?: number;


@@ -11,9 +11,10 @@ import { getBoardDir } from '@automaker/platform';
export function createSaveBoardBackgroundHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
-      const { data, filename, projectPath } = req.body as {
+      const { data, filename, mimeType, projectPath } = req.body as {
        data: string;
        filename: string;
+        mimeType: string;
        projectPath: string;
      };


@@ -7,14 +7,14 @@ import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
import { getErrorMessage, logError } from '../common.js';
import { getImagesDir } from '@automaker/platform';
-import { sanitizeFilename } from '@automaker/utils';

export function createSaveImageHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
-      const { data, filename, projectPath } = req.body as {
+      const { data, filename, mimeType, projectPath } = req.body as {
        data: string;
        filename: string;
+        mimeType: string;
        projectPath: string;
      };
@@ -39,7 +39,7 @@ export function createSaveImageHandler() {
      // Generate unique filename with timestamp
      const timestamp = Date.now();
      const ext = path.extname(filename) || '.png';
-      const baseName = sanitizeFilename(path.basename(filename, ext), 'image');
+      const baseName = path.basename(filename, ext);
      const uniqueFilename = `${baseName}-${timestamp}${ext}`;
      const filePath = path.join(imagesDir, uniqueFilename);


@@ -35,16 +35,6 @@ export function createStatHandler() {
        return;
      }
-      // File or directory does not exist - return 404 so UI can handle missing paths
-      const code =
-        error && typeof error === 'object' && 'code' in error
-          ? (error as { code: string }).code
-          : '';
-      if (code === 'ENOENT') {
-        res.status(404).json({ success: false, error: 'File or directory not found' });
-        return;
-      }
      logError(error, 'Get file stats failed');
      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
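The removed branch above narrows an `unknown` error to read its errno `code` without an unchecked cast. That narrowing pattern can be sketched as a standalone helper (hypothetical `errnoCode`, mirroring the check in the hunk):

```typescript
// Defensive errno extraction: returns the error's `code` property as a string,
// or '' when the value is not an object carrying one.
function errnoCode(error: unknown): string {
  if (typeof error === 'object' && error !== null && 'code' in error) {
    return String((error as { code: unknown }).code);
  }
  return '';
}
```

A plain `Error` has no `code` property, so the helper yields `''` for it rather than throwing, which is what makes it safe inside a catch-all handler.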


@@ -5,7 +5,7 @@
 */
import type { Request, Response } from 'express';
import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
-import { isPathAllowed, getAllowedRootDirectory } from '@automaker/platform';
+import { isPathAllowed, PathNotAllowedError, getAllowedRootDirectory } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js';

export function createValidatePathHandler() {

View File

@@ -24,9 +24,7 @@ export function createWriteHandler() {
      // Ensure parent directory exists (symlink-safe)
      await mkdirSafe(path.dirname(path.resolve(filePath)));
-      // Default content to empty string if undefined/null to prevent writing
-      // "undefined" as literal text (e.g. when content field is missing from request)
-      await secureFs.writeFile(filePath, content ?? '', 'utf-8');
+      await secureFs.writeFile(filePath, content, 'utf-8');
      res.json({ success: true });
    } catch (error) {


@@ -1,66 +0,0 @@
import { Router, Request, Response } from 'express';
import { GeminiProvider } from '../../providers/gemini-provider.js';
import { GeminiUsageService } from '../../services/gemini-usage-service.js';
import { createLogger } from '@automaker/utils';
import type { EventEmitter } from '../../lib/events.js';
const logger = createLogger('Gemini');
export function createGeminiRoutes(
usageService: GeminiUsageService,
_events: EventEmitter
): Router {
const router = Router();
// Get current usage/quota data from Google Cloud API
router.get('/usage', async (_req: Request, res: Response) => {
try {
const usageData = await usageService.fetchUsageData();
res.json(usageData);
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
logger.error('Error fetching Gemini usage:', error);
// Return error in a format the UI expects
res.status(200).json({
authenticated: false,
authMethod: 'none',
usedPercent: 0,
remainingPercent: 100,
lastUpdated: new Date().toISOString(),
error: `Failed to fetch Gemini usage: ${message}`,
});
}
});
// Check if Gemini is available
router.get('/status', async (_req: Request, res: Response) => {
try {
const provider = new GeminiProvider();
const status = await provider.detectInstallation();
// Derive authMethod from typed InstallationStatus fields
const authMethod = status.authenticated
? status.hasApiKey
? 'api_key'
: 'cli_login'
: 'none';
res.json({
success: true,
installed: status.installed,
version: status.version || null,
path: status.path || null,
authenticated: status.authenticated || false,
authMethod,
hasCredentialsFile: false,
});
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
res.status(500).json({ success: false, error: message });
}
});
return router;
}


@@ -6,22 +6,12 @@ import { Router } from 'express';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createDiffsHandler } from './routes/diffs.js';
import { createFileDiffHandler } from './routes/file-diff.js';
-import { createStageFilesHandler } from './routes/stage-files.js';
-import { createDetailsHandler } from './routes/details.js';
-import { createEnhancedStatusHandler } from './routes/enhanced-status.js';

export function createGitRoutes(): Router {
  const router = Router();
  router.post('/diffs', validatePathParams('projectPath'), createDiffsHandler());
  router.post('/file-diff', validatePathParams('projectPath', 'filePath'), createFileDiffHandler());
-  router.post(
-    '/stage-files',
-    validatePathParams('projectPath', 'files[]'),
-    createStageFilesHandler()
-  );
-  router.post('/details', validatePathParams('projectPath', 'filePath?'), createDetailsHandler());
-  router.post('/enhanced-status', validatePathParams('projectPath'), createEnhancedStatusHandler());

  return router;
}


@@ -1,248 +0,0 @@
/**
* POST /details endpoint - Get detailed git info for a file or project
* Returns branch, last commit info, diff stats, and conflict status
*/
import type { Request, Response } from 'express';
import { exec, execFile } from 'child_process';
import { promisify } from 'util';
import * as secureFs from '../../../lib/secure-fs.js';
import { getErrorMessage, logError } from '../common.js';
const execAsync = promisify(exec);
const execFileAsync = promisify(execFile);
interface GitFileDetails {
branch: string;
lastCommitHash: string;
lastCommitMessage: string;
lastCommitAuthor: string;
lastCommitTimestamp: string;
linesAdded: number;
linesRemoved: number;
isConflicted: boolean;
isStaged: boolean;
isUnstaged: boolean;
statusLabel: string;
}
export function createDetailsHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, filePath } = req.body as {
projectPath: string;
filePath?: string;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
try {
// Get current branch
const { stdout: branchRaw } = await execAsync('git rev-parse --abbrev-ref HEAD', {
cwd: projectPath,
});
const branch = branchRaw.trim();
if (!filePath) {
// Project-level details - just return branch info
res.json({
success: true,
details: { branch },
});
return;
}
// Get last commit info for this file
let lastCommitHash = '';
let lastCommitMessage = '';
let lastCommitAuthor = '';
let lastCommitTimestamp = '';
try {
const { stdout: logOutput } = await execFileAsync(
'git',
['log', '-1', '--format=%H|%s|%an|%aI', '--', filePath],
{ cwd: projectPath }
);
if (logOutput.trim()) {
const parts = logOutput.trim().split('|');
lastCommitHash = parts[0] || '';
lastCommitMessage = parts[1] || '';
lastCommitAuthor = parts[2] || '';
lastCommitTimestamp = parts[3] || '';
}
} catch {
// File may not have any commits yet
}
// Get diff stats (lines added/removed)
let linesAdded = 0;
let linesRemoved = 0;
try {
// Check if file is untracked first
const { stdout: statusLine } = await execFileAsync(
'git',
['status', '--porcelain', '--', filePath],
{ cwd: projectPath }
);
if (statusLine.trim().startsWith('??')) {
// Untracked file - count all lines as added using Node.js instead of shell
try {
const fileContent = (await secureFs.readFile(filePath, 'utf-8')).toString();
const lines = fileContent.split('\n');
// Don't count trailing empty line from final newline
linesAdded =
lines.length > 0 && lines[lines.length - 1] === ''
? lines.length - 1
: lines.length;
} catch {
// Ignore
}
} else {
const { stdout: diffStatRaw } = await execFileAsync(
'git',
['diff', '--numstat', 'HEAD', '--', filePath],
{ cwd: projectPath }
);
if (diffStatRaw.trim()) {
const parts = diffStatRaw.trim().split('\t');
linesAdded = parseInt(parts[0], 10) || 0;
linesRemoved = parseInt(parts[1], 10) || 0;
}
// Also check staged diff stats
const { stdout: stagedDiffStatRaw } = await execFileAsync(
'git',
['diff', '--numstat', '--cached', '--', filePath],
{ cwd: projectPath }
);
if (stagedDiffStatRaw.trim()) {
const parts = stagedDiffStatRaw.trim().split('\t');
linesAdded += parseInt(parts[0], 10) || 0;
linesRemoved += parseInt(parts[1], 10) || 0;
}
}
} catch {
// Diff might not be available
}
// Get conflict and staging status
let isConflicted = false;
let isStaged = false;
let isUnstaged = false;
let statusLabel = '';
try {
const { stdout: statusOutput } = await execFileAsync(
'git',
['status', '--porcelain', '--', filePath],
{ cwd: projectPath }
);
if (statusOutput.trim()) {
const indexStatus = statusOutput[0];
const workTreeStatus = statusOutput[1];
// Check for conflicts (both modified, unmerged states)
if (
indexStatus === 'U' ||
workTreeStatus === 'U' ||
(indexStatus === 'A' && workTreeStatus === 'A') ||
(indexStatus === 'D' && workTreeStatus === 'D')
) {
isConflicted = true;
statusLabel = 'Conflicted';
} else {
// Staged changes (index has a status)
if (indexStatus !== ' ' && indexStatus !== '?') {
isStaged = true;
}
// Unstaged changes (work tree has a status)
if (workTreeStatus !== ' ' && workTreeStatus !== '?') {
isUnstaged = true;
}
// Build status label
if (isStaged && isUnstaged) {
statusLabel = 'Staged + Modified';
} else if (isStaged) {
statusLabel = 'Staged';
} else {
const statusChar = workTreeStatus !== ' ' ? workTreeStatus : indexStatus;
switch (statusChar) {
case 'M':
statusLabel = 'Modified';
break;
case 'A':
statusLabel = 'Added';
break;
case 'D':
statusLabel = 'Deleted';
break;
case 'R':
statusLabel = 'Renamed';
break;
case 'C':
statusLabel = 'Copied';
break;
case '?':
statusLabel = 'Untracked';
break;
default:
statusLabel = statusChar || '';
}
}
}
}
} catch {
// Status might not be available
}
const details: GitFileDetails = {
branch,
lastCommitHash,
lastCommitMessage,
lastCommitAuthor,
lastCommitTimestamp,
linesAdded,
linesRemoved,
isConflicted,
isStaged,
isUnstaged,
statusLabel,
};
res.json({ success: true, details });
} catch (innerError) {
logError(innerError, 'Git details failed');
res.json({
success: true,
details: {
branch: '',
lastCommitHash: '',
lastCommitMessage: '',
lastCommitAuthor: '',
lastCommitTimestamp: '',
linesAdded: 0,
linesRemoved: 0,
isConflicted: false,
isStaged: false,
isUnstaged: false,
statusLabel: '',
},
});
}
} catch (error) {
logError(error, 'Get git details failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
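The inline conflict/staging classification above reduces to a pure function over the two columns of `git status --porcelain` output; a sketch (hypothetical `statusLabel`, mirroring the inline checks, where `indexStatus` is column 0 and `workTreeStatus` is column 1):

```typescript
// Maps porcelain index/worktree status characters to the human-readable
// labels the handler produces (Conflicted, Staged, Modified, ...).
function statusLabel(indexStatus: string, workTreeStatus: string): string {
  const isConflicted =
    indexStatus === 'U' ||
    workTreeStatus === 'U' ||
    (indexStatus === 'A' && workTreeStatus === 'A') ||
    (indexStatus === 'D' && workTreeStatus === 'D');
  if (isConflicted) return 'Conflicted';
  const isStaged = indexStatus !== ' ' && indexStatus !== '?';
  const isUnstaged = workTreeStatus !== ' ' && workTreeStatus !== '?';
  if (isStaged && isUnstaged) return 'Staged + Modified';
  if (isStaged) return 'Staged';
  const statusChar = workTreeStatus !== ' ' ? workTreeStatus : indexStatus;
  switch (statusChar) {
    case 'M':
      return 'Modified';
    case 'A':
      return 'Added';
    case 'D':
      return 'Deleted';
    case 'R':
      return 'Renamed';
    case 'C':
      return 'Copied';
    case '?':
      return 'Untracked';
    default:
      return statusChar || '';
  }
}
```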


@@ -23,7 +23,6 @@ export function createDiffsHandler() {
        diff: result.diff,
        files: result.files,
        hasChanges: result.hasChanges,
-        ...(result.mergeState ? { mergeState: result.mergeState } : {}),
      });
    } catch (innerError) {
      logError(innerError, 'Git diff failed');

View File

@@ -1,176 +0,0 @@
/**
 * POST /enhanced-status endpoint - Get enhanced git status with diff stats per file
 * Returns per-file status with lines added/removed and staged/unstaged differentiation
 */
import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
import { getErrorMessage, logError } from '../common.js';

const execAsync = promisify(exec);

interface EnhancedFileStatus {
  path: string;
  indexStatus: string;
  workTreeStatus: string;
  isConflicted: boolean;
  isStaged: boolean;
  isUnstaged: boolean;
  linesAdded: number;
  linesRemoved: number;
  statusLabel: string;
}

function getStatusLabel(indexStatus: string, workTreeStatus: string): string {
  // Check for conflicts
  if (
    indexStatus === 'U' ||
    workTreeStatus === 'U' ||
    (indexStatus === 'A' && workTreeStatus === 'A') ||
    (indexStatus === 'D' && workTreeStatus === 'D')
  ) {
    return 'Conflicted';
  }

  const hasStaged = indexStatus !== ' ' && indexStatus !== '?';
  const hasUnstaged = workTreeStatus !== ' ' && workTreeStatus !== '?';

  if (hasStaged && hasUnstaged) return 'Staged + Modified';
  if (hasStaged) return 'Staged';

  const statusChar = workTreeStatus !== ' ' ? workTreeStatus : indexStatus;
  switch (statusChar) {
    case 'M':
      return 'Modified';
    case 'A':
      return 'Added';
    case 'D':
      return 'Deleted';
    case 'R':
      return 'Renamed';
    case 'C':
      return 'Copied';
    case '?':
      return 'Untracked';
    default:
      return statusChar || '';
  }
}

export function createEnhancedStatusHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const { projectPath } = req.body as { projectPath: string };
      if (!projectPath) {
        res.status(400).json({ success: false, error: 'projectPath required' });
        return;
      }

      try {
        // Get current branch
        const { stdout: branchRaw } = await execAsync('git rev-parse --abbrev-ref HEAD', {
          cwd: projectPath,
        });
        const branch = branchRaw.trim();

        // Get porcelain status for all files
        const { stdout: statusOutput } = await execAsync('git status --porcelain', {
          cwd: projectPath,
        });

        // Get diff numstat for working tree changes
        let workTreeStats: Record<string, { added: number; removed: number }> = {};
        try {
          const { stdout: numstatRaw } = await execAsync('git diff --numstat', {
            cwd: projectPath,
            maxBuffer: 10 * 1024 * 1024,
          });
          for (const line of numstatRaw.trim().split('\n').filter(Boolean)) {
            const parts = line.split('\t');
            if (parts.length >= 3) {
              const added = parseInt(parts[0], 10) || 0;
              const removed = parseInt(parts[1], 10) || 0;
              workTreeStats[parts[2]] = { added, removed };
            }
          }
        } catch {
          // Ignore
        }

        // Get diff numstat for staged changes
        let stagedStats: Record<string, { added: number; removed: number }> = {};
        try {
          const { stdout: stagedNumstatRaw } = await execAsync('git diff --numstat --cached', {
            cwd: projectPath,
            maxBuffer: 10 * 1024 * 1024,
          });
          for (const line of stagedNumstatRaw.trim().split('\n').filter(Boolean)) {
            const parts = line.split('\t');
            if (parts.length >= 3) {
              const added = parseInt(parts[0], 10) || 0;
              const removed = parseInt(parts[1], 10) || 0;
              stagedStats[parts[2]] = { added, removed };
            }
          }
        } catch {
          // Ignore
        }

        // Parse status and build enhanced file list
        const files: EnhancedFileStatus[] = [];
        for (const line of statusOutput.split('\n').filter(Boolean)) {
          if (line.length < 4) continue;
          const indexStatus = line[0];
          const workTreeStatus = line[1];
          const filePath = line.substring(3).trim();

          // Handle renamed files (format: "R old -> new")
          const actualPath = filePath.includes(' -> ')
            ? filePath.split(' -> ')[1].trim()
            : filePath;

          const isConflicted =
            indexStatus === 'U' ||
            workTreeStatus === 'U' ||
            (indexStatus === 'A' && workTreeStatus === 'A') ||
            (indexStatus === 'D' && workTreeStatus === 'D');
          const isStaged = indexStatus !== ' ' && indexStatus !== '?';
          const isUnstaged = workTreeStatus !== ' ' && workTreeStatus !== '?';

          // Combine diff stats from both working tree and staged
          const wtStats = workTreeStats[actualPath] || { added: 0, removed: 0 };
          const stStats = stagedStats[actualPath] || { added: 0, removed: 0 };

          files.push({
            path: actualPath,
            indexStatus,
            workTreeStatus,
            isConflicted,
            isStaged,
            isUnstaged,
            linesAdded: wtStats.added + stStats.added,
            linesRemoved: wtStats.removed + stStats.removed,
            statusLabel: getStatusLabel(indexStatus, workTreeStatus),
          });
        }

        res.json({
          success: true,
          branch,
          files,
        });
      } catch (innerError) {
        logError(innerError, 'Git enhanced status failed');
        res.json({ success: true, branch: '', files: [] });
      }
    } catch (error) {
      logError(error, 'Get enhanced status failed');
      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
  };
}
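The `--numstat` parsing above reads tab-separated `added<TAB>removed<TAB>path` lines; binary files report `-` for both counts, which `parseInt` turns into `NaN` and the `|| 0` fallback zeroes out. A hypothetical `parseNumstat` helper isolates that logic:

```typescript
// Hypothetical helper isolating the `git diff --numstat` parsing used above.
// Each line is "<added>\t<removed>\t<path>"; binary files report "-", which
// parseInt turns into NaN, so the `|| 0` fallback maps them to zero.
function parseNumstat(raw: string): Record<string, { added: number; removed: number }> {
  const stats: Record<string, { added: number; removed: number }> = {};
  for (const line of raw.trim().split('\n').filter(Boolean)) {
    const parts = line.split('\t');
    if (parts.length >= 3) {
      stats[parts[2]] = {
        added: parseInt(parts[0], 10) || 0,
        removed: parseInt(parts[1], 10) || 0,
      };
    }
  }
  return stats;
}

const sample = '3\t1\tsrc/index.ts\n-\t-\tassets/logo.png\n';
console.log(parseNumstat(sample));
// { 'src/index.ts': { added: 3, removed: 1 }, 'assets/logo.png': { added: 0, removed: 0 } }
```

The handler runs this twice (`git diff --numstat` and `--cached`) and sums the two maps per path, so a file with both staged and unstaged edits reports its combined line counts.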

View File

@@ -1,67 +0,0 @@
/**
 * POST /stage-files endpoint - Stage or unstage files in the main project
 */
import type { Request, Response } from 'express';
import { getErrorMessage, logError } from '../common.js';
import { stageFiles, StageFilesValidationError } from '../../../services/stage-files-service.js';

export function createStageFilesHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const { projectPath, files, operation } = req.body as {
        projectPath: string;
        files: string[];
        operation: 'stage' | 'unstage';
      };

      if (!projectPath) {
        res.status(400).json({
          success: false,
          error: 'projectPath required',
        });
        return;
      }

      if (!Array.isArray(files) || files.length === 0) {
        res.status(400).json({
          success: false,
          error: 'files array required and must not be empty',
        });
        return;
      }

      for (const file of files) {
        if (typeof file !== 'string' || file.trim() === '') {
          res.status(400).json({
            success: false,
            error: 'Each element of files must be a non-empty string',
          });
          return;
        }
      }

      if (operation !== 'stage' && operation !== 'unstage') {
        res.status(400).json({
          success: false,
          error: 'operation must be "stage" or "unstage"',
        });
        return;
      }

      const result = await stageFiles(projectPath, files, operation);
      res.json({
        success: true,
        result,
      });
    } catch (error) {
      if (error instanceof StageFilesValidationError) {
        res.status(400).json({ success: false, error: error.message });
        return;
      }
      logError(error, `${(req.body as { operation?: string })?.operation ?? 'stage'} files failed`);
      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
  };
}
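The handler's request validation can be mirrored as a pure function, which makes the accept/reject rules easy to unit-test without an Express app (a hypothetical `validateStageRequest`, returning the error string the handler would send, or `null` when valid):

```typescript
// Hypothetical mirror of the /stage-files request validation above,
// returning the 400-response error message or null when the body is valid.
function validateStageRequest(body: {
  projectPath?: string;
  files?: unknown;
  operation?: string;
}): string | null {
  if (!body.projectPath) return 'projectPath required';
  if (!Array.isArray(body.files) || body.files.length === 0) {
    return 'files array required and must not be empty';
  }
  for (const file of body.files) {
    if (typeof file !== 'string' || file.trim() === '') {
      return 'Each element of files must be a non-empty string';
    }
  }
  if (body.operation !== 'stage' && body.operation !== 'unstage') {
    return 'operation must be "stage" or "unstage"';
  }
  return null; // valid
}

console.log(validateStageRequest({ projectPath: '/p', files: ['a.ts'], operation: 'stage' }));
// null
```

Keeping validation total and side-effect free like this is a common pattern; the route would then only need to map a non-null result to `res.status(400)`.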

View File

@@ -9,8 +9,6 @@ import { createCheckGitHubRemoteHandler } from './routes/check-github-remote.js'
 import { createListIssuesHandler } from './routes/list-issues.js';
 import { createListPRsHandler } from './routes/list-prs.js';
 import { createListCommentsHandler } from './routes/list-comments.js';
-import { createListPRReviewCommentsHandler } from './routes/list-pr-review-comments.js';
-import { createResolvePRCommentHandler } from './routes/resolve-pr-comment.js';
 import { createValidateIssueHandler } from './routes/validate-issue.js';
 import {
   createValidationStatusHandler,
@@ -31,16 +29,6 @@ export function createGitHubRoutes(
   router.post('/issues', validatePathParams('projectPath'), createListIssuesHandler());
   router.post('/prs', validatePathParams('projectPath'), createListPRsHandler());
   router.post('/issue-comments', validatePathParams('projectPath'), createListCommentsHandler());
-  router.post(
-    '/pr-review-comments',
-    validatePathParams('projectPath'),
-    createListPRReviewCommentsHandler()
-  );
-  router.post(
-    '/resolve-pr-comment',
-    validatePathParams('projectPath'),
-    createResolvePRCommentHandler()
-  );
   router.post(
     '/validate-issue',
     validatePathParams('projectPath'),

View File

@@ -1,14 +1,38 @@
 /**
  * Common utilities for GitHub routes
- *
- * Re-exports shared utilities from lib/exec-utils so route consumers
- * can continue importing from this module unchanged.
  */
 import { exec } from 'child_process';
 import { promisify } from 'util';
+import { createLogger } from '@automaker/utils';
+
+const logger = createLogger('GitHub');
 
 export const execAsync = promisify(exec);
 
-// Re-export shared utilities from the canonical location
-export { extendedPath, execEnv, getErrorMessage, logError } from '../../../lib/exec-utils.js';
+// Extended PATH to include common tool installation locations
+export const extendedPath = [
+  process.env.PATH,
+  '/opt/homebrew/bin',
+  '/usr/local/bin',
+  '/home/linuxbrew/.linuxbrew/bin',
+  `${process.env.HOME}/.local/bin`,
+]
+  .filter(Boolean)
+  .join(':');
+
+export const execEnv = {
+  ...process.env,
+  PATH: extendedPath,
+};
+
+export function getErrorMessage(error: unknown): string {
+  if (error instanceof Error) {
+    return error.message;
+  }
+  return String(error);
+}
+
+export function logError(error: unknown, context: string): void {
+  logger.error(`${context}:`, error);
+}
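The PATH-extension pattern in `common.ts` above (prepend the inherited `PATH`, append known tool directories, drop empty entries, join) composes like this minimal sketch; the helper name and the extra directories here are illustrative:

```typescript
// Minimal sketch of the PATH-extension pattern used in common.ts above.
// Drops undefined/empty segments, then joins with the POSIX ':' separator.
function buildExtendedPath(basePath: string | undefined, extras: string[]): string {
  return [basePath, ...extras].filter(Boolean).join(':');
}

console.log(buildExtendedPath('/usr/bin', ['/opt/homebrew/bin']));
// /usr/bin:/opt/homebrew/bin
console.log(buildExtendedPath(undefined, ['/usr/local/bin']));
// /usr/local/bin
```

Note the hard-coded `':'` is POSIX-specific; a cross-platform variant would join with Node's `path.delimiter` (`';'` on Windows) instead.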

View File

@@ -1,72 +0,0 @@
/**
 * POST /pr-review-comments endpoint - Fetch review comments for a GitHub PR
 *
 * Fetches both regular PR comments and inline code review comments
 * for a specific pull request, providing file path and line context.
 */
import type { Request, Response } from 'express';
import { getErrorMessage, logError } from './common.js';
import { checkGitHubRemote } from './check-github-remote.js';
import {
  fetchPRReviewComments,
  fetchReviewThreadResolvedStatus,
  type PRReviewComment,
  type ListPRReviewCommentsResult,
} from '../../../services/pr-review-comments.service.js';

// Re-export types so existing callers continue to work
export type { PRReviewComment, ListPRReviewCommentsResult };

// Re-export service functions so existing callers continue to work
export { fetchPRReviewComments, fetchReviewThreadResolvedStatus };

interface ListPRReviewCommentsRequest {
  projectPath: string;
  prNumber: number;
}

export function createListPRReviewCommentsHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const { projectPath, prNumber } = req.body as ListPRReviewCommentsRequest;

      if (!projectPath) {
        res.status(400).json({ success: false, error: 'projectPath is required' });
        return;
      }

      if (!prNumber || typeof prNumber !== 'number') {
        res
          .status(400)
          .json({ success: false, error: 'prNumber is required and must be a number' });
        return;
      }

      // Check if this is a GitHub repo and get owner/repo
      const remoteStatus = await checkGitHubRemote(projectPath);
      if (!remoteStatus.hasGitHubRemote || !remoteStatus.owner || !remoteStatus.repo) {
        res.status(400).json({
          success: false,
          error: 'Project does not have a GitHub remote',
        });
        return;
      }

      const comments = await fetchPRReviewComments(
        projectPath,
        remoteStatus.owner,
        remoteStatus.repo,
        prNumber
      );

      res.json({
        success: true,
        comments,
        totalCount: comments.length,
      });
    } catch (error) {
      logError(error, 'Fetch PR review comments failed');
      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
  };
}

View File

@@ -1,66 +0,0 @@
/**
 * POST /resolve-pr-comment endpoint - Resolve or unresolve a GitHub PR review thread
 *
 * Uses the GitHub GraphQL API to resolve or unresolve a review thread
 * identified by its GraphQL node ID (threadId).
 */
import type { Request, Response } from 'express';
import { getErrorMessage, logError } from './common.js';
import { checkGitHubRemote } from './check-github-remote.js';
import { executeReviewThreadMutation } from '../../../services/github-pr-comment.service.js';

export interface ResolvePRCommentResult {
  success: boolean;
  isResolved?: boolean;
  error?: string;
}

interface ResolvePRCommentRequest {
  projectPath: string;
  threadId: string;
  resolve: boolean;
}

export function createResolvePRCommentHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const { projectPath, threadId, resolve } = req.body as ResolvePRCommentRequest;

      if (!projectPath) {
        res.status(400).json({ success: false, error: 'projectPath is required' });
        return;
      }

      if (!threadId) {
        res.status(400).json({ success: false, error: 'threadId is required' });
        return;
      }

      if (typeof resolve !== 'boolean') {
        res.status(400).json({ success: false, error: 'resolve must be a boolean' });
        return;
      }

      // Check if this is a GitHub repo
      const remoteStatus = await checkGitHubRemote(projectPath);
      if (!remoteStatus.hasGitHubRemote) {
        res.status(400).json({
          success: false,
          error: 'Project does not have a GitHub remote',
        });
        return;
      }

      const result = await executeReviewThreadMutation(projectPath, threadId, resolve);
      res.json({
        success: true,
        isResolved: result.isResolved,
      });
    } catch (error) {
      logError(error, 'Resolve PR comment failed');
      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
  };
}

View File

@@ -25,7 +25,7 @@ import {
   isOpencodeModel,
   supportsStructuredOutput,
 } from '@automaker/types';
-import { resolvePhaseModel, resolveModelString } from '@automaker/model-resolver';
+import { resolvePhaseModel } from '@automaker/model-resolver';
 import { extractJson } from '../../../lib/json-extractor.js';
 import { writeValidation } from '../../../lib/validation-storage.js';
 import { streamingQuery } from '../../../providers/simple-query-service.js';
@@ -38,7 +38,7 @@ import {
 import {
   getPromptCustomization,
   getAutoLoadClaudeMdSetting,
-  resolveProviderContext,
+  getProviderByModelId,
 } from '../../../lib/settings-helpers.js';
 import {
   trySetValidationRunning,
@@ -64,8 +64,6 @@ interface ValidateIssueRequestBody {
   thinkingLevel?: ThinkingLevel;
   /** Reasoning effort for Codex models (ignored for non-Codex models) */
   reasoningEffort?: ReasoningEffort;
-  /** Optional Claude-compatible provider ID for custom providers (e.g., GLM, MiniMax) */
-  providerId?: string;
   /** Comments to include in validation analysis */
   comments?: GitHubComment[];
   /** Linked pull requests for this issue */
@@ -89,7 +87,6 @@ async function runValidation(
   events: EventEmitter,
   abortController: AbortController,
   settingsService?: SettingsService,
-  providerId?: string,
   comments?: ValidationComment[],
   linkedPRs?: ValidationLinkedPR[],
   thinkingLevel?: ThinkingLevel,
@@ -179,12 +176,7 @@ ${basePrompt}`;
   let credentials = await settingsService?.getCredentials();
   if (settingsService) {
-    const providerResult = await resolveProviderContext(
-      settingsService,
-      model,
-      providerId,
-      '[ValidateIssue]'
-    );
+    const providerResult = await getProviderByModelId(model, settingsService, '[ValidateIssue]');
     if (providerResult.provider) {
       claudeCompatibleProvider = providerResult.provider;
       providerResolvedModel = providerResult.resolvedModel;
@@ -196,12 +188,8 @@ ${basePrompt}`;
     }
   }
 
-  // CRITICAL: For custom providers (GLM, MiniMax), pass the provider's model ID (e.g. "GLM-4.7")
-  // to the API, NOT the resolved Claude model - otherwise we get "model not found"
-  // For standard Claude models, resolve aliases (e.g., 'opus' -> 'claude-opus-4-20250514')
-  const effectiveModel = claudeCompatibleProvider
-    ? (model as string)
-    : providerResolvedModel || resolveModelString(model as string);
+  // Use provider resolved model if available, otherwise use original model
+  const effectiveModel = providerResolvedModel || (model as string);
   logger.info(`Using model: ${effectiveModel}`);
 
   // Use streamingQuery with event callbacks
@@ -320,16 +308,10 @@ export function createValidateIssueHandler(
         model = 'opus',
         thinkingLevel,
         reasoningEffort,
-        providerId,
         comments: rawComments,
         linkedPRs: rawLinkedPRs,
       } = req.body as ValidateIssueRequestBody;
 
-      const normalizedProviderId =
-        typeof providerId === 'string' && providerId.trim().length > 0
-          ? providerId.trim()
-          : undefined;
-
       // Transform GitHubComment[] to ValidationComment[] if provided
       const validationComments: ValidationComment[] | undefined = rawComments?.map((c) => ({
         author: c.author?.login || 'ghost',
@@ -378,14 +360,12 @@
         isClaudeModel(model) ||
         isCursorModel(model) ||
         isCodexModel(model) ||
-        isOpencodeModel(model) ||
-        !!normalizedProviderId;
+        isOpencodeModel(model);
 
       if (!isValidModel) {
         res.status(400).json({
           success: false,
-          error:
-            'Invalid model. Must be a Claude, Cursor, Codex, or OpenCode model ID (or alias), or provide a valid providerId for custom Claude-compatible models.',
+          error: 'Invalid model. Must be a Claude, Cursor, Codex, or OpenCode model ID (or alias).',
         });
         return;
       }
@@ -414,7 +394,6 @@
         events,
         abortController,
         settingsService,
-        normalizedProviderId,
         validationComments,
         validationLinkedPRs,
         thinkingLevel,

View File

@@ -6,6 +6,7 @@ import type { Request, Response } from 'express';
 import type { EventEmitter } from '../../../lib/events.js';
 import type { IssueValidationEvent } from '@automaker/types';
 import {
+  isValidationRunning,
   getValidationStatus,
   getRunningValidations,
   abortValidation,
@@ -14,6 +15,7 @@ import {
   logger,
 } from './validation-common.js';
 import {
+  readValidation,
   getAllValidations,
   getValidationWithFreshness,
   deleteValidation,

View File

@@ -12,7 +12,7 @@ export function createProvidersHandler() {
       // Get installation status from all providers
       const statuses = await ProviderFactory.checkAllProviders();
 
-      const providers: Record<string, Record<string, unknown>> = {
+      const providers: Record<string, any> = {
         anthropic: {
           available: statuses.claude?.installed || false,
           hasApiKey: !!process.env.ANTHROPIC_API_KEY,

View File

@@ -173,7 +173,7 @@ export function createOverviewHandler(
       const totalFeatures = features.length;
 
       // Get auto-mode status for this project (main worktree, branchName = null)
-      const autoModeStatus: ProjectAutoModeStatus = await autoModeService.getStatusForProject(
+      const autoModeStatus: ProjectAutoModeStatus = autoModeService.getStatusForProject(
         projectRef.path,
         null
       );

View File

@@ -14,7 +14,6 @@ import type { GlobalSettings } from '../../../types/settings.js';
 import { getErrorMessage, logError, logger } from '../common.js';
 import { setLogLevel, LogLevel } from '@automaker/utils';
 import { setRequestLoggingEnabled } from '../../../index.js';
-import { getTerminalService } from '../../../services/terminal-service.js';
 
 /**
  * Map server log level string to LogLevel enum
@@ -46,20 +45,18 @@ export function createUpdateGlobalHandler(settingsService: SettingsService) {
       }
 
       // Minimal debug logging to help diagnose accidental wipes.
-      const projectsLen = Array.isArray(updates.projects) ? updates.projects.length : undefined;
-      const trashedLen = Array.isArray(updates.trashedProjects)
-        ? updates.trashedProjects.length
+      const projectsLen = Array.isArray((updates as any).projects)
+        ? (updates as any).projects.length
+        : undefined;
+      const trashedLen = Array.isArray((updates as any).trashedProjects)
+        ? (updates as any).trashedProjects.length
         : undefined;
       logger.info(
         `[SERVER_SETTINGS_UPDATE] Request received: projects=${projectsLen ?? 'n/a'}, trashedProjects=${trashedLen ?? 'n/a'}, theme=${
-          updates.theme ?? 'n/a'
-        }, localStorageMigrated=${updates.localStorageMigrated ?? 'n/a'}`
+          (updates as any).theme ?? 'n/a'
+        }, localStorageMigrated=${(updates as any).localStorageMigrated ?? 'n/a'}`
       );
 
-      // Get old settings to detect theme changes
-      const oldSettings = await settingsService.getGlobalSettings();
-      const oldTheme = oldSettings?.theme;
-
       logger.info('[SERVER_SETTINGS_UPDATE] Calling updateGlobalSettings...');
       const settings = await settingsService.updateGlobalSettings(updates);
       logger.info(
@@ -67,37 +64,6 @@ export function createUpdateGlobalHandler(settingsService: SettingsService) {
         settings.projects?.length ?? 0
       );
 
-      // Handle theme change - regenerate terminal RC files for all projects
-      if ('theme' in updates && updates.theme && updates.theme !== oldTheme) {
-        const terminalService = getTerminalService(settingsService);
-        const newTheme = updates.theme;
-        logger.info(
-          `[TERMINAL_CONFIG] Theme changed from ${oldTheme} to ${newTheme}, regenerating RC files`
-        );
-
-        // Regenerate RC files for all projects with terminal config enabled
-        const projects = settings.projects || [];
-        for (const project of projects) {
-          try {
-            const projectSettings = await settingsService.getProjectSettings(project.path);
-            // Check if terminal config is enabled (global or project-specific)
-            const terminalConfigEnabled =
-              projectSettings.terminalConfig?.enabled !== false &&
-              settings.terminalConfig?.enabled === true;
-            if (terminalConfigEnabled) {
-              await terminalService.onThemeChange(project.path, newTheme);
-              logger.info(`[TERMINAL_CONFIG] Regenerated RC files for project: ${project.name}`);
-            }
-          } catch (error) {
-            logger.warn(
-              `[TERMINAL_CONFIG] Failed to regenerate RC files for project ${project.name}: ${error}`
-            );
-          }
-        }
-      }
-
       // Apply server log level if it was updated
       if ('serverLogLevel' in updates && updates.serverLogLevel) {
         const level = LOG_LEVEL_MAP[updates.serverLogLevel];

View File

@@ -4,9 +4,13 @@
 import type { Request, Response } from 'express';
 import { getErrorMessage, logError } from '../common.js';
+import { exec } from 'child_process';
+import { promisify } from 'util';
 import * as fs from 'fs';
 import * as path from 'path';
 
+const execAsync = promisify(exec);
+
 export function createAuthClaudeHandler() {
   return async (_req: Request, res: Response): Promise<void> => {
     try {

View File

@@ -4,9 +4,13 @@
 import type { Request, Response } from 'express';
 import { logError, getErrorMessage } from '../common.js';
+import { exec } from 'child_process';
+import { promisify } from 'util';
 import * as fs from 'fs';
 import * as path from 'path';
 
+const execAsync = promisify(exec);
+
 export function createAuthOpencodeHandler() {
   return async (_req: Request, res: Response): Promise<void> => {
     try {

View File

@@ -10,6 +10,9 @@ import type { Request, Response } from 'express';
 import { CopilotProvider } from '../../../providers/copilot-provider.js';
 import { getErrorMessage, logError } from '../common.js';
 import type { ModelDefinition } from '@automaker/types';
+import { createLogger } from '@automaker/utils';
+
+const logger = createLogger('CopilotModelsRoute');
 
 // Singleton provider instance for caching
 let providerInstance: CopilotProvider | null = null;

View File

@@ -14,6 +14,9 @@ import {
 } from '../../../providers/opencode-provider.js';
 import { getErrorMessage, logError } from '../common.js';
 import type { ModelDefinition } from '@automaker/types';
+import { createLogger } from '@automaker/utils';
+
+const logger = createLogger('OpenCodeModelsRoute');
 
 // Singleton provider instance for caching
 let providerInstance: OpencodeProvider | null = null;

View File

@@ -6,7 +6,6 @@
 import type { Request, Response } from 'express';
 import { query } from '@anthropic-ai/claude-agent-sdk';
 import { createLogger } from '@automaker/utils';
-import { getClaudeAuthIndicators } from '@automaker/platform';
 import { getApiKey } from '../common.js';
 import {
   createSecureAuthEnv,
@@ -80,12 +79,6 @@ function containsAuthError(text: string): boolean {
 export function createVerifyClaudeAuthHandler() {
   return async (req: Request, res: Response): Promise<void> => {
     try {
-      // In E2E/CI mock mode, skip real API calls
-      if (process.env.AUTOMAKER_MOCK_AGENT === 'true') {
-        res.json({ success: true, authenticated: true });
-        return;
-      }
-
       // Get the auth method and optional API key from the request body
       const { authMethod, apiKey } = req.body as {
         authMethod?: 'cli' | 'api_key';
@@ -116,7 +109,6 @@ export function createVerifyClaudeAuthHandler() {
       let authenticated = false;
       let errorMessage = '';
       let receivedAnyContent = false;
-      let cleanupEnv: (() => void) | undefined;
 
       // Create secure auth session
       const sessionId = `claude-auth-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
@@ -158,13 +150,13 @@ export function createVerifyClaudeAuthHandler() {
       AuthSessionManager.createSession(sessionId, authMethod || 'api_key', apiKey, 'anthropic');
 
       // Create temporary environment override for SDK call
-      cleanupEnv = createTempEnvOverride(authEnv);
+      const cleanupEnv = createTempEnvOverride(authEnv);
 
       // Run a minimal query to verify authentication
       const stream = query({
         prompt: "Reply with only the word 'ok'",
         options: {
-          model: 'claude-sonnet-4-6',
+          model: 'claude-sonnet-4-20250514',
           maxTurns: 1,
           allowedTools: [],
           abortController,
@@ -201,10 +193,8 @@ export function createVerifyClaudeAuthHandler() {
       }
 
       // Check specifically for assistant messages with text content
-      const msgRecord = msg as Record<string, unknown>;
-      const msgMessage = msgRecord.message as Record<string, unknown> | undefined;
-      if (msg.type === 'assistant' && msgMessage?.content) {
-        const content = msgMessage.content;
+      if (msg.type === 'assistant' && (msg as any).message?.content) {
+        const content = (msg as any).message.content;
         if (Array.isArray(content)) {
           for (const block of content) {
             if (block.type === 'text' && block.text) {
@@ -320,8 +310,6 @@ export function createVerifyClaudeAuthHandler() {
         }
       } finally {
         clearTimeout(timeoutId);
-        // Restore process.env to its original state
-        cleanupEnv?.();
         // Clean up the auth session
         AuthSessionManager.destroySession(sessionId);
       }
@@ -332,28 +320,9 @@
         authMethod,
       });
 
-      // Determine specific auth type for success messages
-      const effectiveAuthMethod = authMethod ?? 'api_key';
-      let authType: 'oauth' | 'api_key' | 'cli' | undefined;
-      if (authenticated) {
-        if (effectiveAuthMethod === 'api_key') {
-          authType = 'api_key';
-        } else if (effectiveAuthMethod === 'cli') {
-          // Check if CLI auth is via OAuth (Claude Code subscription) or generic CLI
-          try {
-            const indicators = await getClaudeAuthIndicators();
-            authType = indicators.credentials?.hasOAuthToken ? 'oauth' : 'cli';
-          } catch {
-            // Fall back to generic CLI if credential check fails
-            authType = 'cli';
-          }
-        }
-      }
-
       res.json({
         success: true,
         authenticated,
-        authType,
         error: errorMessage || undefined,
       });
     } catch (error) {

View File

@@ -82,12 +82,6 @@ function isRateLimitError(text: string): boolean {
 export function createVerifyCodexAuthHandler() {
   return async (req: Request, res: Response): Promise<void> => {
-    // In E2E/CI mock mode, skip real API calls
-    if (process.env.AUTOMAKER_MOCK_AGENT === 'true') {
-      res.json({ success: true, authenticated: true });
-      return;
-    }
-
     const { authMethod, apiKey } = req.body as {
       authMethod?: 'cli' | 'api_key';
       apiKey?: string;

Some files were not shown because too many files have changed in this diff Show More