mirror of https://github.com/AutoMaker-Org/automaker.git
synced 2026-02-03 21:03:08 +00:00

Compare commits: 5a5c56a4cf...fix/spec-g (1 commit)

| Author | SHA1       | Date |
| ------ | ---------- | ---- |
|        | c7d2033277 |      |

.gitignore (vendored): 3 changes

@@ -95,6 +95,3 @@ data/.api-key
data/credentials.json
data/
.codex/

# GSD planning docs (local-only)
.planning/

@@ -1,81 +0,0 @@

# AutoModeService Refactoring

## What This Is

A comprehensive refactoring of the `auto-mode-service.ts` file (5k+ lines) into smaller, focused services with clear boundaries. This is an architectural cleanup of accumulated technical debt from rapid development, breaking the "god object" anti-pattern into maintainable, debuggable modules.

## Core Value

All existing auto-mode functionality continues working — features execute, pipelines flow, merges complete — while the codebase becomes maintainable.

## Requirements

### Validated

<!-- Existing functionality that must be preserved -->

- ✓ Single feature execution with AI agent — existing
- ✓ Concurrent execution with configurable limits — existing
- ✓ Pipeline orchestration (backlog → in-progress → approval → verified) — existing
- ✓ Git worktree isolation per feature — existing
- ✓ Automatic merging of completed work — existing
- ✓ Custom pipeline support — existing
- ✓ Test runner integration — existing
- ✓ Event streaming to frontend — existing

### Active

<!-- Refactoring goals -->

- [ ] No service file exceeds ~500 lines
- [ ] Each service has a single, clear responsibility
- [ ] Service boundaries make debugging obvious
- [ ] Changes to one service don't risk breaking unrelated features
- [ ] Test coverage for critical paths

### Out of Scope

- New auto-mode features — this is cleanup, not enhancement
- UI changes — backend refactor only
- Performance optimization — maintain current performance, don't optimize
- Other service refactoring — focus on auto-mode-service.ts only

## Context

**Current state:** `apps/server/src/services/auto-mode-service.ts` is ~5700 lines handling the following (one possible decomposition is sketched after the list):

- Worktree management (create, cleanup, track)
- Agent/task execution coordination
- Concurrency control and queue management
- Pipeline state machine (column transitions)
- Merge handling and conflict resolution
- Event emission for real-time updates
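
As a rough illustration of the target shape, the sketch below splits those responsibilities into separate services behind narrow interfaces. It is a sketch under assumptions, not the plan's actual design: all names (`WorktreeManager`, `ExecutionQueue`, `PipelineStateMachine`, `MergeCoordinator`, `AutoModeOrchestrator`) and signatures are hypothetical.

```typescript
// Hypothetical service boundaries for the refactor; names and signatures are illustrative only.
import { EventEmitter } from 'node:events';

interface WorktreeManager {
  create(featureId: string): Promise<string>; // returns the worktree path
  cleanup(featureId: string): Promise<void>;
}

interface ExecutionQueue {
  enqueue(featureId: string): void;
  /** Runs queued features, never exceeding maxConcurrency at once. */
  drain(maxConcurrency: number): Promise<void>;
}

interface PipelineStateMachine {
  transition(featureId: string, to: 'backlog' | 'in-progress' | 'approval' | 'verified'): void;
}

interface MergeCoordinator {
  merge(featureId: string, worktreePath: string): Promise<void>;
}

// The orchestrator composes the pieces and emits events, instead of doing everything itself.
class AutoModeOrchestrator {
  constructor(
    private readonly worktrees: WorktreeManager,
    private readonly queue: ExecutionQueue,
    private readonly pipeline: PipelineStateMachine,
    private readonly merges: MergeCoordinator,
    private readonly events: EventEmitter
  ) {}

  async runFeature(featureId: string): Promise<void> {
    const path = await this.worktrees.create(featureId);
    this.pipeline.transition(featureId, 'in-progress');
    this.events.emit('feature-started', { featureId, path });
    // ...agent execution, then merge and cleanup would follow here
  }
}
```
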
**Technical environment:**

- Express 5 backend, TypeScript
- Event-driven architecture via EventEmitter
- WebSocket streaming to React frontend
- Git worktrees via @automaker/git-utils
- Minimal existing test coverage

**Codebase analysis:** See `.planning/codebase/` for full architecture, conventions, and existing patterns.

## Constraints

- **Breaking changes**: Acceptable — other parts of the app can be updated to match new service interfaces
- **Test coverage**: Currently minimal — must add tests during refactoring to catch regressions
- **Incremental approach**: Required — can't do a big-bang rewrite when everything is critical
- **Existing patterns**: Follow conventions in `.planning/codebase/CONVENTIONS.md`

## Key Decisions

| Decision                  | Rationale                                           | Outcome   |
| ------------------------- | --------------------------------------------------- | --------- |
| Accept breaking changes   | Allows cleaner interfaces, worth the migration cost | — Pending |
| Add tests during refactor | No existing safety net, need to build one           | — Pending |
| Incremental extraction    | Everything is critical, can't break it all at once  | — Pending |

---

_Last updated: 2026-01-27 after initialization_

@@ -1,234 +0,0 @@

# Architecture

**Analysis Date:** 2026-01-27

## Pattern Overview

**Overall:** Monorepo with layered client-server architecture (Electron-first) and pluggable provider abstraction for AI models.

**Key Characteristics:**

- Event-driven communication via WebSocket between frontend and backend
- Multi-provider AI model abstraction layer (Claude, Cursor, Codex, Gemini, OpenCode, Copilot)
- Feature-centric workflow stored in `.automaker/` directories
- Isolated git worktree execution for each feature
- State management through Zustand stores with API persistence

## Layers

**Presentation Layer (UI):**

- Purpose: React 19 Electron/web frontend with TanStack Router file-based routing
- Location: `apps/ui/src/`
- Contains: Route components, view pages, custom React hooks, Zustand stores, API client
- Depends on: @automaker/types, @automaker/utils, HTTP API backend
- Used by: Electron main process (desktop), web browser (web mode)

**API Layer (Server):**

- Purpose: Express 5 backend exposing RESTful and WebSocket endpoints
- Location: `apps/server/src/`
- Contains: Route handlers, business logic services, middleware, provider adapters
- Depends on: @automaker/types, @automaker/utils, @automaker/platform, Claude Agent SDK
- Used by: UI frontend via HTTP/WebSocket

**Service Layer (Server):**

- Purpose: Business logic and domain operations
- Location: `apps/server/src/services/`
- Contains: AgentService, FeatureLoader, AutoModeService, SettingsService, DevServerService, etc.
- Depends on: Providers, secure filesystem, feature storage
- Used by: Route handlers

**Provider Abstraction (Server):**

- Purpose: Unified interface for different AI model providers
- Location: `apps/server/src/providers/`
- Contains: ProviderFactory, specific provider implementations (ClaudeProvider, CursorProvider, CodexProvider, GeminiProvider, OpencodeProvider, CopilotProvider)
- Depends on: @automaker/types, provider SDKs
- Used by: AgentService

**Shared Library Layer:**

- Purpose: Type definitions and utilities shared across apps
- Location: `libs/`
- Contains: @automaker/types, @automaker/utils, @automaker/platform, @automaker/prompts, @automaker/model-resolver, @automaker/dependency-resolver, @automaker/git-utils, @automaker/spec-parser
- Depends on: None (types has no external deps)
- Used by: All apps and services

## Data Flow

**Feature Execution Flow:**

1. User creates/updates feature via UI (`apps/ui/src/`)
2. UI sends HTTP request to backend (`POST /api/features`)
3. Server route handler invokes FeatureLoader to persist to `.automaker/features/{featureId}/`
4. When executing, AgentService loads the feature and creates an isolated git worktree via @automaker/git-utils
5. AgentService invokes ProviderFactory to get the appropriate AI provider (Claude, Cursor, etc.)
6. Provider executes with context from CLAUDE.md files via @automaker/utils loadContextFiles()
7. Server emits events via EventEmitter throughout execution
8. Events stream to frontend via WebSocket
9. UI updates stores and renders real-time progress
10. Feature results persist back to `.automaker/features/` with generated agent-output.md

**State Management:**

**Frontend State (Zustand):**

- `app-store.ts`: Global app state (projects, features, settings, boards, themes)
- `setup-store.ts`: First-time setup wizard flow
- `ideation-store.ts`: Ideation feature state
- `test-runners-store.ts`: Test runner configurations
- Settings now persist via API (`/api/settings`) rather than localStorage (see use-settings-sync.ts)

**Backend State (Services):**

- SettingsService: Global and project-specific settings (in-memory with file persistence)
- AgentService: Active agent sessions and conversation history
- FeatureLoader: Feature data model operations
- DevServerService: Development server logs
- EventHistoryService: Persists event logs for replay

**Real-Time Updates (WebSocket):**

- Server EventEmitter emits TypedEvent (type + payload)
- WebSocket handler subscribes to events and broadcasts to all clients
- Frontend listens on multiple WebSocket subscriptions and updates stores
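
A minimal sketch of that event-to-WebSocket bridge, assuming a plain Node `EventEmitter` and the `ws` library; the real handler lives in the server entry point, and the `'event'` channel name used here is an assumption.

```typescript
import { EventEmitter } from 'node:events';
import { WebSocket, WebSocketServer } from 'ws';

// Approximate shape of events flowing server -> client.
interface TypedEvent {
  type: string;
  payload: unknown;
}

// Every event emitted on the shared emitter is serialized and broadcast to all connected clients.
export function bridgeEventsToWebSocket(emitter: EventEmitter, wss: WebSocketServer): void {
  emitter.on('event', (event: TypedEvent) => {
    const message = JSON.stringify(event);
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    }
  });
}
```
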
## Key Abstractions

**Feature:**

- Purpose: Represents a development task/story with rich metadata
- Location: @automaker/types → `libs/types/src/feature.ts`
- Fields: id, title, description, status, images, tasks, priority, etc.
- Stored: `.automaker/features/{featureId}/feature.json`
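
An approximation of that shape for orientation; the authoritative definition is in `libs/types/src/feature.ts`, and any field detail beyond the list above (e.g. the task shape) is a guess.

```typescript
// Approximate shape only; see libs/types/src/feature.ts for the real definition.
interface Feature {
  id: string;
  title: string;
  description: string;
  status: string; // pipeline column, e.g. 'backlog' | 'in-progress' | ...
  images?: string[]; // paths to attached images
  tasks?: { title: string; done: boolean }[]; // hypothetical task shape
  priority?: number;
}

// Persisted per feature at .automaker/features/{featureId}/feature.json
```
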
**Provider:**

- Purpose: Abstracts different AI model implementations
- Location: `apps/server/src/providers/{provider}-provider.ts`
- Interface: Common execute() method with consistent message format
- Implementations: Claude, Cursor, Codex, Gemini, OpenCode, Copilot
- Factory: ProviderFactory picks the correct provider based on model ID

**Event:**

- Purpose: Real-time updates streamed to frontend
- Location: @automaker/types → `libs/types/src/event.ts`
- Format: `{ type: EventType, payload: unknown }`
- Examples: agent-started, agent-step, agent-complete, feature-updated, etc.

**AgentSession:**

- Purpose: Represents a conversation between user and AI agent
- Location: @automaker/types → `libs/types/src/session.ts`
- Contains: Messages (user + assistant), metadata, creation timestamp
- Stored: `{DATA_DIR}/agent-sessions/{sessionId}.json`

**Settings:**

- Purpose: Configuration for global and per-project behavior
- Location: @automaker/types → `libs/types/src/settings.ts`
- Stored: Global in `{DATA_DIR}/settings.json`, per-project in `.automaker/settings.json`
- Service: SettingsService in `apps/server/src/services/settings-service.ts`

## Entry Points

**Server:**

- Location: `apps/server/src/index.ts`
- Triggers: `npm run dev:server` or Docker startup
- Responsibilities:
  - Initialize Express app with middleware
  - Create shared EventEmitter for WebSocket streaming
  - Bootstrap services (SettingsService, AgentService, FeatureLoader, etc.)
  - Mount API routes at `/api/*`
  - Create WebSocket servers for agent streaming and terminal sessions
  - Load and apply user settings (log level, request logging, etc.)

**UI (Web):**

- Location: `apps/ui/src/main.ts` (Vite entry), `apps/ui/src/app.tsx` (React component)
- Triggers: `npm run dev:web` or `npm run build`
- Responsibilities:
  - Initialize Zustand stores from API settings
  - Set up routing with TanStack Router
  - Render root layout with sidebar and main content area
  - Handle authentication via verifySession()

**UI (Electron):**

- Location: `apps/ui/src/main.ts` (Vite entry), `apps/ui/electron/main-process.ts` (Electron main process)
- Triggers: `npm run dev:electron`
- Responsibilities:
  - Launch local server via node-pty
  - Create native Electron window
  - Bridge IPC between renderer and main process
  - Provide file system access via preload.ts APIs

## Error Handling

**Strategy:** Layered error classification and user-friendly messaging

**Patterns:**

**Backend Error Handling:**

- Errors classified via `classifyError()` from @automaker/utils
- Classification: ParseError, NetworkError, AuthenticationError, RateLimitError, etc.
- Response format: `{ success: false, error: { type, message, code }, details? }`
- Example: `apps/server/src/lib/error-handler.ts`

**Frontend Error Handling:**

- HTTP errors caught by api-fetch.ts with retry logic
- WebSocket disconnects trigger reconnection with exponential backoff
- Errors shown in toast notifications via `sonner` library
- Validation errors caught and displayed inline in forms

**Agent Execution Errors:**

- AgentService wraps provider calls in try-catch
- Aborts handled specially via `isAbortError()` check
- Rate limit errors trigger cooldown before retry
- Model-specific errors mapped to user guidance

## Cross-Cutting Concerns

**Logging:**

- Framework: @automaker/utils createLogger()
- Pattern: `const logger = createLogger('ModuleName')`
- Levels: ERROR, WARN, INFO, DEBUG (configurable via settings)
- Output: stdout (dev), files (production)

**Validation:**

- File path validation: @automaker/platform initAllowedPaths() enforces restrictions
- Model ID validation: @automaker/model-resolver resolveModelString()
- JSON schema validation: Manual checks in route handlers (no JSON schema lib)
- Authentication: Session token validation via validateWsConnectionToken()

**Authentication:**

- Frontend: Session token stored in httpOnly cookie
- Backend: authMiddleware checks token on protected routes
- WebSocket: validateWsConnectionToken() for upgrade requests
- Providers: API keys stored encrypted in `{DATA_DIR}/credentials.json`

**Internationalization:**

- Not detected - strings are English-only

**Performance:**

- Code splitting: File-based routing via TanStack Router
- Lazy loading: React.lazy() in route components
- Caching: React Query for HTTP requests (query-keys.ts defines cache strategy)
- Image optimization: Automatic base64 encoding for agent context
- State hydration: Settings loaded once at startup, synced via API

---

_Architecture analysis: 2026-01-27_

@@ -1,245 +0,0 @@

# Codebase Concerns

**Analysis Date:** 2026-01-27

## Tech Debt

**Loose Type Safety in Error Handling:**

- Issue: Multiple uses of `as any` type assertions bypass TypeScript safety, particularly in error context handling and provider responses
- Files: `apps/server/src/providers/claude-provider.ts` (lines 318-322), `apps/server/src/lib/error-handler.ts`, `apps/server/src/routes/settings/routes/update-global.ts`
- Impact: Errors could have unchecked properties; refactoring becomes risky without compiler assistance
- Fix approach: Replace `as any` with proper type guards and discriminated unions; create helper functions for safe property access
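
A sketch of the kind of type guard that could replace those assertions. The names and the error shape here are illustrative, not the project's real types.

```typescript
// Illustrative only: a type guard plus a safe accessor instead of `(err as any).code`.
interface ProviderErrorLike {
  code: string;
  message: string;
  status?: number;
}

function isProviderErrorLike(value: unknown): value is ProviderErrorLike {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as Record<string, unknown>).code === 'string' &&
    typeof (value as Record<string, unknown>).message === 'string'
  );
}

function getErrorCode(err: unknown): string | undefined {
  return isProviderErrorLike(err) ? err.code : undefined;
}
```
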
**Missing Test Coverage for Critical Services:**

- Issue: Several core services explicitly excluded from test coverage thresholds due to integration complexity
- Files: `apps/server/vitest.config.ts` (line 22), explicitly excluded: `claude-usage-service.ts`, `mcp-test-service.ts`, `cli-provider.ts`, `cursor-provider.ts`
- Impact: Usage tracking, MCP integration, and CLI detection could break undetected; regression detection is limited
- Fix approach: Create integration test fixtures for CLI providers; mock MCP SDK for mcp-test-service tests; add usage tracking unit tests with mocked API calls

**Unused/Stub TODO Item Processing:**

- Issue: TodoWrite tool implementation exists but is partially integrated; tool name constants scattered across codex provider
- Files: `apps/server/src/providers/codex-tool-mapping.ts`, `apps/server/src/providers/codex-provider.ts`
- Impact: Todo list updates may not synchronize properly with all providers; unclear which providers support TodoWrite
- Fix approach: Consolidate tool name constants; add provider capability flags for todo support

**Electron Bridge (`electron.ts`) Size and Complexity:**

- Issue: Single 3741-line file handles all Electron IPC, native bindings, and communication
- Files: `apps/ui/src/lib/electron.ts`
- Impact: Difficult to test; hard to isolate bugs; changes require full testing of all features; potential memory overhead from monolithic file
- Fix approach: Split by responsibility (IPC, window management, file operations, debug tools); create separate bridge layers

## Known Bugs

**API Key Management Incomplete for Gemini:**

- Symptoms: Gemini API key verification endpoint not implemented despite other providers having verification
- Files: `apps/ui/src/components/views/settings-view/api-keys/hooks/use-api-key-management.ts` (line 122)
- Trigger: User tries to verify Gemini API key in settings
- Workaround: Key verification skipped for Gemini; settings page still accepts and stores key

**Orphaned Features Detection Vulnerable to False Negatives:**

- Symptoms: Features marked as orphaned when branch matching logic doesn't account for all scenarios
- Files: `apps/server/src/services/auto-mode-service.ts` (lines 5714-5773)
- Trigger: Features whose branches were manually switched or rebased
- Workaround: Manual cleanup via feature deletion; branch comparison is basic name matching only

**Terminal Themes Incomplete:**

- Symptoms: Light themes (solarizedlight, github) map to the same generic lightTheme; no dedicated implementations
- Files: `apps/ui/src/config/terminal-themes.ts` (lines 593-594)
- Trigger: User selects solarizedlight or github terminal theme
- Workaround: Uses generic light theme instead of specific scheme; visual appearance doesn't match expectation

## Security Considerations

**Process Environment Variable Exposure:**

- Risk: Child processes inherit all parent `process.env` including sensitive credentials (API keys, tokens)
- Files: `apps/server/src/providers/cursor-provider.ts` (line 993), `apps/server/src/providers/codex-provider.ts` (line 1099)
- Current mitigation: Dotenv provides isolation at app startup; selective env passing to some providers
- Recommendations: Use explicit allowlists for env vars passed to child processes (only pass REQUIRED_KEYS); audit all spawn calls for env handling; document which providers need which credentials
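
A minimal version of that allowlist approach, assuming a plain `child_process` spawn; the actual spawn helpers live in @automaker/platform, and the command, arguments, and key names below are examples only.

```typescript
import { spawn } from 'node:child_process';

// Example allowlist: only variables the child actually needs, plus a safe baseline.
const REQUIRED_KEYS = ['PATH', 'HOME', 'OPENAI_API_KEY'];

function pickEnv(keys: string[]): NodeJS.ProcessEnv {
  const env: NodeJS.ProcessEnv = {};
  for (const key of keys) {
    if (process.env[key] !== undefined) {
      env[key] = process.env[key];
    }
  }
  return env;
}

// The child sees only the allowlisted variables instead of the full process.env.
const child = spawn('some-provider-cli', ['run'], { env: pickEnv(REQUIRED_KEYS) });
```
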
**Unvalidated Provider Tool Input:**

- Risk: Tool input from CLI providers (Cursor, Copilot, Codex) is only partially validated through `Record<string, unknown>` patterns; execution context could be escaped
- Files: `apps/server/src/providers/codex-provider.ts` (lines 506-543), `apps/server/src/providers/tool-normalization.ts`
- Current mitigation: Status enums validated; tool names checked against allow-lists in some providers
- Recommendations: Implement comprehensive schema validation for all tool inputs before execution; use zod or similar for runtime validation; add security tests for injection patterns

**API Key Storage in Settings Files:**

- Risk: API keys stored in plaintext in `~/.automaker/settings.json` and `data/settings.json`; file permissions may not be restricted
- Files: `apps/server/src/services/settings-service.ts`, uses `atomicWriteJson` without file permission enforcement
- Current mitigation: Limited by file system permissions; Electron mode has single-user access
- Recommendations: Encrypt sensitive settings fields (apiKeys, tokens); use OS credential stores (Keychain/Credential Manager) for production; add file permission checks on startup

## Performance Bottlenecks

**Synchronous Feature Loading at Startup:**

- Problem: All features loaded synchronously at project load; blocks UI with 1000+ features
- Files: `apps/server/src/services/feature-loader.ts` (line 230: Promise.all over a synchronously enumerated directory)
- Cause: Feature directory walk and JSON parsing is not paginated or lazy-loaded
- Improvement path: Implement lazy loading with pagination (load first 50, fetch more on scroll); add caching layer with TTL; move to background indexing; add feature count limits with warnings

**Auto-Mode Concurrency at Max Can Exceed Rate Limits:**

- Problem: maxConcurrency = 10 can quickly exhaust Claude API rate limits if all features execute simultaneously
- Files: `apps/server/src/services/auto-mode-service.ts` (line 2931: Promise.all for concurrent agents)
- Cause: No adaptive backoff; no API usage tracking before queuing; hint mentions reducing concurrency but doesn't enforce it
- Improvement path: Integrate with claude-usage-service to check remaining quota before starting features; implement exponential backoff on 429 errors; add per-model rate limit tracking
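
One way the backoff part could look, as a generic retry wrapper. The quota check against claude-usage-service is omitted, and the error-shape detection (a numeric `status` property) is an assumption.

```typescript
// Generic retry-with-exponential-backoff for rate-limited calls (illustrative sketch).
async function withRateLimitBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1_000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status; // assumed error shape
      const isRateLimit = status === 429;
      if (!isRateLimit || attempt >= maxRetries) throw err;
      const delay = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```
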
**Terminal Session Memory Leak Risk:**

- Problem: Terminal sessions accumulate in memory; expired sessions not cleaned up reliably
- Files: `apps/server/src/routes/terminal/common.ts` (line 66: cleanup runs every 5 minutes, but only for tokens)
- Cause: Cleanup interval is arbitrary; session map not bounded; no session lifespan limit
- Improvement path: Implement LRU eviction with max session count; reduce cleanup interval to 1 minute; add memory usage monitoring; auto-close idle sessions after 30 minutes

**Large File Content Loading Without Limits:**

- Problem: File content loaded entirely into memory; `describe-file.ts` truncates at 50KB but loads all content first
- Files: `apps/server/src/routes/context/routes/describe-file.ts` (line 128)
- Cause: Synchronous file read; no streaming; no check before reading large files
- Improvement path: Check file size before reading; stream large files; add file size warnings; implement chunked processing for analysis

## Fragile Areas

**Provider Factory Model Resolution:**

- Files: `apps/server/src/providers/provider-factory.ts`, `apps/server/src/providers/simple-query-service.ts`
- Why fragile: Each provider interprets model strings differently; no central registry; model aliases resolved at multiple layers (model-resolver, provider-specific maps, CLI validation)
- Safe modification: Add integration tests for each model alias per provider; create model capability matrix; centralize model validation before dispatch
- Test coverage: No dedicated tests; relies on E2E; no isolated unit tests for model resolution

**WebSocket Session Authentication:**

- Files: `apps/server/src/lib/auth.ts` (line 40: setInterval), `apps/server/src/index.ts` (token validation per message)
- Why fragile: Session tokens generated and validated at multiple points; no single source of truth; expiration is not atomic
- Safe modification: Add tests for token expiration edge cases; ensure cleanup removes all references; log all auth failures
- Test coverage: Auth middleware tested, but not session lifecycle

**Auto-Mode Feature State Machine:**

- Files: `apps/server/src/services/auto-mode-service.ts` (lines 465-600)
- Why fragile: Multiple states (running, queued, completed, error) managed across different methods; no explicit state transition validation; error recovery is defensive (catches all, logs, continues)
- Safe modification: Create explicit state enum with valid transitions; add invariant checks; unit test state transitions with all error cases
- Test coverage: Gaps in error recovery paths; no tests for concurrent state changes
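
A sketch of what explicit transition validation could look like. The state names come from the list above; the transition map itself is a guess at reasonable rules, not the service's actual behavior.

```typescript
// Illustrative state-machine guard for auto-mode feature execution.
type FeatureState = 'queued' | 'running' | 'completed' | 'error';

const VALID_TRANSITIONS: Record<FeatureState, FeatureState[]> = {
  queued: ['running'],
  running: ['completed', 'error'],
  completed: [], // terminal
  error: ['queued'], // e.g. allow re-queueing after a failure
};

function assertTransition(from: FeatureState, to: FeatureState): void {
  if (!VALID_TRANSITIONS[from].includes(to)) {
    throw new Error(`Invalid auto-mode state transition: ${from} -> ${to}`);
  }
}
```
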
## Scaling Limits

**Feature Count Scalability:**

- Current capacity: ~1000 features tested; UI performance degrades beyond that and pagination becomes necessary
- Limit: 10K+ features cause >5s load times; memory usage ~100MB for metadata alone
- Scaling path: Implement feature database instead of file-per-feature; add ElasticSearch indexing for search; paginate API responses (50 per page); add feature archiving

**Concurrent Auto-Mode Executions:**

- Current capacity: maxConcurrency = 10 features; limited by Claude API rate limits
- Limit: Rate limit hits at ~4-5 simultaneous features with extended context (100K+ tokens)
- Scaling path: Implement token usage budgeting before feature start; queue features with estimated token cost; add provider-specific rate limit handling

**Terminal Session Count:**

- Current capacity: ~100 active terminal sessions per server
- Limit: Memory grows unbounded; no session count limit enforced
- Scaling path: Add max session count with least-recently-used eviction; implement session federation for distributed setup

**Worktree Disk Usage:**

- Current capacity: 10K worktrees (~20GB with typical repos)
- Limit: `.worktrees` directory grows without cleanup; old worktrees accumulate
- Scaling path: Add worktree TTL (delete if not used for 30 days); implement cleanup job; add quota warnings at 50/80% disk

## Dependencies at Risk

**node-pty Beta Version:**

- Risk: `node-pty@1.1.0-beta41` used for terminal emulation; beta status indicates possible instability
- Impact: Terminal features could break on minor platform changes; no guarantees on bug fixes
- Migration plan: Monitor releases for stable version; pin to specific commit if needed; test extensively on target platforms (macOS, Linux, Windows)

**@anthropic-ai/claude-agent-sdk 0.1.x:**

- Risk: Pre-1.0 version; SDK API may change in future releases; limited version stability guarantees
- Impact: Breaking changes could require significant refactoring; feature additions in SDK may not align with Automaker roadmap
- Migration plan: Pin to specific 0.1.x version; review SDK changelogs before upgrades; maintain SDK compatibility tests; consider fallback implementation for critical paths

**@openai/codex-sdk 0.77.x:**

- Risk: Codex model deprecated by OpenAI; SDK may be archived or unsupported
- Impact: Codex provider could become non-functional; error messages may not be actionable
- Migration plan: Monitor OpenAI roadmap for migration path; implement fallback to Claude for Codex requests; add deprecation warning in UI

**Express 5.2.x RC Stage:**

- Risk: Express 5 is still in release candidate phase (as of Node 22); full stability not guaranteed
- Impact: Minor version updates could include breaking changes; middleware compatibility issues possible
- Migration plan: Maintain compatibility layer for Express 5 API; test with latest major before release; document any version-specific workarounds

## Missing Critical Features

**Persistent Session Storage:**

- Problem: Agent conversation sessions stored only in-memory; restart loses all chat history
- Blocks: Long-running analysis across server restarts; session recovery not possible
- Impact: Users must re-run entire analysis if server restarts; lost productivity

**Rate Limit Awareness:**

- Problem: No tracking of API usage relative to rate limits before executing features
- Blocks: Predictable concurrent feature execution; users frequently hit rate limits unexpectedly
- Impact: Feature execution fails with cryptic rate limit errors; poor user experience

**Feature Dependency Visualization:**

- Problem: Dependency-resolver package exists but no UI to visualize or manage dependencies
- Blocks: Users cannot plan feature order; complex dependencies not visible
- Impact: Features implemented in wrong order; blocking dependencies missed

## Test Coverage Gaps

**CLI Provider Integration:**

- What's not tested: Actual CLI execution paths; environment setup; error recovery from CLI crashes
- Files: `apps/server/src/providers/cli-provider.ts`, `apps/server/src/lib/cli-detection.ts`
- Risk: Changes to CLI handling could break silently; detection logic not validated on target platforms
- Priority: High - affects all CLI-based providers (Cursor, Copilot, Codex)

**Cursor Provider Platform-Specific Paths:**

- What's not tested: Windows/Linux Cursor installation detection; version directory parsing; APPDATA environment variable handling
- Files: `apps/server/src/providers/cursor-provider.ts` (lines 267-498)
- Risk: Platform-specific bugs not caught; Cursor detection fails on non-standard installations
- Priority: High - Cursor is primary provider; platform differences critical

**Event Hook System State Changes:**

- What's not tested: Concurrent hook execution; cleanup on server shutdown; webhook delivery retries
- Files: `apps/server/src/services/event-hook-service.ts` (line 248: Promise.allSettled)
- Risk: Hooks may not execute in expected order; memory not cleaned up; webhooks lost on failure
- Priority: Medium - affects automation workflows

**Error Classification for New Providers:**

- What's not tested: Each provider's unique error patterns mapped to ErrorType enum; new provider errors not classified
- Files: `apps/server/src/lib/error-handler.ts` (lines 58-80), each provider error mapping
- Risk: User sees generic "unknown error" instead of actionable message; categorization regresses with new providers
- Priority: Medium - impacts user experience

**Feature State Corruption Scenarios:**

- What's not tested: Concurrent feature updates; partial writes with power loss; JSON parsing recovery
- Files: `apps/server/src/services/feature-loader.ts`, `@automaker/utils` (atomicWriteJson)
- Risk: Feature data corrupted on concurrent access; recovery incomplete; no validation before use
- Priority: High - data loss risk

---

_Concerns audit: 2026-01-27_

@@ -1,255 +0,0 @@

# Coding Conventions

**Analysis Date:** 2026-01-27

## Naming Patterns

**Files:**

- kebab-case for class/service files (the classes inside are PascalCase): `auto-mode-service.ts`, `feature-loader.ts`, `claude-provider.ts`
- kebab-case for route/handler directories: `auto-mode/`, `features/`, `event-history/`
- kebab-case for utility files: `secure-fs.ts`, `sdk-options.ts`, `settings-helpers.ts`
- kebab-case for React components: `card.tsx`, `ansi-output.tsx`, `count-up-timer.tsx`
- kebab-case for hooks: `use-board-background-settings.ts`, `use-responsive-kanban.ts`, `use-test-logs.ts`
- kebab-case for store files: `app-store.ts`, `auth-store.ts`, `setup-store.ts`
- Organized by functionality: `routes/features/routes/list.ts`, `routes/features/routes/get.ts`

**Functions:**

- camelCase for all function names: `createEventEmitter()`, `getAutomakerDir()`, `executeQuery()`
- Verb-first for action functions: `buildPrompt()`, `classifyError()`, `loadContextFiles()`, `atomicWriteJson()`
- Prefix with `use` for React hooks: `useBoardBackgroundSettings()`, `useAppStore()`, `useUpdateProjectSettings()`
- Private methods prefixed with underscore: `_deleteOrphanedImages()`, `_migrateImages()`

**Variables:**

- camelCase for constants and variables: `featureId`, `projectPath`, `modelId`, `tempDir`
- UPPER_SNAKE_CASE for global constants/enums: `DEFAULT_MAX_CONCURRENCY`, `DEFAULT_PHASE_MODELS`
- Meaningful naming over abbreviations: `featureDirectory` not `fd`, `featureImages` not `img`
- Boolean values prefixed with `is`: `isClaudeModel`, `isContainerized`, `isAutoLoginEnabled`

**Types:**

- PascalCase for interfaces and types: `Feature`, `ExecuteOptions`, `EventEmitter`, `ProviderConfig`
- Type files suffixed with `.d.ts`: `paths.d.ts`, `types.d.ts`
- Organized by domain: `src/store/types/`, `src/lib/`
- Re-export pattern from main package indexes: `export type { Feature };`

## Code Style

**Formatting:**

- Tool: Prettier 3.7.4
- Print width: 100 characters
- Tab width: 2 spaces
- Single quotes for strings
- Semicolons required
- Trailing commas: es5 (trailing in arrays/objects, not in params)
- Arrow functions always include parentheses: `(x) => x * 2`
- Line endings: LF (Unix)
- Bracket spacing: `{ key: value }`

**Linting:**

- Tool: ESLint (flat config in `apps/ui/eslint.config.mjs`)
- TypeScript ESLint plugin for `.ts`/`.tsx` files
- Recommended configs: `@eslint/js`, `@typescript-eslint/recommended`
- Unused variables warning with exception for parameters starting with `_`
- `@ts-ignore` suppressions are allowed only when accompanied by a description
- `@typescript-eslint/no-explicit-any` is warn-level (allow with caution)

## Import Organization

**Order:**

1. Node.js standard library: `import fs from 'fs/promises'`, `import path from 'path'`
2. Third-party packages: `import { describe, it } from 'vitest'`, `import { Router } from 'express'`
3. Shared packages (monorepo): `import type { Feature } from '@automaker/types'`, `import { createLogger } from '@automaker/utils'`
4. Local relative imports: `import { FeatureLoader } from './feature-loader.js'`, `import * as secureFs from '../lib/secure-fs.js'`
5. Type imports: separated with `import type { ... } from`
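
Put together, a typical server-side file header would look something like this; the specific modules are just the examples already listed above.

```typescript
// 1. Node.js standard library
import path from 'path';
import fs from 'fs/promises';

// 2. Third-party packages
import { Router } from 'express';

// 3. Shared monorepo packages (type-only imports use `import type`)
import type { Feature } from '@automaker/types';
import { createLogger } from '@automaker/utils';

// 4. Local relative imports (note the .js extension for ESM)
import { FeatureLoader } from './feature-loader.js';
```
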
**Path Aliases:**

- `@/` - resolves to `./src` in both UI (`apps/ui/`) and server (`apps/server/`)
- Shared packages prefixed with `@automaker/`:
  - `@automaker/types` - core TypeScript definitions
  - `@automaker/utils` - logging, errors, utilities
  - `@automaker/prompts` - AI prompt templates
  - `@automaker/platform` - path management, security, processes
  - `@automaker/model-resolver` - model alias resolution
  - `@automaker/dependency-resolver` - feature dependency ordering
  - `@automaker/git-utils` - git operations
- Extensions: `.js` extensions used in import paths (ESM)

**Import Rules:**

- Always import from shared packages, never from old paths
- No circular dependencies between layers
- Services import from providers and utilities
- Routes import from services
- Shared packages have strict dependency hierarchy (types → utils → platform → git-utils → server/ui)

## Error Handling

**Patterns:**

- Use `try-catch` blocks for async operations: wraps feature execution, file operations, git commands
- Throw `new Error(message)` with descriptive messages: `throw new Error('already running')`, ``throw new Error(`Feature ${featureId} not found`)``
- Classify errors with `classifyError()` from `@automaker/utils` for categorization
- Log errors with context using `createLogger()`: includes error classification
- Return error info objects: `{ valid: false, errors: [...], warnings: [...] }`
- Validation returns structured result: `{ valid, errors, warnings }` from provider `validateConfig()`
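
The two structured shapes mentioned above, written out with a small helper. These are approximations for illustration; the canonical definitions live in @automaker/types and the provider code.

```typescript
// Approximate shapes of the structured results described above.
interface ValidationResult {
  valid: boolean;
  errors: string[];
  warnings: string[];
}

interface ErrorResponse {
  success: false;
  error: { type: string; message: string; code?: string };
  details?: unknown;
}

// Typical route-handler usage: classify, log, then return the structured envelope.
function toErrorResponse(type: string, message: string, code?: string): ErrorResponse {
  return { success: false, error: { type, message, code } };
}
```
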
**Error Types:**

- Authentication errors: distinguish from validation/runtime errors
- Path validation errors: caught by middleware in Express routes
- File system errors: logged and recovery attempted with backups
- SDK/API errors: classified and wrapped with context
- Abort/cancellation errors: handled without stack traces (graceful shutdown)

**Error Messages:**

- Descriptive and actionable: not vague error codes
- Include context when helpful: file paths, feature IDs, model names
- User-friendly messages via `getUserFriendlyErrorMessage()` for client display

## Logging

**Framework:**

- Built-in `createLogger()` from `@automaker/utils`
- Each module creates logger: `const logger = createLogger('ModuleName')`
- Logger functions: `info()`, `warn()`, `error()`, `debug()`

**Patterns:**

- Log operation start and completion for significant operations
- Log warnings for non-critical issues: file deletion failures, missing optional configs
- Log errors with full error object: `logger.error('operation failed', error)`
- Use module name as logger context: `createLogger('AutoMode')`, `createLogger('HttpClient')`
- Avoid logging sensitive data (API keys, passwords)
- No console.log in production code - use logger

**What to Log:**

- Feature execution start/completion
- Error classification and recovery attempts
- File operations (create, delete, migrate)
- API calls and responses (in debug mode)
- Async operation start/end
- Warnings for deprecated patterns

## Comments

**When to Comment:**

- Complex algorithms or business logic: explain the "why" not the "what"
- Integration points: explain how modules communicate
- Workarounds: explain the constraint that made the workaround necessary
- Non-obvious performance implications
- Edge cases and their handling

**JSDoc/TSDoc:**

- Used for public functions and classes
- Document parameters with `@param`
- Document return types with `@returns`
- Document exceptions with `@throws`
- Used for service classes: `/**\n * Module description\n * Manages: ...\n */`
- Not required for simple getters/setters

**Example JSDoc Pattern:**

```typescript
/**
 * Delete images that were removed from a feature
 */
private async deleteOrphanedImages(
  projectPath: string,
  oldPaths: Array<string>,
  newPaths: Array<string>
): Promise<void> {
  // Implementation
}
```

## Function Design

**Size:**

- Keep functions under 100 lines when possible
- Large services split into multiple related methods
- Private helper methods extracted for complex logic

**Parameters:**

- Use destructuring for object parameters with multiple properties
- Document parameter types with TypeScript types
- Optional parameters marked with `?`
- Use `Record<string, unknown>` for flexible object parameters

**Return Values:**

- Explicit return types required for all public functions
- Return structured objects for multiple values
- Use `Promise<T>` for async functions
- Async generators use `AsyncGenerator<T>` for streaming responses
- Never implicitly return `undefined` (explicit return or throw)

## Module Design

**Exports:**

- Default export for class instantiation: `export default class FeatureLoader {}`
- Named exports for functions: `export function createEventEmitter() {}`
- Type exports separated: `export type { Feature };`
- Barrel files (index.ts) re-export from module

**Barrel Files:**

- Used in routes: `routes/features/index.ts` creates router and exports
- Used in stores: `store/index.ts` exports all store hooks
- Pattern: group related exports for easier importing

**Service Classes:**

- Instantiated once and dependency injected
- Public methods for API surface
- Private methods prefixed with `_`
- No static methods - prefer instances or functions
- Constructor takes dependencies: `constructor(config?: ProviderConfig)`

**Provider Pattern:**

- Abstract base class: `BaseProvider` with abstract methods
- Concrete implementations: `ClaudeProvider`, `CodexProvider`, `CursorProvider`
- Common interface: `executeQuery()`, `detectInstallation()`, `validateConfig()`
- Factory for instantiation: `ProviderFactory.create()`
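
In outline, the pattern looks roughly like this. The method signatures, return types, and factory dispatch rule are simplified guesses; the real base class is in `apps/server/src/providers/base-provider.ts`.

```typescript
// Simplified sketch of the provider pattern; real signatures differ.
interface ValidationResult {
  valid: boolean;
  errors: string[];
  warnings: string[];
}

abstract class BaseProvider {
  constructor(protected readonly config?: Record<string, unknown>) {}

  abstract executeQuery(prompt: string): AsyncGenerator<unknown>; // streams messages
  abstract detectInstallation(): Promise<boolean>;
  abstract validateConfig(): ValidationResult;
}

class ClaudeProvider extends BaseProvider {
  async *executeQuery(prompt: string): AsyncGenerator<unknown> {
    // ...would call the Claude Agent SDK and yield streamed messages
    yield { type: 'text', text: `echo: ${prompt}` }; // placeholder
  }
  async detectInstallation(): Promise<boolean> {
    return true;
  }
  validateConfig(): ValidationResult {
    return { valid: true, errors: [], warnings: [] };
  }
}

// The factory picks an implementation from the model ID (dispatch rule is illustrative).
class ProviderFactory {
  static create(modelId: string): BaseProvider {
    if (modelId.startsWith('claude')) return new ClaudeProvider();
    throw new Error(`No provider registered for model: ${modelId}`);
  }
}
```
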
## TypeScript Specific

**Strict Mode:** Always enabled globally

- `strict: true` in all tsconfigs
- No implicit `any` - declare types explicitly
- No optional chaining on base types without narrowing

**Type Definitions:**

- Interface for shapes: `interface Feature { ... }`
- Type for unions/aliases: `type ModelAlias = 'haiku' | 'sonnet' | 'opus'`
- Type guards for narrowing: `if (typeof x === 'string') { ... }`
- Generic types for reusable patterns: `EventCallback<T>`

**React Specific (UI):**

- Functional components only
- React 19 with hooks
- Type props interface: `interface CardProps extends React.ComponentProps<'div'> { ... }`
- Zustand stores for state management
- Custom hooks for shared logic

---

_Convention analysis: 2026-01-27_

@@ -1,232 +0,0 @@

# External Integrations

**Analysis Date:** 2026-01-27

## APIs & External Services

**AI/LLM Providers:**

- Claude (Anthropic)
  - SDK: `@anthropic-ai/claude-agent-sdk` (0.1.76)
  - Auth: `ANTHROPIC_API_KEY` environment variable or stored credentials
  - Features: Extended thinking, vision/images, tools, streaming
  - Implementation: `apps/server/src/providers/claude-provider.ts`
  - Models: Opus 4.5, Sonnet 4, Haiku 4.5, and legacy models
  - Custom endpoints: `ANTHROPIC_BASE_URL` (optional)

- GitHub Copilot
  - SDK: `@github/copilot-sdk` (0.1.16)
  - Auth: GitHub OAuth (via `gh` CLI) or `GITHUB_TOKEN` environment variable
  - Features: Tools, streaming, runtime model discovery
  - Implementation: `apps/server/src/providers/copilot-provider.ts`
  - CLI detection: Searches for Copilot CLI binary
  - Models: Dynamic discovery via `copilot models list`

- OpenAI Codex/GPT-4
  - SDK: `@openai/codex-sdk` (0.77.0)
  - Auth: `OPENAI_API_KEY` environment variable or stored credentials
  - Features: Extended thinking, tools, sandbox execution
  - Implementation: `apps/server/src/providers/codex-provider.ts`
  - Execution modes: CLI (with sandbox) or SDK (direct API)
  - Models: Dynamic discovery via Codex CLI or SDK

- Google Gemini
  - Implementation: `apps/server/src/providers/gemini-provider.ts`
  - Features: Vision support, tools, streaming

- OpenCode (AWS/Azure/other)
  - Implementation: `apps/server/src/providers/opencode-provider.ts`
  - Supports: Amazon Bedrock, Azure models, local models
  - Features: Flexible provider architecture

- Cursor Editor
  - Implementation: `apps/server/src/providers/cursor-provider.ts`
  - Features: Integration with Cursor IDE

**Model Context Protocol (MCP):**

- SDK: `@modelcontextprotocol/sdk` (1.25.2)
- Purpose: Connect AI agents to external tools and data sources
- Implementation: `apps/server/src/services/mcp-test-service.ts`, `apps/server/src/routes/mcp/`
- Configuration: Per-project in `.automaker/` directory

## Data Storage

**Databases:**

- None - This codebase does NOT use traditional databases (SQL/NoSQL)
- All data stored as files in local filesystem

**File Storage:**

- Local filesystem only
- Locations:
  - `.automaker/` - Project-specific data (features, context, settings)
  - `./data/` or `DATA_DIR` env var - Global data (settings, credentials, sessions)
- Secure file operations: `@automaker/platform` exports `secureFs` for restricted file access

**Caching:**

- In-memory caches for:
  - Model lists (Copilot, Codex runtime discovery)
  - Feature metadata
  - Project specifications
- No distributed/persistent caching system

## Authentication & Identity

**Auth Provider:**

- Custom implementation (no third-party provider)
- Authentication methods:
  1. Claude Max Plan (OAuth via Anthropic CLI)
  2. API Key mode (ANTHROPIC_API_KEY)
  3. Custom provider profiles with API keys
  4. Token-based session authentication for WebSocket

**Implementation:**

- `apps/server/src/lib/auth.ts` - Auth middleware
- `apps/server/src/routes/auth/` - Auth routes
- Session tokens for WebSocket connections
- Credential storage in `./data/credentials.json` (encrypted/protected)

## Monitoring & Observability

**Error Tracking:**

- None - No automatic error reporting service integrated
- Custom error classification: `@automaker/utils` exports `classifyError()`
- User-friendly error messages: `getUserFriendlyErrorMessage()`

**Logs:**

- Console logging with configurable levels
- Logger: `@automaker/utils` exports `createLogger()`
- Log levels: ERROR, WARN, INFO, DEBUG
- Environment: `LOG_LEVEL` env var (optional)
- Storage: Logs output to console/stdout (no persistent logging to files)

**Usage Tracking:**

- Claude API usage: `apps/server/src/services/claude-usage-service.ts`
- Codex API usage: `apps/server/src/services/codex-usage-service.ts`
- Tracks: Tokens, costs, rates

## CI/CD & Deployment

**Hosting:**

- Local development: Node.js server + Vite dev server
- Desktop: Electron application (macOS, Windows, Linux)
- Web: Express server deployed to any Node.js host

**CI Pipeline:**

- GitHub Actions likely (`.github/workflows/` present in repo)
- Testing: Playwright E2E, Vitest unit tests
- Linting: ESLint
- Formatting: Prettier

**Build Process:**

- `npm run build:packages` - Build shared packages
- `npm run build` - Build web UI
- `npm run build:electron` - Build Electron apps (platform-specific)
- Electron Builder handles code signing and distribution

## Environment Configuration

**Required env vars:**

- `ANTHROPIC_API_KEY` - For Claude provider (or provide in settings)
- `OPENAI_API_KEY` - For Codex provider (optional)
- `GITHUB_TOKEN` - For GitHub operations (optional)

**Optional env vars:**

- `PORT` - Server port (default 3008)
- `HOST` - Server bind address (default 0.0.0.0)
- `HOSTNAME` - Public hostname (default localhost)
- `DATA_DIR` - Data storage directory (default ./data)
- `ANTHROPIC_BASE_URL` - Custom Claude endpoint
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to directory
- `AUTOMAKER_MOCK_AGENT` - Enable mock agent for testing
- `AUTOMAKER_AUTO_LOGIN` - Skip login prompt in dev

**Secrets location:**

- Runtime: Environment variables (`process.env`)
- Stored: `./data/credentials.json` (file-based)
- Retrieval: `apps/server/src/services/settings-service.ts`

## Webhooks & Callbacks

**Incoming:**

- WebSocket connections for real-time agent event streaming
- GitHub webhook routes (optional): `apps/server/src/routes/github/`
- Terminal WebSocket connections: `apps/server/src/routes/terminal/`

**Outgoing:**

- GitHub PRs: `apps/server/src/routes/worktree/routes/create-pr.ts`
- Git operations: `@automaker/git-utils` handles commits, pushes
- Terminal output streaming via WebSocket to clients
- Event hooks: `apps/server/src/services/event-hook-service.ts`

## Credential Management

**API Keys Storage:**

- File: `./data/credentials.json`
- Format: JSON with nested structure for different providers

```json
{
  "apiKeys": {
    "anthropic": "sk-...",
    "openai": "sk-...",
    "github": "ghp_..."
  }
}
```

- Access: `SettingsService.getCredentials()` from `apps/server/src/services/settings-service.ts`
- Security: File permissions should restrict to current user only

**Profile/Provider Configuration:**

- File: `./data/settings.json` (global) or `.automaker/settings.json` (per-project)
- Stores: Alternative provider profiles, model mappings, sandbox settings
- Types: `ClaudeApiProfile`, `ClaudeCompatibleProvider` from `@automaker/types`

## Third-Party Service Integration Points

**Git/GitHub:**

- `@automaker/git-utils` - Git operations (worktrees, commits, diffs)
- Codex/Cursor providers can create GitHub PRs
- GitHub CLI (`gh`) detection for Copilot authentication

**Terminal Access:**

- `node-pty` (1.1.0-beta41) - Pseudo-terminal interface
- `TerminalService` manages terminal sessions
- WebSocket streaming to frontend

**AI Models - Multi-Provider Abstraction:**

- `BaseProvider` interface: `apps/server/src/providers/base-provider.ts`
- Factory pattern: `apps/server/src/providers/provider-factory.ts`
- Allows swapping providers without changing agent logic
- All providers implement: `executeQuery()`, `detectInstallation()`, `getAvailableModels()`

**Process Spawning:**

- `@automaker/platform` exports `spawnProcess()`, `spawnJSONLProcess()`
- Codex CLI execution: JSONL output parsing
- Copilot CLI execution: Subprocess management
- Cursor IDE interaction: Process spawning for tool execution

---

_Integration audit: 2026-01-27_

@@ -1,230 +0,0 @@
|
||||
# Technology Stack
|
||||
|
||||
**Analysis Date:** 2026-01-27
|
||||
|
||||
## Languages
|
||||
|
||||
**Primary:**
|
||||
|
||||
- TypeScript 5.9.3 - Used across all packages, apps, and configuration
|
||||
- JavaScript (Node.js) - Runtime execution for scripts and tooling
|
||||
|
||||
**Secondary:**
|
||||
|
||||
- YAML 2.7.0 - Configuration files
|
||||
- CSS/Tailwind CSS 4.1.18 - Frontend styling
|
||||
|
||||
## Runtime
|
||||
|
||||
**Environment:**
|
||||
|
||||
- Node.js 22.x (>=22.0.0 <23.0.0) - Required version, specified in `.nvmrc`
|
||||
|
||||
**Package Manager:**
|
||||
|
||||
- npm - Monorepo workspace management via npm workspaces
|
||||
- Lockfile: `package-lock.json` (present)
|
||||
|
||||
## Frameworks
|
||||
|
||||
**Core - Frontend:**
|
||||
|
||||
- React 19.2.3 - UI framework with hooks and concurrent features
|
||||
- Vite 7.3.0 - Build tool and dev server (`apps/ui/vite.config.ts`)
|
||||
- Electron 39.2.7 - Desktop application runtime (`apps/ui/package.json`)
|
||||
- TanStack Router 1.141.6 - File-based routing (React)
|
||||
- Zustand 5.0.9 - State management (lightweight alternative to Redux)
|
||||
- TanStack Query (React Query) 5.90.17 - Server state management
|
||||
|
||||
**Core - Backend:**
|
||||
|
||||
- Express 5.2.1 - HTTP server framework (`apps/server/package.json`)
|
||||
- WebSocket (ws) 8.18.3 - Real-time bidirectional communication
|
||||
- Claude Agent SDK (@anthropic-ai/claude-agent-sdk) 0.1.76 - AI provider integration
|
||||
|
||||
**Testing:**
|
||||
|
||||
- Playwright 1.57.0 - End-to-end testing (`apps/ui` E2E tests)
|
||||
- Vitest 4.0.16 - Unit testing framework (runs on all packages and server)
|
||||
- @vitest/ui 4.0.16 - Visual test runner UI
|
||||
- @vitest/coverage-v8 4.0.16 - Code coverage reporting
|
||||
|
||||
**Build/Dev:**
|
||||
|
||||
- electron-builder 26.0.12 - Electron app packaging and distribution
|
||||
- @vitejs/plugin-react 5.1.2 - Vite React support
|
||||
- vite-plugin-electron 0.29.0 - Vite plugin for Electron main process
|
||||
- vite-plugin-electron-renderer 0.14.6 - Vite plugin for Electron renderer
|
||||
- ESLint 9.39.2 - Code linting (`apps/ui`)
|
||||
- @typescript-eslint/eslint-plugin 8.50.0 - TypeScript ESLint rules
|
||||
- Prettier 3.7.4 - Code formatting (root-level config)
|
||||
- Tailwind CSS 4.1.18 - Utility-first CSS framework
|
||||
- @tailwindcss/vite 4.1.18 - Tailwind Vite integration
|
||||
|
||||
**UI Components & Libraries:**
|
||||
|
||||
- Radix UI - Unstyled accessible component library (@radix-ui packages)
|
||||
- react-dropdown-menu 2.1.16
|
||||
- react-dialog 1.1.15
|
||||
- react-select 2.2.6
|
||||
- react-tooltip 1.2.8
|
||||
- react-tabs 1.1.13
|
||||
- react-collapsible 1.1.12
|
||||
- react-checkbox 1.3.3
|
||||
- react-radio-group 1.3.8
|
||||
- react-popover 1.1.15
|
||||
- react-slider 1.3.6
|
||||
- react-switch 1.2.6
|
||||
- react-scroll-area 1.2.10
|
||||
- react-label 2.1.8
|
||||
- Lucide React 0.562.0 - Icon library
|
||||
- Geist 1.5.1 - Design system UI library
|
||||
- Sonner 2.0.7 - Toast notifications
|
||||
|
||||
**Code Editor & Terminal:**
|
||||
|
||||
- @uiw/react-codemirror 4.25.4 - Code editor React component
|
||||
- CodeMirror (@codemirror packages) 6.x - Editor toolkit
|
||||
- xterm.js (@xterm/xterm) 5.5.0 - Terminal emulator
|
||||
- @xterm/addon-fit 0.10.0 - Fit addon for terminal
|
||||
- @xterm/addon-search 0.15.0 - Search addon for terminal
|
||||
- @xterm/addon-web-links 0.11.0 - Web links addon
|
||||
- @xterm/addon-webgl 0.18.0 - WebGL renderer for terminal
|
||||
|
||||
**Diagram/Graph Visualization:**
|
||||
|
||||
- @xyflow/react 12.10.0 - React flow diagram library
|
||||
- dagre 0.8.5 - Graph layout algorithms
|
||||
|
||||
**Markdown/Content Rendering:**
|
||||
|
||||
- react-markdown 10.1.0 - Markdown parser and renderer
|
||||
- remark-gfm 4.0.1 - GitHub Flavored Markdown support
|
||||
- rehype-raw 7.0.0 - Raw HTML support in markdown
|
||||
- rehype-sanitize 6.0.0 - HTML sanitization
|
||||
|
||||
**Data Validation & Parsing:**
|
||||
|
||||
- zod 3.24.1 or 4.0.0 - Schema validation and TypeScript type inference
|
||||
|
||||
**Utilities:**
|
||||
|
||||
- class-variance-authority 0.7.1 - CSS variant utilities
|
||||
- clsx 2.1.1 - Conditional className utility
|
||||
- cmdk 1.1.1 - Command menu/palette
|
||||
- tailwind-merge 3.4.0 - Tailwind CSS conflict resolution
|
||||
- usehooks-ts 3.1.1 - TypeScript React hooks
|
||||
- @dnd-kit (drag-and-drop) 6.3.1 - Drag and drop library
|
||||
|
||||
**Font Libraries:**
|
||||
|
||||
- @fontsource - Web font packages (Cascadia Code, Fira Code, IBM Plex, Inconsolata, Inter, etc.)
|
||||
|
||||
**Development Utilities:**
|
||||
|
||||
- cross-spawn 7.0.6 - Cross-platform process spawning
|
||||
- dotenv 17.2.3 - Environment variable loading
|
||||
- tsx 4.21.0 - TypeScript execution for Node.js
|
||||
- tree-kill 1.2.2 - Process tree killer utility
|
||||
- node-pty 1.1.0-beta41 - PTY/terminal interface for Node.js
|
||||
|
||||
## Key Dependencies
|
||||
|
||||
**Critical - AI/Agent Integration:**
|
||||
|
||||
- @anthropic-ai/claude-agent-sdk 0.1.76 - Core Claude AI provider
|
||||
- @github/copilot-sdk 0.1.16 - GitHub Copilot integration
|
||||
- @openai/codex-sdk 0.77.0 - OpenAI Codex/GPT-4 integration
|
||||
- @modelcontextprotocol/sdk 1.25.2 - Model Context Protocol servers
|
||||
|
||||
**Infrastructure - Internal Packages:**
|
||||
|
||||
- @automaker/types 1.0.0 - Shared TypeScript type definitions
|
||||
- @automaker/utils 1.0.0 - Logging, error handling, utilities
|
||||
- @automaker/platform 1.0.0 - Path management, security, process spawning
|
||||
- @automaker/prompts 1.0.0 - AI prompt templates
|
||||
- @automaker/model-resolver 1.0.0 - Claude model alias resolution
|
||||
- @automaker/dependency-resolver 1.0.0 - Feature dependency ordering
|
||||
- @automaker/git-utils 1.0.0 - Git operations & worktree management
|
||||
- @automaker/spec-parser 1.0.0 - Project specification parsing
|
||||
|
||||
**Server Utilities:**
|
||||
|
||||
- express 5.2.1 - Web framework
|
||||
- cors 2.8.5 - CORS middleware
|
||||
- morgan 1.10.1 - HTTP request logger
|
||||
- cookie-parser 1.4.7 - Cookie parsing middleware
|
||||
- yaml 2.7.0 - YAML parsing and generation
|
||||
|
||||
**Type Definitions:**
|
||||
|
||||
- @types/express 5.0.6
|
||||
- @types/node 22.19.3
|
||||
- @types/react 19.2.7
|
||||
- @types/react-dom 19.2.3
|
||||
- @types/dagre 0.7.53
|
||||
- @types/ws 8.18.1
|
||||
- @types/cookie 0.6.0
|
||||
- @types/cookie-parser 1.4.10
|
||||
- @types/cors 2.8.19
|
||||
- @types/morgan 1.9.10
|
||||
|
||||
**Optional Dependencies (Platform-specific):**
|
||||
|
||||
- lightningcss (various platforms) 1.29.2 - CSS parser (alternative to PostCSS)
|
||||
- dmg-license 1.0.11 - DMG license dialog for macOS
|
||||
|
||||
## Configuration
|
||||
|
||||
**Environment:**
|
||||
|
||||
- `.env` and `.env.example` files in `apps/server/` and `apps/ui/`
|
||||
- `dotenv` library loads variables from `.env` files
|
||||
- Key env vars:
|
||||
- `ANTHROPIC_API_KEY` - Claude API authentication
|
||||
- `OPENAI_API_KEY` - OpenAI/Codex authentication
|
||||
- `GITHUB_TOKEN` - GitHub API access
|
||||
- `ANTHROPIC_BASE_URL` - Custom Claude endpoint (optional)
|
||||
- `HOST` - Server bind address (default: 0.0.0.0)
|
||||
- `HOSTNAME` - Hostname for URLs (default: localhost)
|
||||
- `PORT` - Server port (default: 3008)
|
||||
- `DATA_DIR` - Data storage directory (default: ./data)
|
||||
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations
|
||||
- `AUTOMAKER_MOCK_AGENT` - Enable mock agent for testing
|
||||
- `AUTOMAKER_AUTO_LOGIN` - Skip login in dev (disabled in production)
|
||||
- `VITE_HOSTNAME` - Frontend API hostname
|
||||
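A minimal `apps/server/.env` sketch using only variables from the list above; every value is a placeholder:

```bash
# apps/server/.env (sketch; every value here is a placeholder)
ANTHROPIC_API_KEY=sk-ant-...
PORT=3008
HOST=0.0.0.0
HOSTNAME=localhost
DATA_DIR=./data
ALLOWED_ROOT_DIRECTORY=/path/to/projects
AUTOMAKER_MOCK_AGENT=false
```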
|
||||
**Build:**
|
||||
|
||||
- `apps/ui/electron-builder.config.json` or `apps/ui/package.json` build config
|
||||
- Electron builder targets:
|
||||
- macOS: DMG and ZIP
|
||||
- Windows: NSIS installer
|
||||
- Linux: AppImage, DEB, RPM
|
||||
- Vite config: `apps/ui/vite.config.ts`, `apps/server/tsconfig.json`
|
||||
- TypeScript config: `tsconfig.json` files in each package
|
||||
|
||||
## Platform Requirements
|
||||
|
||||
**Development:**
|
||||
|
||||
- Node.js 22.x
|
||||
- npm (included with Node.js)
|
||||
- Git (for worktree operations)
|
||||
- Python (optional, for some dev scripts)
|
||||
|
||||
**Production:**
|
||||
|
||||
- Electron desktop app: Windows, macOS, Linux
|
||||
- Web browser: Modern Chromium-based browsers
|
||||
- Server: Any platform supporting Node.js 22.x
|
||||
|
||||
**Deployment Target:**
|
||||
|
||||
- Local desktop (Electron)
|
||||
- Local web server (Express + Vite)
|
||||
- Remote server deployment (Docker, systemd, or other orchestration)
|
||||
|
||||
---
|
||||
|
||||
_Stack analysis: 2026-01-27_
|
||||
@@ -1,340 +0,0 @@
|
||||
# Codebase Structure
|
||||
|
||||
**Analysis Date:** 2026-01-27
|
||||
|
||||
## Directory Layout
|
||||
|
||||
```
|
||||
automaker/
|
||||
├── apps/ # Application packages
|
||||
│ ├── ui/ # React + Electron frontend (port 3007)
|
||||
│ │ ├── src/
|
||||
│ │ │ ├── main.ts # Electron/Vite entry point
|
||||
│ │ │ ├── app.tsx # Root React component (splash, router)
|
||||
│ │ │ ├── renderer.tsx # Electron renderer entry
|
||||
│ │ │ ├── routes/ # TanStack Router file-based routes
|
||||
│ │ │ ├── components/ # React components (views, dialogs, UI, layout)
|
||||
│ │ │ ├── store/ # Zustand state management
|
||||
│ │ │ ├── hooks/ # Custom React hooks
|
||||
│ │ │ ├── lib/ # Utilities (API client, electron, queries, etc.)
|
||||
│ │ │ ├── electron/ # Electron main & preload process files
|
||||
│ │ │ ├── config/ # UI configuration (fonts, themes, routes)
|
||||
│ │ │ └── styles/ # CSS and theme files
|
||||
│ │ ├── public/ # Static assets
|
||||
│ │ └── tests/ # E2E Playwright tests
|
||||
│ │
|
||||
│ └── server/ # Express backend (port 3008)
|
||||
│ ├── src/
|
||||
│ │ ├── index.ts # Express app initialization, route mounting
|
||||
│ │ ├── routes/ # REST API endpoints (30+ route folders)
|
||||
│ │ ├── services/ # Business logic services
|
||||
│ │ ├── providers/ # AI model provider implementations
|
||||
│ │ ├── lib/ # Utilities (events, auth, helpers, etc.)
|
||||
│ │ ├── middleware/ # Express middleware
|
||||
│ │ └── types/ # Server-specific type definitions
|
||||
│ └── tests/ # Unit tests (Vitest)
|
||||
│
|
||||
├── libs/ # Shared npm packages (@automaker/*)
|
||||
│ ├── types/ # @automaker/types (no dependencies)
|
||||
│ │ └── src/
|
||||
│ │ ├── index.ts # Main export with all type definitions
|
||||
│ │ ├── feature.ts # Feature, FeatureStatus, etc.
|
||||
│ │ ├── provider.ts # Provider interfaces, model definitions
|
||||
│ │ ├── settings.ts # Global and project settings types
|
||||
│ │ ├── event.ts # Event types for real-time updates
|
||||
│ │ ├── session.ts # AgentSession, conversation types
|
||||
│ │ ├── model*.ts # Model-specific types (cursor, codex, gemini, etc.)
|
||||
│ │ └── ... 20+ more type files
|
||||
│ │
|
||||
│ ├── utils/ # @automaker/utils (logging, errors, images, context)
|
||||
│ │ └── src/
|
||||
│ │ ├── logger.ts # createLogger() with LogLevel enum
|
||||
│ │ ├── errors.ts # classifyError(), error types
|
||||
│ │ ├── image-utils.ts # Image processing, base64 encoding
|
||||
│ │ ├── context-loader.ts # loadContextFiles() for AI prompts
|
||||
│ │ └── ... more utilities
|
||||
│ │
|
||||
│ ├── platform/ # @automaker/platform (paths, security, OS)
|
||||
│ │ └── src/
|
||||
│ │ ├── index.ts # Path getters (getFeatureDir, getFeaturesDir, etc.)
|
||||
│ │ ├── secure-fs.ts # Secure filesystem operations
|
||||
│ │ └── config/ # Claude auth detection, allowed paths
|
||||
│ │
|
||||
│ ├── prompts/ # @automaker/prompts (AI prompt templates)
|
||||
│ │ └── src/
|
||||
│ │ ├── index.ts # Main prompts export
|
||||
│ │ └── *-prompt.ts # Prompt templates for different features
|
||||
│ │
|
||||
│ ├── model-resolver/ # @automaker/model-resolver
|
||||
│ │ └── src/
|
||||
│ │ └── index.ts # resolveModelString() for model aliases
|
||||
│ │
|
||||
│ ├── dependency-resolver/ # @automaker/dependency-resolver
|
||||
│ │ └── src/
|
||||
│ │ └── index.ts # Resolve feature dependencies
|
||||
│ │
|
||||
│ ├── git-utils/ # @automaker/git-utils (git operations)
|
||||
│ │ └── src/
|
||||
│ │ ├── index.ts # getGitRepositoryDiffs(), worktree management
|
||||
│ │ └── ... git helpers
|
||||
│ │
|
||||
│ ├── spec-parser/ # @automaker/spec-parser
|
||||
│ │ └── src/
|
||||
│ │ └── ... spec parsing utilities
|
||||
│ │
|
||||
│ └── tsconfig.base.json # Base TypeScript config for all packages
|
||||
│
|
||||
├── .automaker/ # Project data directory (created by app)
|
||||
│ ├── features/ # Feature storage
|
||||
│ │ └── {featureId}/
|
||||
│ │ ├── feature.json # Feature metadata and content
|
||||
│ │ ├── agent-output.md # Agent execution results
|
||||
│ │ └── images/ # Feature images
|
||||
│ ├── context/ # Context files (CLAUDE.md, etc.)
|
||||
│ ├── settings.json # Per-project settings
|
||||
│ ├── spec.md # Project specification
|
||||
│ └── analysis.json # Project structure analysis
|
||||
│
|
||||
├── data/ # Global data directory (default, configurable)
|
||||
│ ├── settings.json # Global settings, profiles
|
||||
│ ├── credentials.json # Encrypted API keys
|
||||
│ ├── sessions-metadata.json # Chat session metadata
|
||||
│ └── agent-sessions/ # Conversation histories
|
||||
│
|
||||
├── .planning/ # Generated documentation by GSD orchestrator
|
||||
│ └── codebase/ # Codebase analysis documents
|
||||
│ ├── ARCHITECTURE.md # Architecture patterns and layers
|
||||
│ ├── STRUCTURE.md # This file
|
||||
│ ├── STACK.md # Technology stack
|
||||
│ ├── INTEGRATIONS.md # External API integrations
|
||||
│ ├── CONVENTIONS.md # Code style and naming
|
||||
│ ├── TESTING.md # Testing patterns
|
||||
│ └── CONCERNS.md # Technical debt and issues
|
||||
│
|
||||
├── .github/ # GitHub Actions workflows
|
||||
├── scripts/ # Build and utility scripts
|
||||
├── tests/ # Test data and utilities
|
||||
├── docs/ # Documentation
|
||||
├── package.json # Root workspace config
|
||||
├── package-lock.json # Lock file
|
||||
├── CLAUDE.md # Project instructions for Claude Code
|
||||
├── DEVELOPMENT_WORKFLOW.md # Development guidelines
|
||||
└── README.md # Project overview
|
||||
```
|
||||
|
||||
## Directory Purposes
|
||||
|
||||
**apps/ui/:**
|
||||
|
||||
- Purpose: React frontend for desktop (Electron) and web modes
|
||||
- Build system: Vite 7 with TypeScript
|
||||
- Styling: Tailwind CSS 4
|
||||
- State: Zustand 5 with API persistence
|
||||
- Routing: TanStack Router with file-based structure
|
||||
- Desktop: Electron 39 with preload IPC bridge
|
||||
|
||||
**apps/server/:**
|
||||
|
||||
- Purpose: Express backend API and service layer
|
||||
- Build system: TypeScript → JavaScript
|
||||
- Runtime: Node.js 18+
|
||||
- WebSocket: ws library for real-time streaming
|
||||
- Process management: node-pty for terminal isolation
|
||||
|
||||
**libs/types/:**
|
||||
|
||||
- Purpose: Central type definitions (no dependencies, fast import)
|
||||
- Used by: All other packages and apps
|
||||
- Pattern: Single namespace export from index.ts
|
||||
- Build: Compiled to ESM only
|
||||
|
||||
**libs/utils/:**
|
||||
|
||||
- Purpose: Shared utilities for logging, errors, file operations, image processing
|
||||
- Used by: Server, UI, other libraries
|
||||
- Notable: `createLogger()`, `classifyError()`, `loadContextFiles()`, `readImageAsBase64()`
|
||||
|
||||
**libs/platform/:**
|
||||
|
||||
- Purpose: OS-agnostic path management and security enforcement
|
||||
- Used by: Server services for file operations
|
||||
- Notable: Path normalization, allowed directory enforcement, Claude auth detection
|
||||
|
||||
**libs/prompts/:**
|
||||
|
||||
- Purpose: AI prompt templates injected into agent context
|
||||
- Used by: AgentService when executing features
|
||||
- Pattern: Function exports that return prompt strings
|
||||
|
||||
## Key File Locations
|
||||
|
||||
**Entry Points:**
|
||||
|
||||
**Server:**
|
||||
|
||||
- `apps/server/src/index.ts`: Express server initialization, route mounting, WebSocket setup
|
||||
|
||||
**UI (Web):**
|
||||
|
||||
- `apps/ui/src/main.ts`: Vite entry point
|
||||
- `apps/ui/src/app.tsx`: Root React component
|
||||
|
||||
**UI (Electron):**
|
||||
|
||||
- `apps/ui/src/main.ts`: Vite entry point
|
||||
- `apps/ui/src/electron/main-process.ts`: Electron main process
|
||||
- `apps/ui/src/preload.ts`: Electron preload script for IPC bridge
|
||||
|
||||
**Configuration:**
|
||||
|
||||
- `apps/server/src/index.ts`: PORT, HOST, HOSTNAME, DATA_DIR env vars
|
||||
- `apps/ui/src/config/`: Theme options, fonts, model aliases
|
||||
- `libs/types/src/settings.ts`: Settings schema
|
||||
- `.env.local`: Local development overrides (git-ignored)
|
||||
|
||||
**Core Logic:**
|
||||
|
||||
**Server:**
|
||||
|
||||
- `apps/server/src/services/agent-service.ts`: AI agent execution engine (31KB)
|
||||
- `apps/server/src/services/auto-mode-service.ts`: Feature batching and automation (216KB - largest)
|
||||
- `apps/server/src/services/feature-loader.ts`: Feature persistence and loading
|
||||
- `apps/server/src/services/settings-service.ts`: Settings management
|
||||
- `apps/server/src/providers/provider-factory.ts`: AI provider selection
|
||||
|
||||
**UI:**
|
||||
|
||||
- `apps/ui/src/store/app-store.ts`: Global state (84KB - largest frontend file)
|
||||
- `apps/ui/src/lib/http-api-client.ts`: API client with auth (92KB)
|
||||
- `apps/ui/src/components/views/board-view.tsx`: Kanban board (70KB)
|
||||
- `apps/ui/src/routes/__root.tsx`: Root layout with session init (32KB)
|
||||
|
||||
**Testing:**
|
||||
|
||||
**E2E Tests:**
|
||||
|
||||
- `apps/ui/tests/`: Playwright tests organized by feature area
|
||||
- `settings/`, `features/`, `projects/`, `agent/`, `utils/`, `context/`
|
||||
|
||||
**Unit Tests:**
|
||||
|
||||
- `libs/*/tests/`: Package-specific Vitest tests
|
||||
- `apps/server/src/tests/`: Server integration tests
|
||||
|
||||
**Test Config:**
|
||||
|
||||
- `vitest.config.ts`: Root Vitest configuration
|
||||
- `apps/ui/playwright.config.ts`: Playwright configuration
|
||||
|
||||
## Naming Conventions
|
||||
|
||||
**Files:**
|
||||
|
||||
- **Components:** kebab-case.tsx (e.g., `board-view.tsx`, `session-manager.tsx`)
- **Services:** kebab-case with `-service` suffix (e.g., `agent-service.ts`, `settings-service.ts`)
- **Hooks:** use-kebab-case.ts (e.g., `use-auto-mode.ts`, `use-settings-sync.ts`)
- **Utilities:** kebab-case.ts (e.g., `api-fetch.ts`, `log-parser.ts`)
- **Routes:** kebab-case with index.ts pattern (e.g., `routes/agent/index.ts`)
- **Tests:** `*.test.ts` or `*.spec.ts` (in each package's `tests/` directory)
|
||||
|
||||
**Directories:**
|
||||
|
||||
- **Feature domains:** kebab-case (e.g., `auto-mode/`, `event-history/`, `project-settings-view/`)
|
||||
- **Type categories:** kebab-case plural (e.g., `types/`, `services/`, `providers/`, `routes/`)
|
||||
- **Shared utilities:** kebab-case (e.g., `lib/`, `utils/`, `hooks/`)
|
||||
|
||||
**TypeScript:**
|
||||
|
||||
- **Types:** PascalCase (e.g., `Feature`, `AgentSession`, `ProviderMessage`)
|
||||
- **Interfaces:** PascalCase (e.g., `EventEmitter`, `ProviderFactory`)
|
||||
- **Enums:** PascalCase (e.g., `LogLevel`, `FeatureStatus`)
|
||||
- **Functions:** camelCase (e.g., `createLogger()`, `classifyError()`)
|
||||
- **Constants:** UPPER_SNAKE_CASE (e.g., `DEFAULT_TIMEOUT_MS`, `MAX_RETRIES`)
|
||||
- **Variables:** camelCase (e.g., `featureId`, `settingsService`)
|
||||
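A small illustrative snippet tying these conventions together; the shapes below are invented for illustration and are not the real definitions:

```typescript
// Illustrative only: invented shapes showing the naming rules above
const DEFAULT_TIMEOUT_MS = 30_000; // constant: UPPER_SNAKE_CASE

enum LogLevel { // enum: PascalCase
  Debug,
  Info,
}

interface AgentSession { // interface/type: PascalCase
  featureId: string; // property/variable: camelCase
  level: LogLevel;
}

function createLogger(scope: string): (message: string) => void { // function: camelCase
  return (message) => console.log(`[${scope}] ${message}`);
}

const settingsLogger = createLogger('SettingsService'); // variable: camelCase
settingsLogger('loaded');
```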
|
||||
## Where to Add New Code
|
||||
|
||||
**New Feature (end-to-end):**
|
||||
|
||||
- API Route: `apps/server/src/routes/{feature-name}/index.ts`
|
||||
- Service Logic: `apps/server/src/services/{feature-name}-service.ts`
|
||||
- UI Route: `apps/ui/src/routes/{feature-name}.tsx` (simple) or `{feature-name}/` (complex with subdir)
|
||||
- Store: `apps/ui/src/store/{feature-name}-store.ts` (if complex state)
|
||||
- Tests: `apps/ui/tests/{feature-name}/` or `apps/server/src/tests/`
|
||||
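A minimal sketch of the route-module factory pattern used under `apps/server/src/routes/`; the feature name and wiring below are hypothetical:

```typescript
// apps/server/src/routes/example-feature/index.ts (hypothetical feature name)
import { Router, type Request, type Response } from 'express';
import type { EventEmitter } from '../../lib/events.js';

export function createExampleFeatureRoutes(_events: EventEmitter): Router {
  const router = Router();

  // GET /api/example-feature/status
  router.get('/status', (_req: Request, res: Response) => {
    res.json({ ok: true });
  });

  return router;
}

// Mounted in apps/server/src/index.ts alongside the other route factories:
//   app.use('/api/example-feature', createExampleFeatureRoutes(events));
```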
|
||||
**New Component/Module:**
|
||||
|
||||
- View Components: `apps/ui/src/components/views/{component-name}/`
|
||||
- Dialog Components: `apps/ui/src/components/dialogs/{dialog-name}.tsx`
|
||||
- Shared Components: `apps/ui/src/components/shared/` or `components/ui/` (shadcn)
|
||||
- Layout Components: `apps/ui/src/components/layout/`
|
||||
|
||||
**Utilities:**
|
||||
|
||||
- New Library: Create in `libs/{package-name}/` with package.json and tsconfig.json
|
||||
- Server Utilities: `apps/server/src/lib/{utility-name}.ts`
|
||||
- Shared Utilities: Extend `libs/utils/src/` or create new lib if self-contained
|
||||
- UI Utilities: `apps/ui/src/lib/{utility-name}.ts`
|
||||
|
||||
**New Provider (AI Model):**
|
||||
|
||||
- Implementation: `apps/server/src/providers/{provider-name}-provider.ts`
|
||||
- Types: Add to `libs/types/src/{provider-name}-models.ts`
|
||||
- Model Resolver: Update `libs/model-resolver/src/index.ts` with model alias mapping
|
||||
- Settings: Update `libs/types/src/settings.ts` for provider-specific config
|
||||
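A rough skeleton for a new provider, assuming the `BaseProvider` and `executeQuery` shape implied by the unit tests described later in this document; the method signatures and import path are assumptions, not the confirmed API:

```typescript
// apps/server/src/providers/example-provider.ts (hypothetical; import path assumed)
import { BaseProvider } from './base-provider.js';

interface ExampleQueryOptions {
  prompt: string;
  model: string;
  cwd: string;
}

export class ExampleProvider extends BaseProvider {
  getName(): string {
    return 'example';
  }

  // Streams output as an async generator, matching how the unit tests
  // consume providers via collectAsyncGenerator().
  async *executeQuery(
    options: ExampleQueryOptions
  ): AsyncGenerator<{ type: string; text: string }> {
    yield { type: 'text', text: `Echo: ${options.prompt}` };
  }
}
```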
|
||||
## Special Directories
|
||||
|
||||
**apps/ui/electron/:**
|
||||
|
||||
- Purpose: Electron-specific code (main process, IPC handlers, native APIs)
|
||||
- Generated: Yes (preload.ts)
|
||||
- Committed: Yes
|
||||
|
||||
**apps/ui/public/:**
|
||||
|
||||
- Purpose: Static assets (sounds, images, icons)
|
||||
- Generated: No
|
||||
- Committed: Yes
|
||||
|
||||
**apps/ui/dist/:**
|
||||
|
||||
- Purpose: Built web application
|
||||
- Generated: Yes
|
||||
- Committed: No (.gitignore)
|
||||
|
||||
**apps/ui/dist-electron/:**
|
||||
|
||||
- Purpose: Built Electron app bundle
|
||||
- Generated: Yes
|
||||
- Committed: No (.gitignore)
|
||||
|
||||
**.automaker/features/{featureId}/:**
|
||||
|
||||
- Purpose: Per-feature persistent storage
|
||||
- Structure: feature.json, agent-output.md, images/
|
||||
- Generated: Yes (at runtime)
|
||||
- Committed: Yes (tracked in project git)
|
||||
|
||||
**data/:**
|
||||
|
||||
- Purpose: Global data directory (global settings, credentials, sessions)
|
||||
- Generated: Yes (created at first run)
|
||||
- Committed: No (.gitignore)
|
||||
- Configurable: Via DATA_DIR env var
|
||||
|
||||
**node_modules/:**
|
||||
|
||||
- Purpose: Installed dependencies
|
||||
- Generated: Yes
|
||||
- Committed: No (.gitignore)
|
||||
|
||||
**dist/** and **build/**:
|
||||
|
||||
- Purpose: Build artifacts
|
||||
- Generated: Yes
|
||||
- Committed: No (.gitignore)
|
||||
|
||||
---
|
||||
|
||||
_Structure analysis: 2026-01-27_
|
||||
@@ -1,389 +0,0 @@
|
||||
# Testing Patterns
|
||||
|
||||
**Analysis Date:** 2026-01-27
|
||||
|
||||
## Test Framework
|
||||
|
||||
**Runner:**
|
||||
|
||||
- Vitest 4.0.16 (for unit and integration tests)
|
||||
- Playwright (for E2E tests)
|
||||
- Config: `apps/server/vitest.config.ts`, `libs/*/vitest.config.ts`, `apps/ui/playwright.config.ts`
|
||||
|
||||
**Assertion Library:**
|
||||
|
||||
- Vitest built-in expect assertions
|
||||
- API: `expect().toBe()`, `expect().toEqual()`, `expect().toHaveLength()`, `expect().toHaveProperty()`
|
||||
|
||||
**Run Commands:**
|
||||
|
||||
```bash
|
||||
npm run test # E2E tests (Playwright, headless)
|
||||
npm run test:headed # E2E tests with browser visible
|
||||
npm run test:packages # All shared package unit tests (vitest)
|
||||
npm run test:server # Server unit tests (vitest run)
|
||||
npm run test:server:coverage # Server tests with coverage report
|
||||
npm run test:all # All tests (packages + server)
|
||||
npm run test:unit # Vitest run (all projects)
|
||||
npm run test:unit:watch # Vitest watch mode
|
||||
```
|
||||
|
||||
## Test File Organization
|
||||
|
||||
**Location:**
|
||||
|
||||
- Unit tests mirror source modules: `src/module.ts` is covered by `tests/unit/module.test.ts`
|
||||
- Server tests: `apps/server/tests/` (separate directory)
|
||||
- Library tests: `libs/*/tests/` (each package)
|
||||
- E2E tests: `apps/ui/tests/` (Playwright)
|
||||
|
||||
**Naming:**
|
||||
|
||||
- Pattern: `{moduleName}.test.ts` for unit tests
|
||||
- Pattern: `{moduleName}.spec.ts` for specification tests
|
||||
- Glob pattern: `tests/**/*.test.ts`, `tests/**/*.spec.ts`
|
||||
|
||||
**Structure:**
|
||||
|
||||
```
|
||||
apps/server/
|
||||
├── tests/
|
||||
│ ├── setup.ts # Global test setup
|
||||
│ ├── unit/
|
||||
│ │ ├── providers/ # Provider tests
|
||||
│ │ │ ├── claude-provider.test.ts
|
||||
│ │ │ ├── codex-provider.test.ts
|
||||
│ │ │ └── base-provider.test.ts
|
||||
│ │ └── services/
|
||||
│ └── utils/
|
||||
│ └── helpers.ts # Test utilities
|
||||
└── src/
|
||||
|
||||
libs/platform/
|
||||
├── tests/
|
||||
│ ├── paths.test.ts
|
||||
│ ├── security.test.ts
|
||||
│ ├── subprocess.test.ts
|
||||
│ └── node-finder.test.ts
|
||||
└── src/
|
||||
```
|
||||
|
||||
## Test Structure
|
||||
|
||||
**Suite Organization:**
|
||||
|
||||
```typescript
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import { FeatureLoader } from '@/services/feature-loader.js';
|
||||
|
||||
describe('feature-loader.ts', () => {
|
||||
let featureLoader: FeatureLoader;
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
featureLoader = new FeatureLoader();
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
// Cleanup resources
|
||||
});
|
||||
|
||||
describe('methodName', () => {
|
||||
it('should do specific thing', () => {
|
||||
expect(result).toBe(expected);
|
||||
});
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
**Patterns:**
|
||||
|
||||
- Setup pattern: `beforeEach()` initializes test instance, clears mocks
|
||||
- Teardown pattern: `afterEach()` cleans up temp directories, removes created files
|
||||
- Assertion pattern: one logical assertion per test (or multiple closely related)
|
||||
- Test isolation: each test runs with fresh setup
|
||||
|
||||
## Mocking
|
||||
|
||||
**Framework:**
|
||||
|
||||
- Vitest `vi` module: `vi.mock()`, `vi.mocked()`, `vi.clearAllMocks()`
|
||||
- Mock patterns: module mocking, function spying, return value mocking
|
||||
|
||||
**Patterns:**
|
||||
|
||||
Module mocking:
|
||||
|
||||
```typescript
|
||||
vi.mock('@anthropic-ai/claude-agent-sdk');
|
||||
// In test:
|
||||
vi.mocked(sdk.query).mockReturnValue(
|
||||
(async function* () {
|
||||
yield { type: 'text', text: 'Response 1' };
|
||||
})()
|
||||
);
|
||||
```
|
||||
|
||||
Consuming a mocked streaming provider (async generator) in a test:
|
||||
|
||||
```typescript
|
||||
const generator = provider.executeQuery({
|
||||
prompt: 'Hello',
|
||||
model: 'claude-opus-4-5-20251101',
|
||||
cwd: '/test',
|
||||
});
|
||||
const results = await collectAsyncGenerator(generator);
|
||||
```
|
||||
|
||||
Partial mocking with spies:
|
||||
|
||||
```typescript
|
||||
const provider = new TestProvider();
|
||||
const spy = vi.spyOn(provider, 'getName');
|
||||
spy.mockReturnValue('mocked-name');
|
||||
```
|
||||
|
||||
**What to Mock:**
|
||||
|
||||
- External APIs (Claude SDK, GitHub SDK, cloud services)
|
||||
- File system operations (use temp directories instead when possible)
|
||||
- Network calls
|
||||
- Process execution
|
||||
- Time-dependent operations
|
||||
|
||||
**What NOT to Mock:**
|
||||
|
||||
- Core business logic (test the actual implementation)
|
||||
- Type definitions
|
||||
- Internal module dependencies (test integration with real services)
|
||||
- Standard library functions (fs, path, etc. - use fixtures instead)
|
||||
|
||||
## Fixtures and Factories
|
||||
|
||||
**Test Data:**
|
||||
|
||||
```typescript
|
||||
// Test helper for collecting async generator results
|
||||
async function collectAsyncGenerator<T>(generator: AsyncGenerator<T>): Promise<T[]> {
|
||||
const results: T[] = [];
|
||||
for await (const item of generator) {
|
||||
results.push(item);
|
||||
}
|
||||
return results;
|
||||
}
|
||||
|
||||
// Temporary directory fixture
|
||||
beforeEach(async () => {
|
||||
tempDir = await fs.mkdtemp(path.join(os.tmpdir(), 'test-'));
|
||||
projectPath = path.join(tempDir, 'test-project');
|
||||
await fs.mkdir(projectPath, { recursive: true });
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
try {
|
||||
await fs.rm(tempDir, { recursive: true, force: true });
|
||||
} catch (error) {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
**Location:**
|
||||
|
||||
- Inline in test files for simple fixtures
|
||||
- `tests/utils/helpers.ts` for shared test utilities
|
||||
- Factory functions for complex test objects: `createTestProvider()`, `createMockFeature()`
|
||||
|
||||
## Coverage
|
||||
|
||||
**Requirements (Server):**
|
||||
|
||||
- Lines: 60%
|
||||
- Functions: 75%
|
||||
- Branches: 55%
|
||||
- Statements: 60%
|
||||
- Config: `apps/server/vitest.config.ts` with thresholds
|
||||
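A sketch of how these thresholds, together with the node environment and setup file noted under Test Configuration, could appear in `apps/server/vitest.config.ts`; the committed config may differ:

```typescript
// apps/server/vitest.config.ts (sketch)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'node',
    globals: true,
    setupFiles: ['./tests/setup.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      include: ['src/**/*.ts'],
      exclude: ['src/**/*.d.ts'],
      thresholds: { lines: 60, functions: 75, branches: 55, statements: 60 },
    },
  },
});
```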
|
||||
**Excluded from Coverage:**
|
||||
|
||||
- Route handlers: tested via integration/E2E tests
|
||||
- Type re-exports
|
||||
- Middleware: tested via integration tests
|
||||
- Prompt templates
|
||||
- MCP integration: awaits MCP SDK integration tests
|
||||
- Provider CLI integrations: awaits integration tests
|
||||
|
||||
**View Coverage:**
|
||||
|
||||
```bash
|
||||
npm run test:server:coverage # Generate coverage report
|
||||
# Opens HTML report in: apps/server/coverage/index.html
|
||||
```
|
||||
|
||||
**Coverage Tools:**
|
||||
|
||||
- Provider: v8
|
||||
- Reporters: text, json, html, lcov
|
||||
- File inclusion: `src/**/*.ts`
|
||||
- File exclusion: `src/**/*.d.ts`, specific service files in thresholds
|
||||
|
||||
## Test Types
|
||||
|
||||
**Unit Tests:**
|
||||
|
||||
- Scope: Individual functions and methods
|
||||
- Approach: Test inputs → outputs with mocked dependencies
|
||||
- Location: `apps/server/tests/unit/`
|
||||
- Examples:
|
||||
- Provider executeQuery() with mocked SDK
|
||||
- Path construction functions with assertions
|
||||
- Error classification with different error types
|
||||
- Config validation with various inputs
|
||||
|
||||
**Integration Tests:**
|
||||
|
||||
- Scope: Multiple modules working together
|
||||
- Approach: Test actual service calls with real file system or temp directories
|
||||
- Pattern: Setup data → call method → verify results
|
||||
- Example: Feature loader reading/writing feature.json files
|
||||
- Example: Auto-mode service coordinating with multiple services
|
||||
|
||||
**E2E Tests:**
|
||||
|
||||
- Framework: Playwright
|
||||
- Scope: Full user workflows from UI
|
||||
- Location: `apps/ui/tests/`
|
||||
- Config: `apps/ui/playwright.config.ts`
|
||||
- Setup:
|
||||
- Backend server with mock agent enabled
|
||||
- Frontend Vite dev server
|
||||
- Sequential execution (workers: 1) to avoid auth conflicts
|
||||
- Screenshots/traces on failure
|
||||
- Auth: Global setup authentication in `tests/global-setup.ts`
|
||||
- Fixtures: `tests/e2e-fixtures/` for test project data
|
||||
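A sketch of the corresponding Playwright options, assuming standard `@playwright/test` configuration keys and only the behavior described above:

```typescript
// apps/ui/playwright.config.ts (sketch)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  workers: 1, // sequential execution to avoid auth conflicts
  globalSetup: './tests/global-setup.ts',
  use: {
    screenshot: 'only-on-failure',
    trace: 'retain-on-failure',
  },
});
```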
|
||||
## Common Patterns
|
||||
|
||||
**Async Testing:**
|
||||
|
||||
```typescript
|
||||
it('should execute async operation', async () => {
|
||||
const result = await featureLoader.loadFeature(projectPath, featureId);
|
||||
expect(result).toBeDefined();
|
||||
expect(result.id).toBe(featureId);
|
||||
});
|
||||
|
||||
// For streams/generators:
|
||||
const generator = provider.executeQuery({ prompt, model, cwd });
|
||||
const results = await collectAsyncGenerator(generator);
|
||||
expect(results).toHaveLength(2);
|
||||
```
|
||||
|
||||
**Error Testing:**
|
||||
|
||||
```typescript
|
||||
it('should throw error when feature not found', async () => {
|
||||
await expect(featureLoader.getFeature(projectPath, 'nonexistent')).rejects.toThrow('not found');
|
||||
});
|
||||
|
||||
// Testing error classification:
|
||||
const errorInfo = classifyError(new Error('ENOENT'));
|
||||
expect(errorInfo.category).toBe('FileSystem');
|
||||
```
|
||||
|
||||
**Fixture Setup:**
|
||||
|
||||
```typescript
|
||||
it('should create feature with images', async () => {
|
||||
// Setup: create temp feature directory
|
||||
const featureDir = path.join(projectPath, '.automaker', 'features', featureId);
|
||||
await fs.mkdir(featureDir, { recursive: true });
|
||||
|
||||
// Act: perform operation
|
||||
const result = await featureLoader.updateFeature(projectPath, {
|
||||
id: featureId,
|
||||
imagePaths: ['/temp/image.png'],
|
||||
});
|
||||
|
||||
// Assert: verify file operations
|
||||
const migratedPath = path.join(featureDir, 'images', 'image.png');
|
||||
expect(fs.existsSync(migratedPath)).toBe(true);
|
||||
});
|
||||
```
|
||||
|
||||
**Mock Reset Pattern:**
|
||||
|
||||
```typescript
|
||||
// In vitest.config.ts:
|
||||
mockReset: true, // Reset all mocks before each test
|
||||
restoreMocks: true, // Restore original implementations
|
||||
clearMocks: true, // Clear mock call history
|
||||
|
||||
// In test:
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
delete process.env.ANTHROPIC_API_KEY;
|
||||
});
|
||||
```
|
||||
|
||||
## Test Configuration
|
||||
|
||||
**Vitest Config Patterns:**
|
||||
|
||||
Server config (`apps/server/vitest.config.ts`):
|
||||
|
||||
- Environment: node
|
||||
- Globals: true (describe/it without imports)
|
||||
- Setup files: `./tests/setup.ts`
|
||||
- Alias resolution: resolves `@automaker/*` to source files for mocking
|
||||
|
||||
Library config:
|
||||
|
||||
- Simpler setup: just environment and globals
|
||||
- Coverage with high thresholds (90%+ lines)
|
||||
|
||||
**Global Setup:**
|
||||
|
||||
```typescript
|
||||
// tests/setup.ts
|
||||
import { vi, beforeEach } from 'vitest';
|
||||
|
||||
process.env.NODE_ENV = 'test';
|
||||
process.env.DATA_DIR = '/tmp/test-data';
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
```
|
||||
|
||||
## Testing Best Practices
|
||||
|
||||
**Isolation:**
|
||||
|
||||
- Each test is independent (no state sharing)
|
||||
- Cleanup temp files in afterEach
|
||||
- Reset mocks and environment variables in beforeEach
|
||||
|
||||
**Clarity:**
|
||||
|
||||
- Descriptive test names: "should do X when Y condition"
|
||||
- One logical assertion per test
|
||||
- Clear arrange-act-assert structure
|
||||
|
||||
**Speed:**
|
||||
|
||||
- Mock external services
|
||||
- Use in-memory temp directories
|
||||
- Avoid real network calls
|
||||
- Sequential E2E tests to prevent conflicts
|
||||
|
||||
**Maintainability:**
|
||||
|
||||
- Use beforeEach/afterEach for common setup
|
||||
- Extract test helpers to `tests/utils/`
|
||||
- Keep test data simple and local
|
||||
- Mock consistently across tests
|
||||
|
||||
---
|
||||
|
||||
_Testing analysis: 2026-01-27_
|
||||
@@ -25,7 +25,6 @@ COPY libs/types/package*.json ./libs/types/
|
||||
COPY libs/utils/package*.json ./libs/utils/
|
||||
COPY libs/prompts/package*.json ./libs/prompts/
|
||||
COPY libs/platform/package*.json ./libs/platform/
|
||||
COPY libs/spec-parser/package*.json ./libs/spec-parser/
|
||||
COPY libs/model-resolver/package*.json ./libs/model-resolver/
|
||||
COPY libs/dependency-resolver/package*.json ./libs/dependency-resolver/
|
||||
COPY libs/git-utils/package*.json ./libs/git-utils/
|
||||
|
||||
300
SECURITY_TODO.md
Normal file
@@ -0,0 +1,300 @@
|
||||
# Security Audit Findings - v0.13.0rc Branch
|
||||
|
||||
**Date:** $(date)
|
||||
**Audit Type:** Git diff security review against v0.13.0rc branch
|
||||
**Status:** ⚠️ Security vulnerabilities found - requires fixes before release
|
||||
|
||||
## Executive Summary
|
||||
|
||||
No intentionally malicious code was detected in the changes. However, several **critical security vulnerabilities** were identified that could allow command injection attacks. These must be fixed before release.
|
||||
|
||||
---
|
||||
|
||||
## 🔴 Critical Security Issues
|
||||
|
||||
### 1. Command Injection in Merge Handler
|
||||
|
||||
**File:** `apps/server/src/routes/worktree/routes/merge.ts`
|
||||
**Lines:** 43, 54, 65-66, 93
|
||||
**Severity:** CRITICAL
|
||||
|
||||
**Issue:**
|
||||
User-controlled inputs (`branchName`, `mergeTo`, `options?.message`) are directly interpolated into shell commands without validation, allowing command injection attacks.
|
||||
|
||||
**Vulnerable Code:**
|
||||
|
||||
```typescript
|
||||
// Line 43 - branchName not validated
|
||||
await execAsync(`git rev-parse --verify ${branchName}`, { cwd: projectPath });
|
||||
|
||||
// Line 54 - mergeTo not validated
|
||||
await execAsync(`git rev-parse --verify ${mergeTo}`, { cwd: projectPath });
|
||||
|
||||
// Lines 65-66 - branchName and message not validated
|
||||
const mergeCmd = options?.squash
|
||||
? `git merge --squash ${branchName}`
|
||||
: `git merge ${branchName} -m "${options?.message || `Merge ${branchName} into ${mergeTo}`}"`;
|
||||
|
||||
// Line 93 - message not sanitized
|
||||
await execAsync(`git commit -m "${options?.message || `Merge ${branchName} (squash)`}"`, {
|
||||
cwd: projectPath,
|
||||
});
|
||||
```
|
||||
|
||||
**Attack Vector:**
|
||||
An attacker could inject shell commands via branch names or commit messages:
|
||||
|
||||
- Branch name: `main; rm -rf /`
|
||||
- Commit message: `"; malicious_command; "`
|
||||
|
||||
**Fix Required:**
|
||||
|
||||
1. Validate `branchName` and `mergeTo` using `isValidBranchName()` before use
|
||||
2. Sanitize commit messages or use `execGitCommand` with proper escaping
|
||||
3. Replace `execAsync` template literals with `execGitCommand` array-based calls
|
||||
|
||||
**Note:** `isValidBranchName` is imported but only used AFTER deletion (line 119), not before execAsync calls.
|
||||
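A minimal sketch of what the hardened merge path could look like inside the existing handler, assuming `execGitCommand` accepts an argument array plus a `cwd` option and that `isValidBranchName` is the validator already imported by `merge.ts` (both signatures are assumptions, not the confirmed API):

```typescript
// Inside the existing merge handler (sketch, not a drop-in patch)
if (!isValidBranchName(branchName) || !isValidBranchName(mergeTo)) {
  return res.status(400).json({ error: 'Invalid branch name' });
}

// Array arguments: the commit message travels as a single argv entry,
// so shell metacharacters inside it are never interpreted by a shell.
const message = options?.message || `Merge ${branchName} into ${mergeTo}`;
if (options?.squash) {
  await execGitCommand(['merge', '--squash', branchName], { cwd: projectPath });
  await execGitCommand(['commit', '-m', message], { cwd: projectPath });
} else {
  await execGitCommand(['merge', branchName, '-m', message], { cwd: projectPath });
}
```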
|
||||
---
|
||||
|
||||
### 2. Command Injection in Push Handler
|
||||
|
||||
**File:** `apps/server/src/routes/worktree/routes/push.ts`
|
||||
**Lines:** 44, 49
|
||||
**Severity:** CRITICAL
|
||||
|
||||
**Issue:**
|
||||
User-controlled `remote` parameter and `branchName` are directly interpolated into shell commands without validation.
|
||||
|
||||
**Vulnerable Code:**
|
||||
|
||||
```typescript
|
||||
// Line 38 - remote defaults to 'origin' but not validated
|
||||
const targetRemote = remote || 'origin';
|
||||
|
||||
// Lines 44, 49 - targetRemote and branchName not validated
|
||||
await execAsync(`git push -u ${targetRemote} ${branchName} ${forceFlag}`, {
|
||||
cwd: worktreePath,
|
||||
});
|
||||
await execAsync(`git push --set-upstream ${targetRemote} ${branchName} ${forceFlag}`, {
|
||||
cwd: worktreePath,
|
||||
});
|
||||
```
|
||||
|
||||
**Attack Vector:**
|
||||
An attacker could inject commands via the remote name:
|
||||
|
||||
- Remote: `origin; malicious_command; #`
|
||||
|
||||
**Fix Required:**
|
||||
|
||||
1. Validate `targetRemote` parameter (alphanumeric + `-`, `_` only)
|
||||
2. Validate `branchName` before use (even though it comes from git output)
|
||||
3. Use `execGitCommand` with array arguments instead of template literals
|
||||
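One possible shape for the push-side validation, with `isValidRemoteName` as a hypothetical helper limited to the characters recommended above; the `execGitCommand` signature is assumed as in the previous sketch:

```typescript
// Hypothetical validator: remote names limited to alphanumerics, '-' and '_'
function isValidRemoteName(remote: string): boolean {
  return /^[A-Za-z0-9_-]{1,64}$/.test(remote);
}

const targetRemote = remote || 'origin';
if (!isValidRemoteName(targetRemote) || !isValidBranchName(branchName)) {
  return res.status(400).json({ error: 'Invalid remote or branch name' });
}
await execGitCommand(['push', '-u', targetRemote, branchName], { cwd: worktreePath });
```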
|
||||
---
|
||||
|
||||
### 3. Unsafe Environment Variable Export in Shell Script
|
||||
|
||||
**File:** `start-automaker.sh`
|
||||
**Lines:** 5068, 5085
|
||||
**Severity:** CRITICAL
|
||||
|
||||
**Issue:**
|
||||
Unsafe parsing and export of `.env` file contents using `xargs` without proper handling of special characters.
|
||||
|
||||
**Vulnerable Code:**
|
||||
|
||||
```bash
|
||||
export $(grep -v '^#' .env | xargs)
|
||||
```
|
||||
|
||||
**Attack Vector:**
|
||||
If `.env` file contains malicious content with spaces, special characters, or code, it could be executed:
|
||||
|
||||
- `.env` entry: `VAR="value; malicious_command"`
|
||||
- Could lead to code execution during startup
|
||||
|
||||
**Fix Required:**
|
||||
Replace with safer parsing method:
|
||||
|
||||
```bash
|
||||
# Safer than xargs (handles spaces), but sourcing still expands command substitutions
|
||||
set -a
|
||||
source <(grep -v '^#' .env | sed 's/^/export /')
|
||||
set +a
|
||||
|
||||
# Or even safer - validate each line
|
||||
while IFS= read -r line; do
|
||||
[[ "$line" =~ ^[[:space:]]*# ]] && continue
|
||||
[[ -z "$line" ]] && continue
|
||||
if [[ "$line" =~ ^([A-Za-z_][A-Za-z0-9_]*)=(.*)$ ]]; then
|
||||
export "${BASH_REMATCH[1]}"="${BASH_REMATCH[2]}"
|
||||
fi
|
||||
done < .env
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🟡 Moderate Security Concerns
|
||||
|
||||
### 4. Inconsistent Use of Secure Command Execution
|
||||
|
||||
**Issue:**
|
||||
The codebase has `execGitCommand()` function available (which uses array arguments and is safer), but it's not consistently used. Some places still use `execAsync` with template literals.
|
||||
|
||||
**Files Affected:**
|
||||
|
||||
- `apps/server/src/routes/worktree/routes/merge.ts`
|
||||
- `apps/server/src/routes/worktree/routes/push.ts`
|
||||
|
||||
**Recommendation:**
|
||||
|
||||
- Audit all `execAsync` calls with template literals
|
||||
- Replace with `execGitCommand` where possible
|
||||
- Document when `execAsync` is acceptable (only with fully validated inputs)
|
||||
|
||||
---
|
||||
|
||||
### 5. Missing Input Validation
|
||||
|
||||
**Issues:**
|
||||
|
||||
1. `targetRemote` in `push.ts` defaults to 'origin' but isn't validated
|
||||
2. Commit messages in `merge.ts` aren't sanitized before use in shell commands
|
||||
3. `worktreePath` validation relies on middleware but should be double-checked
|
||||
|
||||
**Recommendation:**
|
||||
|
||||
- Add validation functions for remote names
|
||||
- Sanitize commit messages (remove shell metacharacters)
|
||||
- Add defensive validation even when middleware exists
|
||||
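A possible sketch of `sanitizeCommitMessage` (hypothetical helper); with array-based `execGitCommand` the message never reaches a shell, so this is defense in depth rather than the primary control:

```typescript
// Hypothetical helper: strips shell metacharacters and caps length.
export function sanitizeCommitMessage(message: string): string {
  return message
    .replace(/[`$\\;&|<>"]/g, '') // drop characters a shell could interpret
    .replace(/\r?\n/g, ' ') // collapse newlines to keep a single -m argument
    .trim()
    .slice(0, 500); // cap length defensively
}
```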
|
||||
---
|
||||
|
||||
## ✅ Positive Security Findings
|
||||
|
||||
1. **No Hardcoded Credentials:** No API keys, passwords, or tokens found in the diff
|
||||
2. **No Data Exfiltration:** No suspicious network requests or data transmission patterns
|
||||
3. **No Backdoors:** No hidden functionality or unauthorized access patterns detected
|
||||
4. **Safe Command Execution:** `execGitCommand` function properly uses array arguments in some places
|
||||
5. **Environment Variable Handling:** `init-script-service.ts` properly sanitizes environment variables (lines 194-220)
|
||||
|
||||
---
|
||||
|
||||
## 📋 Action Items
|
||||
|
||||
### Immediate (Before Release)
|
||||
|
||||
- [ ] **Fix command injection in `merge.ts`**
|
||||
- [ ] Validate `branchName` with `isValidBranchName()` before line 43
|
||||
- [ ] Validate `mergeTo` with `isValidBranchName()` before line 54
|
||||
- [ ] Sanitize commit messages or use `execGitCommand` for merge commands
|
||||
- [ ] Replace `execAsync` template literals with `execGitCommand` array calls
|
||||
|
||||
- [ ] **Fix command injection in `push.ts`**
|
||||
- [ ] Add validation function for remote names
|
||||
- [ ] Validate `targetRemote` before use
|
||||
- [ ] Validate `branchName` before use (defensive programming)
|
||||
- [ ] Replace `execAsync` template literals with `execGitCommand`
|
||||
|
||||
- [ ] **Fix shell script security issue**
|
||||
- [ ] Replace unsafe `export $(grep ... | xargs)` with safer parsing
|
||||
- [ ] Add validation for `.env` file contents
|
||||
- [ ] Test with edge cases (spaces, special chars, quotes)
|
||||
|
||||
### Short-term (Next Sprint)
|
||||
|
||||
- [ ] **Audit all `execAsync` calls**
|
||||
- [ ] Create inventory of all `execAsync` calls with template literals
|
||||
- [ ] Replace with `execGitCommand` where possible
|
||||
- [ ] Document exceptions and why they're safe
|
||||
|
||||
- [ ] **Add input validation utilities**
|
||||
- [ ] Create `isValidRemoteName()` function
|
||||
- [ ] Create `sanitizeCommitMessage()` function
|
||||
- [ ] Add validation for all user-controlled inputs
|
||||
|
||||
- [ ] **Security testing**
|
||||
- [ ] Add unit tests for command injection prevention
|
||||
- [ ] Add integration tests with malicious inputs
|
||||
- [ ] Test shell script with malicious `.env` files
|
||||
|
||||
### Long-term (Security Hardening)
|
||||
|
||||
- [ ] **Code review process**
|
||||
- [ ] Add security checklist for PR reviews
|
||||
- [ ] Require security review for shell command execution changes
|
||||
- [ ] Add automated security scanning
|
||||
|
||||
- [ ] **Documentation**
|
||||
- [ ] Document secure coding practices for shell commands
|
||||
- [ ] Create security guidelines for contributors
|
||||
- [ ] Add security section to CONTRIBUTING.md
|
||||
|
||||
---
|
||||
|
||||
## 🔍 Testing Recommendations
|
||||
|
||||
### Command Injection Tests
|
||||
|
||||
```typescript
|
||||
// Test cases for merge.ts
|
||||
describe('merge handler security', () => {
|
||||
it('should reject branch names with shell metacharacters', () => {
|
||||
// Test: branchName = "main; rm -rf /"
|
||||
// Expected: Validation error, command not executed
|
||||
});
|
||||
|
||||
it('should sanitize commit messages', () => {
|
||||
// Test: message = '"; malicious_command; "'
|
||||
// Expected: Sanitized or rejected
|
||||
});
|
||||
});
|
||||
|
||||
// Test cases for push.ts
|
||||
describe('push handler security', () => {
|
||||
it('should reject remote names with shell metacharacters', () => {
|
||||
// Test: remote = "origin; malicious_command; #"
|
||||
// Expected: Validation error, command not executed
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### Shell Script Tests
|
||||
|
||||
```bash
|
||||
# Test with malicious .env content
|
||||
echo 'VAR="value; echo PWNED"' > test.env
|
||||
# Expected: Should not execute the command
|
||||
|
||||
# Test with spaces in values
|
||||
echo 'VAR="value with spaces"' > test.env
|
||||
# Expected: Should handle correctly
|
||||
|
||||
# Test with special characters
|
||||
echo 'VAR="value\$with\$dollars"' > test.env
|
||||
# Expected: Should handle correctly
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 📚 References
|
||||
|
||||
- [OWASP Command Injection](https://owasp.org/www-community/attacks/Command_Injection)
|
||||
- [Node.js Child Process Security](https://nodejs.org/api/child_process.html#child_process_security_concerns)
|
||||
- [Shell Script Security Best Practices](https://mywiki.wooledge.org/BashGuide/Practices)
|
||||
|
||||
---
|
||||
|
||||
## Notes
|
||||
|
||||
- All findings are based on code diff analysis
|
||||
- No runtime testing was performed
|
||||
- Assumes attacker has access to API endpoints (authenticated or unauthenticated)
|
||||
- Fixes should be tested thoroughly before deployment
|
||||
|
||||
---
|
||||
|
||||
**Last Updated:** $(date)
|
||||
**Next Review:** After fixes are implemented
|
||||
25
TODO.md
Normal file
@@ -0,0 +1,25 @@
|
||||
# Bugs
|
||||
|
||||
- Setting the default model does not seem to work.
|
||||
|
||||
# Performance (completed)
|
||||
|
||||
- [x] Graph performance mode for large graphs (compact nodes/edges + visible-only rendering)
|
||||
- [x] Render containment on heavy scroll regions (kanban columns, chat history)
|
||||
- [x] Reduce blur/shadow effects when lists get large
|
||||
- [x] React Query tuning for heavy datasets (less refetch on focus/reconnect)
|
||||
- [x] DnD/list rendering optimizations (virtualized kanban + memoized card sections)
|
||||
|
||||
# UX
|
||||
|
||||
- Consolidate all model configuration into a single place in settings instead of spreading it across AI profiles and other screens.
- Simplify the create-feature modal. It should be a single page; the nested tabs and buttons make it too complex.
- Add a to-do checklist directly on the card so that, while a feature is running, any to-do items update live.
- When a feature is done, show a summary from the LLM; that should be the first thing visible when double-clicking the card.
- Add a way to mass-edit features. For example, creating a new project enabled auto testing on every feature card, and each one now has to be changed manually; provide bulk editing of feature configuration.
- Double-check for memory leaks. Automaker's memory grows by roughly 3 GB and is at 5 GB right now while three Cursor CLI features run concurrently.
- Typing in the plan-mode text area was very laggy.
- When many features run at once, features in the backlog cannot be edited; their file changes don't persist. This is likely the secure FS layer's internal write queue (which prevents hitting the open-file limit). We may need to move persistence from the filesystem to Postgres or SQLite.
- Modals are not scrollable when the screen height is small.
- In the Agent Runner, add an archive button for new sessions.
- Investigate feature cards not refreshing: a lock icon appears on a card and doesn't go away until the card is opened, edited, and testing mode is turned off; likely a refresh/sync issue.
|
||||
@@ -16,7 +16,7 @@ import { createServer } from 'http';
|
||||
import dotenv from 'dotenv';
|
||||
|
||||
import { createEventEmitter, type EventEmitter } from './lib/events.js';
|
||||
import { initAllowedPaths, getClaudeAuthIndicators } from '@automaker/platform';
|
||||
import { initAllowedPaths } from '@automaker/platform';
|
||||
import { createLogger, setLogLevel, LogLevel } from '@automaker/utils';
|
||||
|
||||
const logger = createLogger('Server');
|
||||
@@ -56,7 +56,7 @@ import {
|
||||
import { createSettingsRoutes } from './routes/settings/index.js';
|
||||
import { AgentService } from './services/agent-service.js';
|
||||
import { FeatureLoader } from './services/feature-loader.js';
|
||||
import { AutoModeServiceCompat } from './services/auto-mode/index.js';
|
||||
import { AutoModeService } from './services/auto-mode-service.js';
|
||||
import { getTerminalService } from './services/terminal-service.js';
|
||||
import { SettingsService } from './services/settings-service.js';
|
||||
import { createSpecRegenerationRoutes } from './routes/app-spec/index.js';
|
||||
@@ -117,44 +117,15 @@ export function isRequestLoggingEnabled(): boolean {
|
||||
// Width for log box content (excluding borders)
|
||||
const BOX_CONTENT_WIDTH = 67;
|
||||
|
||||
// Check for Claude authentication (async - runs in background)
|
||||
// The Claude Agent SDK can use either ANTHROPIC_API_KEY or Claude Code CLI authentication
|
||||
(async () => {
|
||||
const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
|
||||
// Check for required environment variables
|
||||
const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
|
||||
|
||||
if (hasAnthropicKey) {
|
||||
logger.info('✓ ANTHROPIC_API_KEY detected');
|
||||
return;
|
||||
}
|
||||
|
||||
// Check for Claude Code CLI authentication
|
||||
try {
|
||||
const indicators = await getClaudeAuthIndicators();
|
||||
const hasCliAuth =
|
||||
indicators.hasStatsCacheWithActivity ||
|
||||
(indicators.hasSettingsFile && indicators.hasProjectsSessions) ||
|
||||
(indicators.hasCredentialsFile &&
|
||||
(indicators.credentials?.hasOAuthToken || indicators.credentials?.hasApiKey));
|
||||
|
||||
if (hasCliAuth) {
|
||||
logger.info('✓ Claude Code CLI authentication detected');
|
||||
return;
|
||||
}
|
||||
} catch (error) {
|
||||
// Ignore errors checking CLI auth - will fall through to warning
|
||||
logger.warn('Error checking for Claude Code CLI authentication:', error);
|
||||
}
|
||||
|
||||
// No authentication found - show warning
|
||||
if (!hasAnthropicKey) {
|
||||
const wHeader = '⚠️ WARNING: No Claude authentication configured'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w1 = 'The Claude Agent SDK requires authentication to function.'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w2 = 'Options:'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w3 = '1. Install Claude Code CLI and authenticate with subscription'.padEnd(
|
||||
BOX_CONTENT_WIDTH
|
||||
);
|
||||
const w4 = '2. Set your Anthropic API key:'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w5 = ' export ANTHROPIC_API_KEY="sk-ant-..."'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w6 = '3. Use the setup wizard in Settings to configure authentication.'.padEnd(
|
||||
const w2 = 'Set your Anthropic API key:'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w3 = ' export ANTHROPIC_API_KEY="sk-ant-..."'.padEnd(BOX_CONTENT_WIDTH);
|
||||
const w4 = 'Or use the setup wizard in Settings to configure authentication.'.padEnd(
|
||||
BOX_CONTENT_WIDTH
|
||||
);
|
||||
|
||||
@@ -167,13 +138,14 @@ const BOX_CONTENT_WIDTH = 67;
|
||||
║ ║
|
||||
║ ${w2}║
|
||||
║ ${w3}║
|
||||
║ ║
|
||||
║ ${w4}║
|
||||
║ ${w5}║
|
||||
║ ${w6}║
|
||||
║ ║
|
||||
╚═════════════════════════════════════════════════════════════════════╝
|
||||
`);
|
||||
})();
|
||||
} else {
|
||||
logger.info('✓ ANTHROPIC_API_KEY detected');
|
||||
}
|
||||
|
||||
// Initialize security
|
||||
initAllowedPaths();
|
||||
@@ -258,9 +230,7 @@ const events: EventEmitter = createEventEmitter();
|
||||
const settingsService = new SettingsService(DATA_DIR);
|
||||
const agentService = new AgentService(DATA_DIR, events, settingsService);
|
||||
const featureLoader = new FeatureLoader();
|
||||
|
||||
// Auto-mode services: compatibility layer provides old interface while using new architecture
|
||||
const autoModeService = new AutoModeServiceCompat(events, settingsService, featureLoader);
|
||||
const autoModeService = new AutoModeService(events, settingsService);
|
||||
const claudeUsageService = new ClaudeUsageService();
|
||||
const codexAppServerService = new CodexAppServerService();
|
||||
const codexModelCacheService = new CodexModelCacheService(DATA_DIR, codexAppServerService);
|
||||
@@ -356,10 +326,7 @@ app.get('/api/health/detailed', createDetailedHandler());
|
||||
app.use('/api/fs', createFsRoutes(events));
|
||||
app.use('/api/agent', createAgentRoutes(agentService, events));
|
||||
app.use('/api/sessions', createSessionsRoutes(agentService));
|
||||
app.use(
|
||||
'/api/features',
|
||||
createFeaturesRoutes(featureLoader, settingsService, events, autoModeService)
|
||||
);
|
||||
app.use('/api/features', createFeaturesRoutes(featureLoader, settingsService, events));
|
||||
app.use('/api/auto-mode', createAutoModeRoutes(autoModeService));
|
||||
app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService));
|
||||
app.use('/api/worktree', createWorktreeRoutes(events, settingsService));
|
||||
@@ -802,36 +769,21 @@ process.on('uncaughtException', (error: Error) => {
|
||||
process.exit(1);
|
||||
});
|
||||
|
||||
// Graceful shutdown timeout (30 seconds)
|
||||
const SHUTDOWN_TIMEOUT_MS = 30000;
|
||||
|
||||
// Graceful shutdown helper
|
||||
const gracefulShutdown = async (signal: string) => {
|
||||
logger.info(`${signal} received, shutting down...`);
|
||||
|
||||
// Set up a force-exit timeout to prevent hanging
|
||||
const forceExitTimeout = setTimeout(() => {
|
||||
logger.error(`Shutdown timed out after ${SHUTDOWN_TIMEOUT_MS}ms, forcing exit`);
|
||||
process.exit(1);
|
||||
}, SHUTDOWN_TIMEOUT_MS);
|
||||
|
||||
// Mark all running features as interrupted before shutdown
|
||||
// This ensures they can be resumed when the server restarts
|
||||
// Note: markAllRunningFeaturesInterrupted handles errors internally and never rejects
|
||||
await autoModeService.markAllRunningFeaturesInterrupted(`${signal} signal received`);
|
||||
|
||||
// Graceful shutdown
|
||||
process.on('SIGTERM', () => {
|
||||
logger.info('SIGTERM received, shutting down...');
|
||||
terminalService.cleanup();
|
||||
server.close(() => {
|
||||
clearTimeout(forceExitTimeout);
|
||||
logger.info('Server closed');
|
||||
process.exit(0);
|
||||
});
|
||||
};
|
||||
|
||||
process.on('SIGTERM', () => {
|
||||
gracefulShutdown('SIGTERM');
|
||||
});
|
||||
|
||||
process.on('SIGINT', () => {
|
||||
gracefulShutdown('SIGINT');
|
||||
logger.info('SIGINT received, shutting down...');
|
||||
terminalService.cleanup();
|
||||
server.close(() => {
|
||||
logger.info('Server closed');
|
||||
process.exit(0);
|
||||
});
|
||||
});
|
||||
|
||||
@@ -98,14 +98,9 @@ const TEXT_ENCODING = 'utf-8';
|
||||
* This is the "no output" timeout - if the CLI doesn't produce any JSONL output
|
||||
* for this duration, the process is killed. For reasoning models with high
|
||||
* reasoning effort, this timeout is dynamically extended via calculateReasoningTimeout().
|
||||
*
|
||||
* For feature generation (which can generate 50+ features), we use a much longer
|
||||
* base timeout (5 minutes) since Codex models are slower at generating large JSON responses.
|
||||
*
|
||||
* @see calculateReasoningTimeout from @automaker/types
|
||||
*/
|
||||
const CODEX_CLI_TIMEOUT_MS = DEFAULT_TIMEOUT_MS;
|
||||
const CODEX_FEATURE_GENERATION_BASE_TIMEOUT_MS = 300000; // 5 minutes for feature generation
|
||||
const CONTEXT_WINDOW_256K = 256000;
|
||||
const MAX_OUTPUT_32K = 32000;
|
||||
const MAX_OUTPUT_16K = 16000;
|
||||
@@ -832,14 +827,7 @@ export class CodexProvider extends BaseProvider {
|
||||
// Higher reasoning effort (e.g., 'xhigh' for "xtra thinking" mode) requires more time
|
||||
// for the model to generate reasoning tokens before producing output.
|
||||
// This fixes GitHub issue #530 where features would get stuck with reasoning models.
|
||||
//
|
||||
// For feature generation with 'xhigh', use the extended 5-minute base timeout
|
||||
// since generating 50+ features takes significantly longer than normal operations.
|
||||
const baseTimeout =
|
||||
options.reasoningEffort === 'xhigh'
|
||||
? CODEX_FEATURE_GENERATION_BASE_TIMEOUT_MS
|
||||
: CODEX_CLI_TIMEOUT_MS;
|
||||
const timeout = calculateReasoningTimeout(options.reasoningEffort, baseTimeout);
|
||||
const timeout = calculateReasoningTimeout(options.reasoningEffort, CODEX_CLI_TIMEOUT_MS);
|
||||
|
||||
const stream = spawnJSONLProcess({
|
||||
command: commandPath,
|
||||
|
||||
@@ -8,11 +8,10 @@
|
||||
import * as secureFs from '../../lib/secure-fs.js';
|
||||
import type { EventEmitter } from '../../lib/events.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { DEFAULT_PHASE_MODELS, supportsStructuredOutput, isCodexModel } from '@automaker/types';
|
||||
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
|
||||
import { resolvePhaseModel } from '@automaker/model-resolver';
|
||||
import { streamingQuery } from '../../providers/simple-query-service.js';
|
||||
import { parseAndCreateFeatures } from './parse-and-create-features.js';
|
||||
import { extractJsonWithArray } from '../../lib/json-extractor.js';
|
||||
import { getAppSpecPath } from '@automaker/platform';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
import {
|
||||
@@ -26,64 +25,6 @@ const logger = createLogger('SpecRegeneration');
|
||||
|
||||
const DEFAULT_MAX_FEATURES = 50;
|
||||
|
||||
/**
|
||||
* Timeout for Codex models when generating features (5 minutes).
|
||||
* Codex models are slower and need more time to generate 50+ features.
|
||||
*/
|
||||
const CODEX_FEATURE_GENERATION_TIMEOUT_MS = 300000; // 5 minutes
|
||||
|
||||
/**
|
||||
* Type for extracted features JSON response
|
||||
*/
|
||||
interface FeaturesExtractionResult {
|
||||
features: Array<{
|
||||
id: string;
|
||||
category?: string;
|
||||
title: string;
|
||||
description: string;
|
||||
priority?: number;
|
||||
complexity?: 'simple' | 'moderate' | 'complex';
|
||||
dependencies?: string[];
|
||||
}>;
|
||||
}
|
||||
|
||||
/**
|
||||
* JSON schema for features output format (Claude/Codex structured output)
|
||||
*/
|
||||
const featuresOutputSchema = {
|
||||
type: 'object',
|
||||
properties: {
|
||||
features: {
|
||||
type: 'array',
|
||||
items: {
|
||||
type: 'object',
|
||||
properties: {
|
||||
id: { type: 'string', description: 'Unique feature identifier (kebab-case)' },
|
||||
category: { type: 'string', description: 'Feature category' },
|
||||
title: { type: 'string', description: 'Short, descriptive title' },
|
||||
description: { type: 'string', description: 'Detailed feature description' },
|
||||
priority: {
|
||||
type: 'number',
|
||||
description: 'Priority level: 1 (highest) to 5 (lowest)',
|
||||
},
|
||||
complexity: {
|
||||
type: 'string',
|
||||
enum: ['simple', 'moderate', 'complex'],
|
||||
description: 'Implementation complexity',
|
||||
},
|
||||
dependencies: {
|
||||
type: 'array',
|
||||
items: { type: 'string' },
|
||||
description: 'IDs of features this depends on',
|
||||
},
|
||||
},
|
||||
required: ['id', 'title', 'description'],
|
||||
},
|
||||
},
|
||||
},
|
||||
required: ['features'],
|
||||
} as const;
|
||||
|
||||
export async function generateFeaturesFromSpec(
  projectPath: string,
  events: EventEmitter,
@@ -195,80 +136,23 @@ Generate ${featureCount} NEW features that build on each other logically. Rememb
    provider: undefined,
    credentials: undefined,
  };
  const { model, thinkingLevel, reasoningEffort } = resolvePhaseModel(phaseModelEntry);
  const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);

  logger.info('Using model:', model, provider ? `via provider: ${provider.name}` : 'direct API');

  // Codex models need extended timeout for generating many features.
  // Use 'xhigh' reasoning effort to get 5-minute timeout (300s base * 1.0x = 300s).
  // The Codex provider has a special 5-minute base timeout for feature generation.
  const isCodex = isCodexModel(model);
  const effectiveReasoningEffort = isCodex ? 'xhigh' : reasoningEffort;

  if (isCodex) {
    logger.info('Codex model detected - using extended timeout (5 minutes for feature generation)');
  }
  if (effectiveReasoningEffort) {
    logger.info('Reasoning effort:', effectiveReasoningEffort);
  }

  // Determine if we should use structured output based on model type
  const useStructuredOutput = supportsStructuredOutput(model);
  logger.info(
    `Structured output mode: ${useStructuredOutput ? 'enabled (Claude/Codex)' : 'disabled (using JSON instructions)'}`
  );

  // Build the final prompt - for non-Claude/Codex models, include explicit JSON instructions
  let finalPrompt = prompt;
  if (!useStructuredOutput) {
    finalPrompt = `${prompt}

CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
2. After analyzing the spec, respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
3. The JSON must have this exact structure:
{
  "features": [
    {
      "id": "unique-feature-id",
      "category": "Category Name",
      "title": "Short Feature Title",
      "description": "Detailed description of the feature",
      "priority": 1,
      "complexity": "simple|moderate|complex",
      "dependencies": ["other-feature-id"]
    }
  ]
}

4. Feature IDs must be unique, lowercase, kebab-case (e.g., "user-authentication", "data-export")
5. Priority ranges from 1 (highest) to 5 (lowest)
6. Complexity must be one of: "simple", "moderate", "complex"
7. Dependencies is an array of feature IDs that must be completed first (can be empty)

Your entire response should be valid JSON starting with { and ending with }. No text before or after.`;
  }

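The comments above describe bumping the reasoning effort to 'xhigh' for Codex models so the provider applies its extended 5-minute timeout when emitting 50+ features. A condensed sketch of that decision, assuming `isCodexModel` and the effort levels behave as the comments state:

```ts
// Sketch only; mirrors the Codex handling described above.
function resolveFeatureGenerationEffort(
  model: string,
  configuredEffort: string | undefined
): string | undefined {
  // Codex is slower on large feature lists, so force the highest effort,
  // which the provider maps to its 5-minute base timeout.
  return isCodexModel(model) ? 'xhigh' : configuredEffort;
}
```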
  // Use streamingQuery with event callbacks
  const result = await streamingQuery({
    prompt: finalPrompt,
    prompt,
    model,
    cwd: projectPath,
    maxTurns: 250,
    allowedTools: ['Read', 'Glob', 'Grep'],
    abortController,
    thinkingLevel,
    reasoningEffort: effectiveReasoningEffort, // Extended timeout for Codex models
    readOnly: true, // Feature generation only reads code, doesn't write
    settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
    claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
    credentials, // Pass credentials for resolving 'credentials' apiKeySource
    outputFormat: useStructuredOutput
      ? {
          type: 'json_schema',
          schema: featuresOutputSchema,
        }
      : undefined,
    onText: (text) => {
      logger.debug(`Feature text block received (${text.length} chars)`);
      events.emit('spec-regeneration:event', {
@@ -279,51 +163,15 @@ Your entire response should be valid JSON starting with { and ending with }. No
    },
  });

  // Get response content - prefer structured output if available
  let contentForParsing: string;
  const responseText = result.text;

  if (result.structured_output) {
    // Use structured output from Claude/Codex models
    logger.info('✅ Received structured output from model');
    contentForParsing = JSON.stringify(result.structured_output);
    logger.debug('Structured output:', contentForParsing);
  } else {
    // Use text response (for non-Claude/Codex models or fallback)
    // Pre-extract JSON to handle conversational text that may surround the JSON response
    // This follows the same pattern used in generate-spec.ts and validate-issue.ts
    const rawText = result.text;
    logger.info(`Feature stream complete.`);
    logger.info(`Feature response length: ${rawText.length} chars`);
    logger.info('========== FULL RESPONSE TEXT ==========');
    logger.info(rawText);
    logger.info('========== END RESPONSE TEXT ==========');
  logger.info(`Feature stream complete.`);
  logger.info(`Feature response length: ${responseText.length} chars`);
  logger.info('========== FULL RESPONSE TEXT ==========');
  logger.info(responseText);
  logger.info('========== END RESPONSE TEXT ==========');

    // Pre-extract JSON from response - handles conversational text around the JSON
    const extracted = extractJsonWithArray<FeaturesExtractionResult>(rawText, 'features', {
      logger,
    });
    if (extracted) {
      contentForParsing = JSON.stringify(extracted);
      logger.info('✅ Pre-extracted JSON from text response');
    } else {
      // If pre-extraction fails, we know the next step will also fail.
      // Throw an error here to avoid redundant parsing and make the failure point clearer.
      logger.error(
        '❌ Could not extract features JSON from model response. Full response text was:\n' +
          rawText
      );
      const errorMessage =
        'Failed to parse features from model response: No valid JSON with a "features" array found.';
      events.emit('spec-regeneration:event', {
        type: 'spec_regeneration_error',
        error: errorMessage,
        projectPath: projectPath,
      });
      throw new Error(errorMessage);
    }
  }

  await parseAndCreateFeatures(projectPath, contentForParsing, events);
  await parseAndCreateFeatures(projectPath, responseText, events);

  logger.debug('========== generateFeaturesFromSpec() completed ==========');
}

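The parsing block above prefers `result.structured_output`, falls back to extracting a JSON object that contains a `features` array from free-form text, and throws early when neither works. A compact sketch of that fallback order, with `extractJsonWithArray` assumed to return the parsed object or null as it does in the surrounding code:

```ts
// Sketch of the parse order shown above, not a drop-in replacement for it.
function resolveFeaturesPayload(result: { text: string; structured_output?: unknown }): string {
  if (result.structured_output) {
    return JSON.stringify(result.structured_output); // trust structured output when present
  }
  const extracted = extractJsonWithArray<FeaturesExtractionResult>(result.text, 'features', {
    logger,
  });
  if (extracted) {
    return JSON.stringify(extracted); // JSON recovered from conversational text
  }
  throw new Error(
    'Failed to parse features from model response: No valid JSON with a "features" array found.'
  );
}
```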
@@ -9,7 +9,7 @@ import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { specOutputSchema, specToXml, type SpecOutput } from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, supportsStructuredOutput } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isClaudeModel, isCodexModel } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { extractJson } from '../../lib/json-extractor.js';
import { streamingQuery } from '../../providers/simple-query-service.js';
@@ -120,13 +120,10 @@ ${prompts.appSpec.structuredSpecInstructions}`;
let responseText = '';
let structuredOutput: SpecOutput | null = null;

// Determine if we should use structured output based on model type
const useStructuredOutput = supportsStructuredOutput(model);
logger.info(
  `Structured output mode: ${useStructuredOutput ? 'enabled (Claude/Codex)' : 'disabled (using JSON instructions)'}`
);
// Determine if we should use structured output (only Claude and Codex support it)
const useStructuredOutput = isClaudeModel(model) || isCodexModel(model);

// Build the final prompt - for non-Claude/Codex models, include JSON schema instructions
// Build the final prompt - for Cursor, include JSON schema instructions
let finalPrompt = prompt;
if (!useStructuredOutput) {
  finalPrompt = `${prompt}

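Both sides of this hunk gate structured output on the model family: one through a `supportsStructuredOutput` helper, the other through `isClaudeModel(model) || isCodexModel(model)`. The validate-issue hunk further down notes that Claude and Codex support structured output while Cursor, Gemini, OpenCode, and Copilot do not, so the helper is presumably equivalent to that disjunction. A sketch under that assumption:

```ts
// Assumed shape of the helper; the diff only shows it imported from @automaker/types.
function supportsStructuredOutput(model: string): boolean {
  // Claude and Codex accept a JSON schema output format; other providers
  // fall back to explicit JSON instructions appended to the prompt.
  return isClaudeModel(model) || isCodexModel(model);
}
```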
@@ -10,10 +10,9 @@
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, supportsStructuredOutput } from '@automaker/types';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { extractJson } from '../../lib/json-extractor.js';
import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import {
@@ -35,28 +34,6 @@ import { getNotificationService } from '../../services/notification-service.js';

const logger = createLogger('SpecSync');

/**
 * Type for extracted tech stack JSON response
 */
interface TechStackExtractionResult {
  technologies: string[];
}

/**
 * JSON schema for tech stack analysis output (Claude/Codex structured output)
 */
const techStackOutputSchema = {
  type: 'object',
  properties: {
    technologies: {
      type: 'array',
      items: { type: 'string' },
      description: 'List of technologies detected in the project',
    },
  },
  required: ['technologies'],
} as const;

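In the sync hunks that follow, the detected technologies are compared against the current stack case-insensitively to produce added and removed lists. A minimal sketch of that set-difference step (the helper name is illustrative; the diff performs the comparison inline):

```ts
// Illustrative helper mirroring the inline comparison in syncSpec below.
function diffTechStack(current: string[], detected: string[]): { added: string[]; removed: string[] } {
  const currentSet = new Set(current.map((t) => t.toLowerCase()));
  const detectedSet = new Set(detected.map((t) => t.toLowerCase()));
  return {
    added: detected.filter((t) => !currentSet.has(t.toLowerCase())),
    removed: current.filter((t) => !detectedSet.has(t.toLowerCase())),
  };
}
```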
/**
|
||||
* Result of a sync operation
|
||||
*/
|
||||
@@ -199,14 +176,8 @@ export async function syncSpec(
|
||||
|
||||
logger.info('Using model:', model, provider ? `via provider: ${provider.name}` : 'direct API');
|
||||
|
||||
// Determine if we should use structured output based on model type
|
||||
const useStructuredOutput = supportsStructuredOutput(model);
|
||||
logger.info(
|
||||
`Structured output mode: ${useStructuredOutput ? 'enabled (Claude/Codex)' : 'disabled (using JSON instructions)'}`
|
||||
);
|
||||
|
||||
// Use AI to analyze tech stack
|
||||
let techAnalysisPrompt = `Analyze this project and return ONLY a JSON object with the current technology stack.
|
||||
const techAnalysisPrompt = `Analyze this project and return ONLY a JSON object with the current technology stack.
|
||||
|
||||
Current known technologies: ${currentTechStack.join(', ')}
|
||||
|
||||
@@ -222,16 +193,6 @@ Return ONLY this JSON format, no other text:
|
||||
"technologies": ["Technology 1", "Technology 2", ...]
|
||||
}`;
|
||||
|
||||
// Add explicit JSON instructions for non-Claude/Codex models
|
||||
if (!useStructuredOutput) {
|
||||
techAnalysisPrompt = `${techAnalysisPrompt}
|
||||
|
||||
CRITICAL INSTRUCTIONS:
|
||||
1. DO NOT write any files. Return the JSON in your response only.
|
||||
2. Your entire response should be valid JSON starting with { and ending with }.
|
||||
3. No explanations, no markdown, no text before or after the JSON.`;
|
||||
}
|
||||
|
||||
try {
|
||||
const techResult = await streamingQuery({
|
||||
prompt: techAnalysisPrompt,
|
||||
@@ -245,67 +206,44 @@ CRITICAL INSTRUCTIONS:
|
||||
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
|
||||
claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
|
||||
credentials, // Pass credentials for resolving 'credentials' apiKeySource
|
||||
outputFormat: useStructuredOutput
|
||||
? {
|
||||
type: 'json_schema',
|
||||
schema: techStackOutputSchema,
|
||||
}
|
||||
: undefined,
|
||||
onText: (text) => {
|
||||
logger.debug(`Tech analysis text: ${text.substring(0, 100)}`);
|
||||
},
|
||||
});
|
||||
|
||||
// Parse tech stack from response - prefer structured output if available
|
||||
let parsedTechnologies: string[] | null = null;
|
||||
// Parse tech stack from response
|
||||
const jsonMatch = techResult.text.match(/\{[\s\S]*"technologies"[\s\S]*\}/);
|
||||
if (jsonMatch) {
|
||||
const parsed = JSON.parse(jsonMatch[0]);
|
||||
if (Array.isArray(parsed.technologies)) {
|
||||
const newTechStack = parsed.technologies as string[];
|
||||
|
||||
if (techResult.structured_output) {
|
||||
// Use structured output from Claude/Codex models
|
||||
const structured = techResult.structured_output as unknown as TechStackExtractionResult;
|
||||
if (Array.isArray(structured.technologies)) {
|
||||
parsedTechnologies = structured.technologies;
|
||||
logger.info('✅ Received structured output for tech analysis');
|
||||
}
|
||||
} else {
|
||||
// Fall back to text parsing for non-Claude/Codex models
|
||||
const extracted = extractJson<TechStackExtractionResult>(techResult.text, {
|
||||
logger,
|
||||
requiredKey: 'technologies',
|
||||
requireArray: true,
|
||||
});
|
||||
if (extracted && Array.isArray(extracted.technologies)) {
|
||||
parsedTechnologies = extracted.technologies;
|
||||
logger.info('✅ Extracted tech stack from text response');
|
||||
} else {
|
||||
logger.warn('⚠️ Failed to extract tech stack JSON from response');
|
||||
}
|
||||
}
|
||||
// Calculate differences
|
||||
const currentSet = new Set(currentTechStack.map((t) => t.toLowerCase()));
|
||||
const newSet = new Set(newTechStack.map((t) => t.toLowerCase()));
|
||||
|
||||
if (parsedTechnologies) {
|
||||
const newTechStack = parsedTechnologies;
|
||||
|
||||
// Calculate differences
|
||||
const currentSet = new Set(currentTechStack.map((t) => t.toLowerCase()));
|
||||
const newSet = new Set(newTechStack.map((t) => t.toLowerCase()));
|
||||
|
||||
for (const tech of newTechStack) {
|
||||
if (!currentSet.has(tech.toLowerCase())) {
|
||||
result.techStackUpdates.added.push(tech);
|
||||
for (const tech of newTechStack) {
|
||||
if (!currentSet.has(tech.toLowerCase())) {
|
||||
result.techStackUpdates.added.push(tech);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for (const tech of currentTechStack) {
|
||||
if (!newSet.has(tech.toLowerCase())) {
|
||||
result.techStackUpdates.removed.push(tech);
|
||||
for (const tech of currentTechStack) {
|
||||
if (!newSet.has(tech.toLowerCase())) {
|
||||
result.techStackUpdates.removed.push(tech);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Update spec with new tech stack if there are changes
|
||||
if (result.techStackUpdates.added.length > 0 || result.techStackUpdates.removed.length > 0) {
|
||||
specContent = updateTechnologyStack(specContent, newTechStack);
|
||||
logger.info(
|
||||
`Updated tech stack: +${result.techStackUpdates.added.length}, -${result.techStackUpdates.removed.length}`
|
||||
);
|
||||
// Update spec with new tech stack if there are changes
|
||||
if (
|
||||
result.techStackUpdates.added.length > 0 ||
|
||||
result.techStackUpdates.removed.length > 0
|
||||
) {
|
||||
specContent = updateTechnologyStack(specContent, newTechStack);
|
||||
logger.info(
|
||||
`Updated tech stack: +${result.techStackUpdates.added.length}, -${result.techStackUpdates.removed.length}`
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
|
||||
@@ -1,12 +1,11 @@
|
||||
/**
|
||||
* Auto Mode routes - HTTP API for autonomous feature implementation
|
||||
*
|
||||
* Uses AutoModeServiceCompat which provides the old interface while
|
||||
* delegating to GlobalAutoModeService and per-project facades.
|
||||
* Uses the AutoModeService for real feature execution with Claude Agent SDK
|
||||
*/
|
||||
|
||||
import { Router } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../services/auto-mode-service.js';
|
||||
import { validatePathParams } from '../../middleware/validate-paths.js';
|
||||
import { createStopFeatureHandler } from './routes/stop-feature.js';
|
||||
import { createStatusHandler } from './routes/status.js';
|
||||
@@ -22,12 +21,7 @@ import { createCommitFeatureHandler } from './routes/commit-feature.js';
|
||||
import { createApprovePlanHandler } from './routes/approve-plan.js';
|
||||
import { createResumeInterruptedHandler } from './routes/resume-interrupted.js';
|
||||
|
||||
/**
|
||||
* Create auto-mode routes.
|
||||
*
|
||||
* @param autoModeService - AutoModeServiceCompat instance
|
||||
*/
|
||||
export function createAutoModeRoutes(autoModeService: AutoModeServiceCompat): Router {
|
||||
export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
|
||||
const router = Router();
|
||||
|
||||
// Auto loop control routes
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createAnalyzeProjectHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createAnalyzeProjectHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath } = req.body as { projectPath: string };
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createApprovePlanHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createApprovePlanHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { featureId, approved, editedPlan, feedback, projectPath } = req.body as {
|
||||
@@ -48,11 +48,11 @@ export function createApprovePlanHandler(autoModeService: AutoModeServiceCompat)
|
||||
|
||||
// Resolve the pending approval (with recovery support)
|
||||
const result = await autoModeService.resolvePlanApproval(
|
||||
projectPath || '',
|
||||
featureId,
|
||||
approved,
|
||||
editedPlan,
|
||||
feedback
|
||||
feedback,
|
||||
projectPath
|
||||
);
|
||||
|
||||
if (!result.success) {
|
||||
|
||||
@@ -3,10 +3,10 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createCommitFeatureHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createCommitFeatureHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, featureId, worktreePath } = req.body as {
|
||||
|
||||
@@ -3,10 +3,10 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createContextExistsHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createContextExistsHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, featureId } = req.body as {
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createFollowUpFeatureHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createFollowUpFeatureHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, featureId, prompt, imagePaths, useWorktrees } = req.body as {
|
||||
@@ -30,12 +30,16 @@ export function createFollowUpFeatureHandler(autoModeService: AutoModeServiceCom
|
||||
|
||||
// Start follow-up in background
|
||||
// followUpFeature derives workDir from feature.branchName
|
||||
// Default to false to match run-feature/resume-feature behavior.
|
||||
// Worktrees should only be used when explicitly enabled by the user.
|
||||
autoModeService
|
||||
// Default to false to match run-feature/resume-feature behavior.
|
||||
// Worktrees should only be used when explicitly enabled by the user.
|
||||
.followUpFeature(projectPath, featureId, prompt, imagePaths, useWorktrees ?? false)
|
||||
.catch((error) => {
|
||||
logger.error(`[AutoMode] Follow up feature ${featureId} error:`, error);
|
||||
})
|
||||
.finally(() => {
|
||||
// Release the starting slot when follow-up completes (success or error)
|
||||
// Note: The feature should be in runningFeatures by this point
|
||||
});
|
||||
|
||||
res.json({ success: true });
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createResumeFeatureHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createResumeFeatureHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, featureId, useWorktrees } = req.body as {
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
|
||||
const logger = createLogger('ResumeInterrupted');
|
||||
|
||||
@@ -15,7 +15,7 @@ interface ResumeInterruptedRequest {
|
||||
projectPath: string;
|
||||
}
|
||||
|
||||
export function createResumeInterruptedHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createResumeInterruptedHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
const { projectPath } = req.body as ResumeInterruptedRequest;
|
||||
|
||||
@@ -28,7 +28,6 @@ export function createResumeInterruptedHandler(autoModeService: AutoModeServiceC
|
||||
|
||||
try {
|
||||
await autoModeService.resumeInterruptedFeatures(projectPath);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
message: 'Resume check completed',
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createRunFeatureHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createRunFeatureHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, featureId, useWorktrees } = req.body as {
|
||||
@@ -50,6 +50,10 @@ export function createRunFeatureHandler(autoModeService: AutoModeServiceCompat)
|
||||
.executeFeature(projectPath, featureId, useWorktrees ?? false, false)
|
||||
.catch((error) => {
|
||||
logger.error(`Feature ${featureId} error:`, error);
|
||||
})
|
||||
.finally(() => {
|
||||
// Release the starting slot when execution completes (success or error)
|
||||
// Note: The feature should be in runningFeatures by this point
|
||||
});
|
||||
|
||||
res.json({ success: true });
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createStartHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createStartHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, branchName, maxConcurrency } = req.body as {
|
||||
|
||||
@@ -6,13 +6,10 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
/**
|
||||
* Create status handler.
|
||||
*/
|
||||
export function createStatusHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createStatusHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, branchName } = req.body as {
|
||||
@@ -24,7 +21,6 @@ export function createStatusHandler(autoModeService: AutoModeServiceCompat) {
|
||||
if (projectPath) {
|
||||
// Normalize branchName: undefined becomes null
|
||||
const normalizedBranchName = branchName ?? null;
|
||||
|
||||
const projectStatus = autoModeService.getStatusForProject(
|
||||
projectPath,
|
||||
normalizedBranchName
|
||||
@@ -42,7 +38,7 @@ export function createStatusHandler(autoModeService: AutoModeServiceCompat) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Global status for backward compatibility
|
||||
// Fall back to global status for backward compatibility
|
||||
const status = autoModeService.getStatus();
|
||||
const activeProjects = autoModeService.getActiveAutoLoopProjects();
|
||||
const activeWorktrees = autoModeService.getActiveAutoLoopWorktrees();
|
||||
|
||||
@@ -3,10 +3,10 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createStopFeatureHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createStopFeatureHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { featureId } = req.body as { featureId: string };
|
||||
|
||||
@@ -3,13 +3,13 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('AutoMode');
|
||||
|
||||
export function createStopHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createStopHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, branchName } = req.body as {
|
||||
|
||||
@@ -3,10 +3,10 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createVerifyFeatureHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createVerifyFeatureHandler(autoModeService: AutoModeService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, featureId } = req.body as {
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
import { Router } from 'express';
|
||||
import { FeatureLoader } from '../../services/feature-loader.js';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
import type { AutoModeServiceCompat } from '../../services/auto-mode/index.js';
|
||||
import type { EventEmitter } from '../../lib/events.js';
|
||||
import { validatePathParams } from '../../middleware/validate-paths.js';
|
||||
import { createListHandler } from './routes/list.js';
|
||||
@@ -23,16 +22,11 @@ import { createImportHandler, createConflictCheckHandler } from './routes/import
|
||||
export function createFeaturesRoutes(
|
||||
featureLoader: FeatureLoader,
|
||||
settingsService?: SettingsService,
|
||||
events?: EventEmitter,
|
||||
autoModeService?: AutoModeServiceCompat
|
||||
events?: EventEmitter
|
||||
): Router {
|
||||
const router = Router();
|
||||
|
||||
router.post(
|
||||
'/list',
|
||||
validatePathParams('projectPath'),
|
||||
createListHandler(featureLoader, autoModeService)
|
||||
);
|
||||
router.post('/list', validatePathParams('projectPath'), createListHandler(featureLoader));
|
||||
router.post('/get', validatePathParams('projectPath'), createGetHandler(featureLoader));
|
||||
router.post(
|
||||
'/create',
|
||||
|
||||
@@ -43,7 +43,7 @@ export function createCreateHandler(featureLoader: FeatureLoader, events?: Event
|
||||
if (events) {
|
||||
events.emit('feature:created', {
|
||||
featureId: created.id,
|
||||
featureName: created.title || 'Untitled Feature',
|
||||
featureName: created.name,
|
||||
projectPath,
|
||||
});
|
||||
}
|
||||
|
||||
@@ -1,22 +1,12 @@
|
||||
/**
|
||||
* POST /list endpoint - List all features for a project
|
||||
*
|
||||
* Also performs orphan detection when a project is loaded to identify
|
||||
* features whose branches no longer exist. This runs on every project load/switch.
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { FeatureLoader } from '../../../services/feature-loader.js';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
|
||||
const logger = createLogger('FeaturesListRoute');
|
||||
|
||||
export function createListHandler(
|
||||
featureLoader: FeatureLoader,
|
||||
autoModeService?: AutoModeServiceCompat
|
||||
) {
|
||||
export function createListHandler(featureLoader: FeatureLoader) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath } = req.body as { projectPath: string };
|
||||
@@ -27,26 +17,6 @@ export function createListHandler(
|
||||
}
|
||||
|
||||
const features = await featureLoader.getAll(projectPath);
|
||||
|
||||
// Run orphan detection in background when project is loaded
|
||||
// This detects features whose branches no longer exist (e.g., after merge/delete)
|
||||
// We don't await this to keep the list response fast
|
||||
// Note: detectOrphanedFeatures handles errors internally and always resolves
|
||||
if (autoModeService) {
|
||||
autoModeService.detectOrphanedFeatures(projectPath).then((orphanedFeatures) => {
|
||||
if (orphanedFeatures.length > 0) {
|
||||
logger.info(
|
||||
`[ProjectLoad] Detected ${orphanedFeatures.length} orphaned feature(s) in ${projectPath}`
|
||||
);
|
||||
for (const { feature, missingBranch } of orphanedFeatures) {
|
||||
logger.info(
|
||||
`[ProjectLoad] Orphaned: ${feature.title || feature.id} - branch "${missingBranch}" no longer exists`
|
||||
);
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
res.json({ success: true, features });
|
||||
} catch (error) {
|
||||
logError(error, 'List features failed');
|
||||
|
||||
@@ -31,9 +31,7 @@ export function createSaveBoardBackgroundHandler() {
  await secureFs.mkdir(boardDir, { recursive: true });

  // Decode base64 data (remove data URL prefix if present)
  // Use a regex that handles all data URL formats including those with extra params
  // e.g., data:image/gif;charset=utf-8;base64,R0lGOD...
  const base64Data = data.replace(/^data:[^,]+,/, '');
  const base64Data = data.replace(/^data:image\/\w+;base64,/, '');
  const buffer = Buffer.from(base64Data, 'base64');

  // Use a fixed filename for the board background (overwrite previous)

@@ -31,9 +31,7 @@ export function createSaveImageHandler() {
  await secureFs.mkdir(imagesDir, { recursive: true });

  // Decode base64 data (remove data URL prefix if present)
  // Use a regex that handles all data URL formats including those with extra params
  // e.g., data:image/gif;charset=utf-8;base64,R0lGOD...
  const base64Data = data.replace(/^data:[^,]+,/, '');
  const base64Data = data.replace(/^data:image\/\w+;base64,/, '');
  const buffer = Buffer.from(base64Data, 'base64');

  // Generate unique filename with timestamp

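Both handlers strip the data-URL prefix before decoding. The broader `/^data:[^,]+,/` pattern also matches prefixes that carry extra parameters, which the narrower `image\/\w+;base64` form misses. A quick sketch of the difference on the example from the comments:

```ts
// The same GIF data URL from the comments above, run through both patterns.
const dataUrl = 'data:image/gif;charset=utf-8;base64,R0lGODlhAQABAAAAACw=';

const broad = dataUrl.replace(/^data:[^,]+,/, ''); // 'R0lGODlhAQABAAAAACw='
const narrow = dataUrl.replace(/^data:image\/\w+;base64,/, ''); // unchanged: the charset param breaks the match

console.log(broad !== narrow); // true - only the broad pattern strips this prefix
```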
@@ -23,7 +23,6 @@ import {
|
||||
isCodexModel,
|
||||
isCursorModel,
|
||||
isOpencodeModel,
|
||||
supportsStructuredOutput,
|
||||
} from '@automaker/types';
|
||||
import { resolvePhaseModel } from '@automaker/model-resolver';
|
||||
import { extractJson } from '../../../lib/json-extractor.js';
|
||||
@@ -125,9 +124,8 @@ async function runValidation(
|
||||
const prompts = await getPromptCustomization(settingsService, '[ValidateIssue]');
|
||||
const issueValidationSystemPrompt = prompts.issueValidation.systemPrompt;
|
||||
|
||||
// Determine if we should use structured output based on model type
|
||||
// Claude and Codex support it; Cursor, Gemini, OpenCode, Copilot don't
|
||||
const useStructuredOutput = supportsStructuredOutput(model);
|
||||
// Determine if we should use structured output (Claude/Codex support it, Cursor/OpenCode don't)
|
||||
const useStructuredOutput = isClaudeModel(model) || isCodexModel(model);
|
||||
|
||||
// Build the final prompt - for Cursor, include system prompt and JSON schema instructions
|
||||
let finalPrompt = basePrompt;
|
||||
|
||||
@@ -4,21 +4,15 @@
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { IdeationService } from '../../../services/ideation-service.js';
|
||||
import type { IdeationContextSources } from '@automaker/types';
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
const logger = createLogger('ideation:suggestions-generate');
|
||||
|
||||
/**
|
||||
* Creates an Express route handler for generating AI-powered ideation suggestions.
|
||||
* Accepts a prompt, category, and optional context sources configuration,
|
||||
* then returns structured suggestions that can be added to the board.
|
||||
*/
|
||||
export function createSuggestionsGenerateHandler(ideationService: IdeationService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, promptId, category, count, contextSources } = req.body;
|
||||
const { projectPath, promptId, category, count } = req.body;
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
@@ -44,8 +38,7 @@ export function createSuggestionsGenerateHandler(ideationService: IdeationServic
|
||||
projectPath,
|
||||
promptId,
|
||||
category,
|
||||
suggestionCount,
|
||||
contextSources as IdeationContextSources | undefined
|
||||
suggestionCount
|
||||
);
|
||||
|
||||
res.json({
|
||||
|
||||
@@ -4,14 +4,14 @@
|
||||
|
||||
import { Router } from 'express';
|
||||
import type { FeatureLoader } from '../../services/feature-loader.js';
|
||||
import type { AutoModeServiceCompat } from '../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../services/auto-mode-service.js';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
import type { NotificationService } from '../../services/notification-service.js';
|
||||
import { createOverviewHandler } from './routes/overview.js';
|
||||
|
||||
export function createProjectsRoutes(
|
||||
featureLoader: FeatureLoader,
|
||||
autoModeService: AutoModeServiceCompat,
|
||||
autoModeService: AutoModeService,
|
||||
settingsService: SettingsService,
|
||||
notificationService: NotificationService
|
||||
): Router {
|
||||
|
||||
@@ -9,11 +9,7 @@
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { FeatureLoader } from '../../../services/feature-loader.js';
|
||||
import type {
|
||||
AutoModeServiceCompat,
|
||||
RunningAgentInfo,
|
||||
ProjectAutoModeStatus,
|
||||
} from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import type { SettingsService } from '../../../services/settings-service.js';
|
||||
import type { NotificationService } from '../../../services/notification-service.js';
|
||||
import type {
|
||||
@@ -151,7 +147,7 @@ function getLastActivityAt(features: Feature[]): string | undefined {
|
||||
|
||||
export function createOverviewHandler(
|
||||
featureLoader: FeatureLoader,
|
||||
autoModeService: AutoModeServiceCompat,
|
||||
autoModeService: AutoModeService,
|
||||
settingsService: SettingsService,
|
||||
notificationService: NotificationService
|
||||
) {
|
||||
@@ -162,7 +158,7 @@ export function createOverviewHandler(
|
||||
const projectRefs: ProjectRef[] = settings.projects || [];
|
||||
|
||||
// Get all running agents once to count live running features per project
|
||||
const allRunningAgents: RunningAgentInfo[] = await autoModeService.getRunningAgents();
|
||||
const allRunningAgents = await autoModeService.getRunningAgents();
|
||||
|
||||
// Collect project statuses in parallel
|
||||
const projectStatusPromises = projectRefs.map(async (projectRef): Promise<ProjectStatus> => {
|
||||
@@ -173,10 +169,7 @@ export function createOverviewHandler(
|
||||
const totalFeatures = features.length;
|
||||
|
||||
// Get auto-mode status for this project (main worktree, branchName = null)
|
||||
const autoModeStatus: ProjectAutoModeStatus = autoModeService.getStatusForProject(
|
||||
projectRef.path,
|
||||
null
|
||||
);
|
||||
const autoModeStatus = autoModeService.getStatusForProject(projectRef.path, null);
|
||||
const isAutoModeRunning = autoModeStatus.isAutoLoopRunning;
|
||||
|
||||
// Count live running features for this project (across all branches)
|
||||
|
||||
@@ -3,10 +3,10 @@
|
||||
*/
|
||||
|
||||
import { Router } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../services/auto-mode-service.js';
|
||||
import { createIndexHandler } from './routes/index.js';
|
||||
|
||||
export function createRunningAgentsRoutes(autoModeService: AutoModeServiceCompat): Router {
|
||||
export function createRunningAgentsRoutes(autoModeService: AutoModeService): Router {
|
||||
const router = Router();
|
||||
|
||||
router.get('/', createIndexHandler(autoModeService));
|
||||
|
||||
@@ -3,17 +3,16 @@
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { AutoModeServiceCompat } from '../../../services/auto-mode/index.js';
|
||||
import type { AutoModeService } from '../../../services/auto-mode-service.js';
|
||||
import { getBacklogPlanStatus, getRunningDetails } from '../../backlog-plan/common.js';
|
||||
import { getAllRunningGenerations } from '../../app-spec/common.js';
|
||||
import path from 'path';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createIndexHandler(autoModeService: AutoModeServiceCompat) {
|
||||
export function createIndexHandler(autoModeService: AutoModeService) {
|
||||
return async (_req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const runningAgents = [...(await autoModeService.getRunningAgents())];
|
||||
|
||||
const backlogPlanStatus = getBacklogPlanStatus();
|
||||
const backlogPlanDetails = getRunningDetails();
|
||||
|
||||
|
||||
@@ -39,15 +39,8 @@ interface GitHubRemoteCacheEntry {
  checkedAt: number;
}

interface GitHubPRCacheEntry {
  prs: Map<string, WorktreePRInfo>;
  fetchedAt: number;
}

const githubRemoteCache = new Map<string, GitHubRemoteCacheEntry>();
const githubPRCache = new Map<string, GitHubPRCacheEntry>();
const GITHUB_REMOTE_CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes
const GITHUB_PR_CACHE_TTL_MS = 2 * 60 * 1000; // 2 minutes - avoid hitting GitHub on every poll

interface WorktreeInfo {
  path: string;
@@ -187,21 +180,9 @@ async function getGitHubRemoteStatus(projectPath: string): Promise<GitHubRemoteS
 * This also allows detecting PRs that were created outside the app.
 *
 * Uses cached GitHub remote status to avoid repeated warnings when the
 * project doesn't have a GitHub remote configured. Results are cached
 * briefly to avoid hammering GitHub on frequent worktree polls.
 * project doesn't have a GitHub remote configured.
 */
async function fetchGitHubPRs(
  projectPath: string,
  forceRefresh = false
): Promise<Map<string, WorktreePRInfo>> {
  const now = Date.now();
  const cached = githubPRCache.get(projectPath);

  // Return cached result if valid and not forcing refresh
  if (!forceRefresh && cached && now - cached.fetchedAt < GITHUB_PR_CACHE_TTL_MS) {
    return cached.prs;
  }

async function fetchGitHubPRs(projectPath: string): Promise<Map<string, WorktreePRInfo>> {
  const prMap = new Map<string, WorktreePRInfo>();

  try {
@@ -244,22 +225,8 @@ async function fetchGitHubPRs(
        createdAt: pr.createdAt,
      });
    }

    // Only update cache on successful fetch
    githubPRCache.set(projectPath, {
      prs: prMap,
      fetchedAt: Date.now(),
    });
  } catch (error) {
    // On fetch failure, return stale cached data if available to avoid
    // repeated API calls during GitHub API flakiness or temporary outages
    if (cached) {
      logger.warn(`Failed to fetch GitHub PRs, returning stale cache: ${getErrorMessage(error)}`);
      // Extend cache TTL to avoid repeated retries during outages
      githubPRCache.set(projectPath, { prs: cached.prs, fetchedAt: Date.now() });
      return cached.prs;
    }
    // No cache available, log warning and return empty map
    // Silently fail - PR detection is optional
    logger.warn(`Failed to fetch GitHub PRs: ${getErrorMessage(error)}`);
  }

@@ -397,7 +364,7 @@ export function createListHandler() {
  // Only fetch GitHub PRs if includeDetails is requested (performance optimization).
  // Uses --state all to detect merged/closed PRs, limited to 1000 recent PRs.
  const githubPRs = includeDetails
    ? await fetchGitHubPRs(projectPath, forceRefreshGitHub)
    ? await fetchGitHubPRs(projectPath)
    : new Map<string, WorktreePRInfo>();

  for (const worktree of worktrees) {

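The caching behavior around `fetchGitHubPRs` above (serve within the TTL, overwrite the entry on a successful fetch, and on failure fall back to stale data while pushing the timestamp forward so outages do not cause retry storms) is a reusable pattern. A generic sketch under the same assumptions:

```ts
// Generic sketch of the TTL-plus-stale-fallback caching used by fetchGitHubPRs above.
interface CacheEntry<T> {
  value: T;
  fetchedAt: number;
}

async function cachedFetch<T>(
  cache: Map<string, CacheEntry<T>>,
  key: string,
  ttlMs: number,
  fetcher: () => Promise<T>,
  fallback: T
): Promise<T> {
  const now = Date.now();
  const cached = cache.get(key);
  if (cached && now - cached.fetchedAt < ttlMs) {
    return cached.value; // fresh enough, skip the network call
  }
  try {
    const value = await fetcher();
    cache.set(key, { value, fetchedAt: Date.now() }); // only cache successful fetches
    return value;
  } catch {
    if (cached) {
      // Serve stale data and refresh the timestamp so failures are not retried on every poll.
      cache.set(key, { value: cached.value, fetchedAt: Date.now() });
      return cached.value;
    }
    return fallback; // nothing cached, degrade gracefully
  }
}
```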
@@ -1,83 +0,0 @@
|
||||
/**
|
||||
* AgentExecutor Types - Type definitions for agent execution
|
||||
*/
|
||||
|
||||
import type {
|
||||
PlanningMode,
|
||||
ThinkingLevel,
|
||||
ParsedTask,
|
||||
ClaudeCompatibleProvider,
|
||||
Credentials,
|
||||
} from '@automaker/types';
|
||||
import type { BaseProvider } from '../providers/base-provider.js';
|
||||
|
||||
export interface AgentExecutionOptions {
|
||||
workDir: string;
|
||||
featureId: string;
|
||||
prompt: string;
|
||||
projectPath: string;
|
||||
abortController: AbortController;
|
||||
imagePaths?: string[];
|
||||
model?: string;
|
||||
planningMode?: PlanningMode;
|
||||
requirePlanApproval?: boolean;
|
||||
previousContent?: string;
|
||||
systemPrompt?: string;
|
||||
autoLoadClaudeMd?: boolean;
|
||||
thinkingLevel?: ThinkingLevel;
|
||||
branchName?: string | null;
|
||||
credentials?: Credentials;
|
||||
claudeCompatibleProvider?: ClaudeCompatibleProvider;
|
||||
mcpServers?: Record<string, unknown>;
|
||||
sdkOptions?: {
|
||||
maxTurns?: number;
|
||||
allowedTools?: string[];
|
||||
systemPrompt?: string | { type: 'preset'; preset: 'claude_code'; append?: string };
|
||||
settingSources?: Array<'user' | 'project' | 'local'>;
|
||||
};
|
||||
provider: BaseProvider;
|
||||
effectiveBareModel: string;
|
||||
specAlreadyDetected?: boolean;
|
||||
existingApprovedPlanContent?: string;
|
||||
persistedTasks?: ParsedTask[];
|
||||
}
|
||||
|
||||
export interface AgentExecutionResult {
|
||||
responseText: string;
|
||||
specDetected: boolean;
|
||||
tasksCompleted: number;
|
||||
aborted: boolean;
|
||||
}
|
||||
|
||||
export type WaitForApprovalFn = (
|
||||
featureId: string,
|
||||
projectPath: string
|
||||
) => Promise<{ approved: boolean; feedback?: string; editedPlan?: string }>;
|
||||
|
||||
export type SaveFeatureSummaryFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
summary: string
|
||||
) => Promise<void>;
|
||||
|
||||
export type UpdateFeatureSummaryFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
summary: string
|
||||
) => Promise<void>;
|
||||
|
||||
export type BuildTaskPromptFn = (
|
||||
task: ParsedTask,
|
||||
allTasks: ParsedTask[],
|
||||
taskIndex: number,
|
||||
planContent: string,
|
||||
taskPromptTemplate: string,
|
||||
userFeedback?: string
|
||||
) => string;
|
||||
|
||||
export interface AgentExecutorCallbacks {
|
||||
waitForApproval: WaitForApprovalFn;
|
||||
saveFeatureSummary: SaveFeatureSummaryFn;
|
||||
updateFeatureSummary: UpdateFeatureSummaryFn;
|
||||
buildTaskPrompt: BuildTaskPromptFn;
|
||||
}
|
||||
@@ -1,693 +0,0 @@
|
||||
/**
|
||||
* AgentExecutor - Core agent execution engine with streaming support
|
||||
*/
|
||||
|
||||
import path from 'path';
|
||||
import type { ExecuteOptions, ParsedTask } from '@automaker/types';
|
||||
import { buildPromptWithImages, createLogger } from '@automaker/utils';
|
||||
import { getFeatureDir } from '@automaker/platform';
|
||||
import * as secureFs from '../lib/secure-fs.js';
|
||||
import { TypedEventBus } from './typed-event-bus.js';
|
||||
import { FeatureStateManager } from './feature-state-manager.js';
|
||||
import { PlanApprovalService } from './plan-approval-service.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
import {
|
||||
parseTasksFromSpec,
|
||||
detectTaskStartMarker,
|
||||
detectTaskCompleteMarker,
|
||||
detectPhaseCompleteMarker,
|
||||
detectSpecFallback,
|
||||
extractSummary,
|
||||
} from './spec-parser.js';
|
||||
import { getPromptCustomization } from '../lib/settings-helpers.js';
|
||||
import type {
|
||||
AgentExecutionOptions,
|
||||
AgentExecutionResult,
|
||||
AgentExecutorCallbacks,
|
||||
} from './agent-executor-types.js';
|
||||
|
||||
// Re-export types for backward compatibility
|
||||
export type {
|
||||
AgentExecutionOptions,
|
||||
AgentExecutionResult,
|
||||
WaitForApprovalFn,
|
||||
SaveFeatureSummaryFn,
|
||||
UpdateFeatureSummaryFn,
|
||||
BuildTaskPromptFn,
|
||||
} from './agent-executor-types.js';
|
||||
|
||||
const logger = createLogger('AgentExecutor');
|
||||
|
||||
export class AgentExecutor {
|
||||
private static readonly WRITE_DEBOUNCE_MS = 500;
|
||||
private static readonly STREAM_HEARTBEAT_MS = 15_000;
|
||||
|
||||
constructor(
|
||||
private eventBus: TypedEventBus,
|
||||
private featureStateManager: FeatureStateManager,
|
||||
private planApprovalService: PlanApprovalService,
|
||||
private settingsService: SettingsService | null = null
|
||||
) {}
|
||||
|
||||
async execute(
|
||||
options: AgentExecutionOptions,
|
||||
callbacks: AgentExecutorCallbacks
|
||||
): Promise<AgentExecutionResult> {
|
||||
const {
|
||||
workDir,
|
||||
featureId,
|
||||
projectPath,
|
||||
abortController,
|
||||
branchName = null,
|
||||
provider,
|
||||
effectiveBareModel,
|
||||
previousContent,
|
||||
planningMode = 'skip',
|
||||
requirePlanApproval = false,
|
||||
specAlreadyDetected = false,
|
||||
existingApprovedPlanContent,
|
||||
persistedTasks,
|
||||
credentials,
|
||||
claudeCompatibleProvider,
|
||||
mcpServers,
|
||||
sdkOptions,
|
||||
} = options;
|
||||
const { content: promptContent } = await buildPromptWithImages(
|
||||
options.prompt,
|
||||
options.imagePaths,
|
||||
workDir,
|
||||
false
|
||||
);
|
||||
const executeOptions: ExecuteOptions = {
|
||||
prompt: promptContent,
|
||||
model: effectiveBareModel,
|
||||
maxTurns: sdkOptions?.maxTurns,
|
||||
cwd: workDir,
|
||||
allowedTools: sdkOptions?.allowedTools as string[] | undefined,
|
||||
abortController,
|
||||
systemPrompt: sdkOptions?.systemPrompt,
|
||||
settingSources: sdkOptions?.settingSources,
|
||||
mcpServers:
|
||||
mcpServers && Object.keys(mcpServers).length > 0
|
||||
? (mcpServers as Record<string, { command: string }>)
|
||||
: undefined,
|
||||
thinkingLevel: options.thinkingLevel,
|
||||
credentials,
|
||||
claudeCompatibleProvider,
|
||||
};
|
||||
const featureDirForOutput = getFeatureDir(projectPath, featureId);
|
||||
const outputPath = path.join(featureDirForOutput, 'agent-output.md');
|
||||
const rawOutputPath = path.join(featureDirForOutput, 'raw-output.jsonl');
|
||||
const enableRawOutput =
|
||||
process.env.AUTOMAKER_DEBUG_RAW_OUTPUT === 'true' ||
|
||||
process.env.AUTOMAKER_DEBUG_RAW_OUTPUT === '1';
|
||||
let responseText = previousContent
|
||||
? `${previousContent}\n\n---\n\n## Follow-up Session\n\n`
|
||||
: '';
|
||||
let specDetected = specAlreadyDetected,
|
||||
tasksCompleted = 0,
|
||||
aborted = false;
|
||||
let writeTimeout: ReturnType<typeof setTimeout> | null = null,
|
||||
rawOutputLines: string[] = [],
|
||||
rawWriteTimeout: ReturnType<typeof setTimeout> | null = null;
|
||||
|
||||
const writeToFile = async (): Promise<void> => {
|
||||
try {
|
||||
await secureFs.mkdir(path.dirname(outputPath), { recursive: true });
|
||||
await secureFs.writeFile(outputPath, responseText);
|
||||
} catch (error) {
|
||||
logger.error(`Failed to write agent output for ${featureId}:`, error);
|
||||
}
|
||||
};
|
||||
const scheduleWrite = (): void => {
|
||||
if (writeTimeout) clearTimeout(writeTimeout);
|
||||
writeTimeout = setTimeout(() => writeToFile(), AgentExecutor.WRITE_DEBOUNCE_MS);
|
||||
};
|
||||
const appendRawEvent = (event: unknown): void => {
|
||||
if (!enableRawOutput) return;
|
||||
try {
|
||||
rawOutputLines.push(
|
||||
JSON.stringify({ timestamp: new Date().toISOString(), event }, null, 4)
|
||||
);
|
||||
if (rawWriteTimeout) clearTimeout(rawWriteTimeout);
|
||||
rawWriteTimeout = setTimeout(async () => {
|
||||
try {
|
||||
await secureFs.mkdir(path.dirname(rawOutputPath), { recursive: true });
|
||||
await secureFs.appendFile(rawOutputPath, rawOutputLines.join('\n') + '\n');
|
||||
rawOutputLines = [];
|
||||
} catch {
|
||||
/* ignore */
|
||||
}
|
||||
}, AgentExecutor.WRITE_DEBOUNCE_MS);
|
||||
} catch {
|
||||
/* ignore */
|
||||
}
|
||||
};
|
||||
|
||||
const streamStartTime = Date.now();
|
||||
let receivedAnyStreamMessage = false;
|
||||
const streamHeartbeat = setInterval(() => {
|
||||
if (!receivedAnyStreamMessage)
|
||||
logger.info(
|
||||
`Waiting for first model response for feature ${featureId} (${Math.round((Date.now() - streamStartTime) / 1000)}s elapsed)...`
|
||||
);
|
||||
}, AgentExecutor.STREAM_HEARTBEAT_MS);
|
||||
const planningModeRequiresApproval =
|
||||
planningMode === 'spec' ||
|
||||
planningMode === 'full' ||
|
||||
(planningMode === 'lite' && requirePlanApproval);
|
||||
const requiresApproval = planningModeRequiresApproval && requirePlanApproval;
|
||||
|
||||
if (existingApprovedPlanContent && persistedTasks && persistedTasks.length > 0) {
|
||||
const result = await this.executeTasksLoop(
|
||||
options,
|
||||
persistedTasks,
|
||||
existingApprovedPlanContent,
|
||||
responseText,
|
||||
scheduleWrite,
|
||||
callbacks
|
||||
);
|
||||
clearInterval(streamHeartbeat);
|
||||
if (writeTimeout) clearTimeout(writeTimeout);
|
||||
if (rawWriteTimeout) clearTimeout(rawWriteTimeout);
|
||||
await writeToFile();
|
||||
return {
|
||||
responseText: result.responseText,
|
||||
specDetected: true,
|
||||
tasksCompleted: result.tasksCompleted,
|
||||
aborted: result.aborted,
|
||||
};
|
||||
}
|
||||
|
||||
logger.info(`Starting stream for feature ${featureId}...`);
|
||||
|
||||
try {
|
||||
const stream = provider.executeQuery(executeOptions);
|
||||
streamLoop: for await (const msg of stream) {
|
||||
receivedAnyStreamMessage = true;
|
||||
appendRawEvent(msg);
|
||||
if (abortController.signal.aborted) {
|
||||
aborted = true;
|
||||
throw new Error('Feature execution aborted');
|
||||
}
|
||||
if (msg.type === 'assistant' && msg.message?.content) {
|
||||
for (const block of msg.message.content) {
|
||||
if (block.type === 'text') {
|
||||
const newText = block.text || '';
|
||||
if (!newText) continue;
|
||||
if (responseText.length > 0 && newText.length > 0) {
|
||||
const endsWithSentence = /[.!?:]\s*$/.test(responseText),
|
||||
endsWithNewline = /\n\s*$/.test(responseText);
|
||||
                if (
                  !endsWithNewline &&
                  (endsWithSentence || /^[\n#\-*>]/.test(newText)) &&
                  !/[a-zA-Z0-9]/.test(responseText.slice(-1))
                )
                  responseText += '\n\n';
              }
              responseText += newText;
              if (
                block.text &&
                (block.text.includes('Invalid API key') ||
                  block.text.includes('authentication_failed') ||
                  block.text.includes('Fix external API key'))
              )
                throw new Error(
                  "Authentication failed: Invalid or expired API key. Please check your ANTHROPIC_API_KEY, or run 'claude login' to re-authenticate."
                );
              scheduleWrite();
              const hasExplicitMarker = responseText.includes('[SPEC_GENERATED]'),
                hasFallbackSpec = !hasExplicitMarker && detectSpecFallback(responseText);
              if (
                planningModeRequiresApproval &&
                !specDetected &&
                (hasExplicitMarker || hasFallbackSpec)
              ) {
                specDetected = true;
                const planContent = hasExplicitMarker
                  ? responseText.substring(0, responseText.indexOf('[SPEC_GENERATED]')).trim()
                  : responseText.trim();
                if (!hasExplicitMarker)
                  logger.info(`Using fallback spec detection for feature ${featureId}`);
                const result = await this.handleSpecGenerated(
                  options,
                  planContent,
                  responseText,
                  requiresApproval,
                  scheduleWrite,
                  callbacks
                );
                responseText = result.responseText;
                tasksCompleted = result.tasksCompleted;
                break streamLoop;
              }
              if (!specDetected)
                this.eventBus.emitAutoModeEvent('auto_mode_progress', {
                  featureId,
                  branchName,
                  content: block.text,
                });
            } else if (block.type === 'tool_use') {
              this.eventBus.emitAutoModeEvent('auto_mode_tool', {
                featureId,
                branchName,
                tool: block.name,
                input: block.input,
              });
              if (responseText.length > 0 && !responseText.endsWith('\n')) responseText += '\n';
              responseText += `\n🔧 Tool: ${block.name}\n`;
              if (block.input) responseText += `Input: ${JSON.stringify(block.input, null, 2)}\n`;
              scheduleWrite();
            }
          }
        } else if (msg.type === 'error') {
          throw new Error(msg.error || 'Unknown error');
        } else if (msg.type === 'result' && msg.subtype === 'success') scheduleWrite();
      }
      await writeToFile();
      if (enableRawOutput && rawOutputLines.length > 0) {
        try {
          await secureFs.mkdir(path.dirname(rawOutputPath), { recursive: true });
          await secureFs.appendFile(rawOutputPath, rawOutputLines.join('\n') + '\n');
        } catch {
          /* ignore */
        }
      }
    } finally {
      clearInterval(streamHeartbeat);
      if (writeTimeout) clearTimeout(writeTimeout);
      if (rawWriteTimeout) clearTimeout(rawWriteTimeout);
    }
    return { responseText, specDetected, tasksCompleted, aborted };
  }

  private async executeTasksLoop(
    options: AgentExecutionOptions,
    tasks: ParsedTask[],
    planContent: string,
    initialResponseText: string,
    scheduleWrite: () => void,
    callbacks: AgentExecutorCallbacks,
    userFeedback?: string
  ): Promise<{ responseText: string; tasksCompleted: number; aborted: boolean }> {
    const {
      featureId,
      projectPath,
      abortController,
      branchName = null,
      provider,
      sdkOptions,
    } = options;
    logger.info(`Starting task execution for feature ${featureId} with ${tasks.length} tasks`);
    const taskPrompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
    let responseText = initialResponseText,
      tasksCompleted = 0;

    for (let taskIndex = 0; taskIndex < tasks.length; taskIndex++) {
      const task = tasks[taskIndex];
      if (task.status === 'completed') {
        tasksCompleted++;
        continue;
      }
      if (abortController.signal.aborted) return { responseText, tasksCompleted, aborted: true };
      await this.featureStateManager.updateTaskStatus(
        projectPath,
        featureId,
        task.id,
        'in_progress'
      );
      this.eventBus.emitAutoModeEvent('auto_mode_task_started', {
        featureId,
        projectPath,
        branchName,
        taskId: task.id,
        taskDescription: task.description,
        taskIndex,
        tasksTotal: tasks.length,
      });
      await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
        currentTaskId: task.id,
      });
      const taskPrompt = callbacks.buildTaskPrompt(
        task,
        tasks,
        taskIndex,
        planContent,
        taskPrompts.taskExecution.taskPromptTemplate,
        userFeedback
      );
      const taskStream = provider.executeQuery(
        this.buildExecOpts(options, taskPrompt, Math.min(sdkOptions?.maxTurns || 100, 50))
      );
      let taskOutput = '',
        taskStartDetected = false,
        taskCompleteDetected = false;

      for await (const msg of taskStream) {
        if (msg.type === 'assistant' && msg.message?.content) {
          for (const b of msg.message.content) {
            if (b.type === 'text') {
              const text = b.text || '';
              taskOutput += text;
              responseText += text;
              this.eventBus.emitAutoModeEvent('auto_mode_progress', {
                featureId,
                branchName,
                content: text,
              });
              scheduleWrite();
              if (!taskStartDetected) {
                const sid = detectTaskStartMarker(taskOutput);
                if (sid) {
                  taskStartDetected = true;
                  await this.featureStateManager.updateTaskStatus(
                    projectPath,
                    featureId,
                    sid,
                    'in_progress'
                  );
                }
              }
              if (!taskCompleteDetected) {
                const cid = detectTaskCompleteMarker(taskOutput);
                if (cid) {
                  taskCompleteDetected = true;
                  await this.featureStateManager.updateTaskStatus(
                    projectPath,
                    featureId,
                    cid,
                    'completed'
                  );
                }
              }
              const pn = detectPhaseCompleteMarker(text);
              if (pn !== null)
                this.eventBus.emitAutoModeEvent('auto_mode_phase_complete', {
                  featureId,
                  projectPath,
                  branchName,
                  phaseNumber: pn,
                });
            } else if (b.type === 'tool_use')
              this.eventBus.emitAutoModeEvent('auto_mode_tool', {
                featureId,
                branchName,
                tool: b.name,
                input: b.input,
              });
          }
        } else if (msg.type === 'error')
          throw new Error(msg.error || `Error during task ${task.id}`);
        else if (msg.type === 'result' && msg.subtype === 'success') {
          taskOutput += msg.result || '';
          responseText += msg.result || '';
        }
      }
      if (!taskCompleteDetected)
        await this.featureStateManager.updateTaskStatus(
          projectPath,
          featureId,
          task.id,
          'completed'
        );
      tasksCompleted = taskIndex + 1;
      this.eventBus.emitAutoModeEvent('auto_mode_task_complete', {
        featureId,
        projectPath,
        branchName,
        taskId: task.id,
        tasksCompleted,
        tasksTotal: tasks.length,
      });
      await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
        tasksCompleted,
      });
      if (task.phase) {
        const next = tasks[taskIndex + 1];
        if (!next || next.phase !== task.phase) {
          const m = task.phase.match(/Phase\s*(\d+)/i);
          if (m)
            this.eventBus.emitAutoModeEvent('auto_mode_phase_complete', {
              featureId,
              projectPath,
              branchName,
              phaseNumber: parseInt(m[1], 10),
            });
        }
      }
    }
    const summary = extractSummary(responseText);
    if (summary) await callbacks.saveFeatureSummary(projectPath, featureId, summary);
    return { responseText, tasksCompleted, aborted: false };
  }

  private async handleSpecGenerated(
    options: AgentExecutionOptions,
    planContent: string,
    initialResponseText: string,
    requiresApproval: boolean,
    scheduleWrite: () => void,
    callbacks: AgentExecutorCallbacks
  ): Promise<{ responseText: string; tasksCompleted: number }> {
    const {
      workDir,
      featureId,
      projectPath,
      abortController,
      branchName = null,
      planningMode = 'skip',
      provider,
      effectiveBareModel,
      credentials,
      claudeCompatibleProvider,
      mcpServers,
      sdkOptions,
    } = options;
    let responseText = initialResponseText,
      parsedTasks = parseTasksFromSpec(planContent);
    logger.info(`Parsed ${parsedTasks.length} tasks from spec for feature ${featureId}`);
    await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
      status: 'generated',
      content: planContent,
      version: 1,
      generatedAt: new Date().toISOString(),
      reviewedByUser: false,
      tasks: parsedTasks,
      tasksTotal: parsedTasks.length,
      tasksCompleted: 0,
    });
    const planSummary = extractSummary(planContent);
    if (planSummary) await callbacks.updateFeatureSummary(projectPath, featureId, planSummary);
    let approvedPlanContent = planContent,
      userFeedback: string | undefined,
      currentPlanContent = planContent,
      planVersion = 1;

    if (requiresApproval) {
      let planApproved = false;
      while (!planApproved) {
        logger.info(
          `Spec v${planVersion} generated for feature ${featureId}, waiting for approval`
        );
        this.eventBus.emitAutoModeEvent('plan_approval_required', {
          featureId,
          projectPath,
          branchName,
          planContent: currentPlanContent,
          planningMode,
          planVersion,
        });
        const approvalResult = await callbacks.waitForApproval(featureId, projectPath);
        if (approvalResult.approved) {
          planApproved = true;
          userFeedback = approvalResult.feedback;
          approvedPlanContent = approvalResult.editedPlan || currentPlanContent;
          if (approvalResult.editedPlan) {
            // Re-parse tasks from edited plan to ensure we execute the updated tasks
            const editedTasks = parseTasksFromSpec(approvalResult.editedPlan);
            parsedTasks = editedTasks;
            await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
              content: approvalResult.editedPlan,
              tasks: editedTasks,
              tasksTotal: editedTasks.length,
              tasksCompleted: 0,
            });
          }
          this.eventBus.emitAutoModeEvent('plan_approved', {
            featureId,
            projectPath,
            branchName,
            hasEdits: !!approvalResult.editedPlan,
            planVersion,
          });
        } else {
          const hasFeedback = approvalResult.feedback?.trim().length,
            hasEdits = approvalResult.editedPlan?.trim().length;
          if (!hasFeedback && !hasEdits) throw new Error('Plan cancelled by user');
          planVersion++;
          this.eventBus.emitAutoModeEvent('plan_revision_requested', {
            featureId,
            projectPath,
            branchName,
            feedback: approvalResult.feedback,
            hasEdits: !!hasEdits,
            planVersion,
          });
          const revPrompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
          const taskEx =
            planningMode === 'full'
              ? '```tasks\n## Phase 1: Foundation\n- [ ] T001: [Description] | File: [path/to/file]\n```'
              : '```tasks\n- [ ] T001: [Description] | File: [path/to/file]\n```';
          let revPrompt = revPrompts.taskExecution.planRevisionTemplate
            .replace(/\{\{planVersion\}\}/g, String(planVersion - 1))
            .replace(
              /\{\{previousPlan\}\}/g,
              hasEdits ? approvalResult.editedPlan || currentPlanContent : currentPlanContent
            )
            .replace(
              /\{\{userFeedback\}\}/g,
              approvalResult.feedback || 'Please revise the plan based on the edits above.'
            )
            .replace(/\{\{planningMode\}\}/g, planningMode)
            .replace(/\{\{taskFormatExample\}\}/g, taskEx);
          await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
            status: 'generating',
            version: planVersion,
          });
          let revText = '';
          for await (const msg of provider.executeQuery(
            this.buildExecOpts(options, revPrompt, sdkOptions?.maxTurns || 100)
          )) {
            if (msg.type === 'assistant' && msg.message?.content)
              for (const b of msg.message.content)
                if (b.type === 'text') {
                  revText += b.text || '';
                  this.eventBus.emitAutoModeEvent('auto_mode_progress', {
                    featureId,
                    content: b.text,
                  });
                }
            if (msg.type === 'error') throw new Error(msg.error || 'Error during plan revision');
            if (msg.type === 'result' && msg.subtype === 'success') revText += msg.result || '';
          }
          const mi = revText.indexOf('[SPEC_GENERATED]');
          currentPlanContent = mi > 0 ? revText.substring(0, mi).trim() : revText.trim();
          const revisedTasks = parseTasksFromSpec(currentPlanContent);
          if (revisedTasks.length === 0 && (planningMode === 'spec' || planningMode === 'full'))
            this.eventBus.emitAutoModeEvent('plan_revision_warning', {
              featureId,
              projectPath,
              branchName,
              planningMode,
              warning: 'Revised plan missing tasks block',
            });
          await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
            status: 'generated',
            content: currentPlanContent,
            version: planVersion,
            tasks: revisedTasks,
            tasksTotal: revisedTasks.length,
            tasksCompleted: 0,
          });
          parsedTasks = revisedTasks;
          responseText += revText;
        }
      }
    } else {
      this.eventBus.emitAutoModeEvent('plan_auto_approved', {
        featureId,
        projectPath,
        branchName,
        planContent,
        planningMode,
      });
    }
    await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
      status: 'approved',
      approvedAt: new Date().toISOString(),
      reviewedByUser: requiresApproval,
    });
    let tasksCompleted = 0;
    if (parsedTasks.length > 0) {
      const r = await this.executeTasksLoop(
        options,
        parsedTasks,
        approvedPlanContent,
        responseText,
        scheduleWrite,
        callbacks,
        userFeedback
      );
      responseText = r.responseText;
      tasksCompleted = r.tasksCompleted;
    } else {
      const r = await this.executeSingleAgentContinuation(
        options,
        approvedPlanContent,
        userFeedback,
        responseText
      );
      responseText = r.responseText;
    }
    const summary = extractSummary(responseText);
    if (summary) await callbacks.saveFeatureSummary(projectPath, featureId, summary);
    return { responseText, tasksCompleted };
  }

  private buildExecOpts(o: AgentExecutionOptions, prompt: string, maxTurns?: number) {
    return {
      prompt,
      model: o.effectiveBareModel,
      maxTurns,
      cwd: o.workDir,
      allowedTools: o.sdkOptions?.allowedTools as string[] | undefined,
      abortController: o.abortController,
      mcpServers:
        o.mcpServers && Object.keys(o.mcpServers).length > 0
          ? (o.mcpServers as Record<string, { command: string }>)
          : undefined,
      credentials: o.credentials,
      claudeCompatibleProvider: o.claudeCompatibleProvider,
    };
  }

  private async executeSingleAgentContinuation(
    options: AgentExecutionOptions,
    planContent: string,
    userFeedback: string | undefined,
    initialResponseText: string
  ): Promise<{ responseText: string }> {
    const { featureId, branchName = null, provider } = options;
    logger.info(`No parsed tasks, using single-agent execution for feature ${featureId}`);
    const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
    const contPrompt = prompts.taskExecution.continuationAfterApprovalTemplate
      .replace(/\{\{userFeedback\}\}/g, userFeedback || '')
      .replace(/\{\{approvedPlan\}\}/g, planContent);
    let responseText = initialResponseText;
    for await (const msg of provider.executeQuery(
      this.buildExecOpts(options, contPrompt, options.sdkOptions?.maxTurns)
    )) {
      if (msg.type === 'assistant' && msg.message?.content)
        for (const b of msg.message.content) {
          if (b.type === 'text') {
            responseText += b.text || '';
            this.eventBus.emitAutoModeEvent('auto_mode_progress', {
              featureId,
              branchName,
              content: b.text,
            });
          } else if (b.type === 'tool_use')
            this.eventBus.emitAutoModeEvent('auto_mode_tool', {
              featureId,
              branchName,
              tool: b.name,
              input: b.input,
            });
        }
      else if (msg.type === 'error')
        throw new Error(msg.error || 'Unknown error during implementation');
      else if (msg.type === 'result' && msg.subtype === 'success') responseText += msg.result || '';
    }
    return { responseText };
  }
}
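
// Editor's note - illustrative sketch only, not part of the original diff.
// The spec-detection branch above keys off a literal [SPEC_GENERATED] marker and a
// ```tasks block in the agent's streamed output. A response that the detection and
// parseTasksFromSpec() logic would accept is assumed to look roughly like this
// (the plan heading and summary text are made up; the task-line format comes from
// the taskFormatExample template in handleSpecGenerated):
const exampleSpecResponse = [
  '## Plan',
  'Short summary of the approach...',
  '```tasks',
  '## Phase 1: Foundation',
  '- [ ] T001: [Description] | File: [path/to/file]',
  '- [ ] T002: [Description] | File: [path/to/file]',
  '```',
  '[SPEC_GENERATED]',
].join('\n');
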
@@ -1,414 +0,0 @@
/**
 * AutoLoopCoordinator - Manages the auto-mode loop lifecycle and failure tracking
 */

import type { Feature } from '@automaker/types';
import { createLogger, classifyError } from '@automaker/utils';
import type { TypedEventBus } from './typed-event-bus.js';
import type { ConcurrencyManager } from './concurrency-manager.js';
import type { SettingsService } from './settings-service.js';
import { DEFAULT_MAX_CONCURRENCY } from '@automaker/types';

const logger = createLogger('AutoLoopCoordinator');

const CONSECUTIVE_FAILURE_THRESHOLD = 3;
const FAILURE_WINDOW_MS = 60000;

export interface AutoModeConfig {
  maxConcurrency: number;
  useWorktrees: boolean;
  projectPath: string;
  branchName: string | null;
}

export interface ProjectAutoLoopState {
  abortController: AbortController;
  config: AutoModeConfig;
  isRunning: boolean;
  consecutiveFailures: { timestamp: number; error: string }[];
  pausedDueToFailures: boolean;
  hasEmittedIdleEvent: boolean;
  branchName: string | null;
}

export function getWorktreeAutoLoopKey(projectPath: string, branchName: string | null): string {
  return `${projectPath}::${(branchName === 'main' ? null : branchName) ?? '__main__'}`;
}
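
// Editor's illustration (not in the original file): how the key normalizes branches.
// 'main' and null both map to the reserved '__main__' segment, so the main worktree
// always resolves to a single loop key. Paths and branch names here are made up.
const mainKey = getWorktreeAutoLoopKey('/repo/app', null); // '/repo/app::__main__'
const alsoMainKey = getWorktreeAutoLoopKey('/repo/app', 'main'); // '/repo/app::__main__'
const featureKey = getWorktreeAutoLoopKey('/repo/app', 'feat/login'); // '/repo/app::feat/login'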

export type ExecuteFeatureFn = (
  projectPath: string,
  featureId: string,
  useWorktrees: boolean,
  isAutoMode: boolean
) => Promise<void>;
export type LoadPendingFeaturesFn = (
  projectPath: string,
  branchName: string | null
) => Promise<Feature[]>;
export type SaveExecutionStateFn = (
  projectPath: string,
  branchName: string | null,
  maxConcurrency: number
) => Promise<void>;
export type ClearExecutionStateFn = (
  projectPath: string,
  branchName: string | null
) => Promise<void>;
export type ResetStuckFeaturesFn = (projectPath: string) => Promise<void>;
export type IsFeatureFinishedFn = (feature: Feature) => boolean;

export class AutoLoopCoordinator {
  private autoLoopsByProject = new Map<string, ProjectAutoLoopState>();

  constructor(
    private eventBus: TypedEventBus,
    private concurrencyManager: ConcurrencyManager,
    private settingsService: SettingsService | null,
    private executeFeatureFn: ExecuteFeatureFn,
    private loadPendingFeaturesFn: LoadPendingFeaturesFn,
    private saveExecutionStateFn: SaveExecutionStateFn,
    private clearExecutionStateFn: ClearExecutionStateFn,
    private resetStuckFeaturesFn: ResetStuckFeaturesFn,
    private isFeatureFinishedFn: IsFeatureFinishedFn,
    private isFeatureRunningFn: (featureId: string) => boolean
  ) {}

  /**
   * Start the auto mode loop for a specific project/worktree (supports multiple concurrent projects and worktrees)
   * @param projectPath - The project to start auto mode for
   * @param branchName - The branch name for worktree scoping, null for main worktree
   * @param maxConcurrency - Maximum concurrent features (default: DEFAULT_MAX_CONCURRENCY)
   */
  async startAutoLoopForProject(
    projectPath: string,
    branchName: string | null = null,
    maxConcurrency?: number
  ): Promise<number> {
    const resolvedMaxConcurrency = await this.resolveMaxConcurrency(
      projectPath,
      branchName,
      maxConcurrency
    );

    // Use worktree-scoped key
    const worktreeKey = getWorktreeAutoLoopKey(projectPath, branchName);

    // Check if this project/worktree already has an active autoloop
    const existingState = this.autoLoopsByProject.get(worktreeKey);
    if (existingState?.isRunning) {
      const worktreeDesc = branchName ? `worktree ${branchName}` : 'main worktree';
      throw new Error(
        `Auto mode is already running for ${worktreeDesc} in project: ${projectPath}`
      );
    }

    // Create new project/worktree autoloop state
    const abortController = new AbortController();
    const config: AutoModeConfig = {
      maxConcurrency: resolvedMaxConcurrency,
      useWorktrees: true,
      projectPath,
      branchName,
    };

    const projectState: ProjectAutoLoopState = {
      abortController,
      config,
      isRunning: true,
      consecutiveFailures: [],
      pausedDueToFailures: false,
      hasEmittedIdleEvent: false,
      branchName,
    };

    this.autoLoopsByProject.set(worktreeKey, projectState);
    try {
      await this.resetStuckFeaturesFn(projectPath);
    } catch {
      /* ignore */
    }
    this.eventBus.emitAutoModeEvent('auto_mode_started', {
      message: `Auto mode started with max ${resolvedMaxConcurrency} concurrent features`,
      projectPath,
      branchName,
      maxConcurrency: resolvedMaxConcurrency,
    });
    await this.saveExecutionStateFn(projectPath, branchName, resolvedMaxConcurrency);
    this.runAutoLoopForProject(worktreeKey).catch((error) => {
      const errorInfo = classifyError(error);
      this.eventBus.emitAutoModeEvent('auto_mode_error', {
        error: errorInfo.message,
        errorType: errorInfo.type,
        projectPath,
        branchName,
      });
    });
    return resolvedMaxConcurrency;
  }

  private async runAutoLoopForProject(worktreeKey: string): Promise<void> {
    const projectState = this.autoLoopsByProject.get(worktreeKey);
    if (!projectState) return;
    const { projectPath, branchName } = projectState.config;
    let iterationCount = 0;

    while (projectState.isRunning && !projectState.abortController.signal.aborted) {
      iterationCount++;
      try {
        const runningCount = await this.getRunningCountForWorktree(projectPath, branchName);
        if (runningCount >= projectState.config.maxConcurrency) {
          await this.sleep(5000, projectState.abortController.signal);
          continue;
        }
        const pendingFeatures = await this.loadPendingFeaturesFn(projectPath, branchName);
        if (pendingFeatures.length === 0) {
          if (runningCount === 0 && !projectState.hasEmittedIdleEvent) {
            this.eventBus.emitAutoModeEvent('auto_mode_idle', {
              message: 'No pending features - auto mode idle',
              projectPath,
              branchName,
            });
            projectState.hasEmittedIdleEvent = true;
          }
          await this.sleep(10000, projectState.abortController.signal);
          continue;
        }
        const nextFeature = pendingFeatures.find(
          (f) => !this.isFeatureRunningFn(f.id) && !this.isFeatureFinishedFn(f)
        );
        if (nextFeature) {
          projectState.hasEmittedIdleEvent = false;
          this.executeFeatureFn(
            projectPath,
            nextFeature.id,
            projectState.config.useWorktrees,
            true
          ).catch((error) => {
            const errorInfo = classifyError(error);
            logger.error(`Auto-loop feature ${nextFeature.id} failed:`, errorInfo.message);
            if (this.trackFailureAndCheckPauseForProject(projectPath, branchName, errorInfo)) {
              this.signalShouldPauseForProject(projectPath, branchName, errorInfo);
            }
          });
        }
        await this.sleep(2000, projectState.abortController.signal);
      } catch {
        if (projectState.abortController.signal.aborted) break;
        await this.sleep(5000, projectState.abortController.signal);
      }
    }
    projectState.isRunning = false;
  }

  async stopAutoLoopForProject(
    projectPath: string,
    branchName: string | null = null
  ): Promise<number> {
    const worktreeKey = getWorktreeAutoLoopKey(projectPath, branchName);
    const projectState = this.autoLoopsByProject.get(worktreeKey);
    if (!projectState) return 0;
    const wasRunning = projectState.isRunning;
    projectState.isRunning = false;
    projectState.abortController.abort();
    await this.clearExecutionStateFn(projectPath, branchName);
    if (wasRunning)
      this.eventBus.emitAutoModeEvent('auto_mode_stopped', {
        message: 'Auto mode stopped',
        projectPath,
        branchName,
      });
    this.autoLoopsByProject.delete(worktreeKey);
    return await this.getRunningCountForWorktree(projectPath, branchName);
  }

  isAutoLoopRunningForProject(projectPath: string, branchName: string | null = null): boolean {
    const worktreeKey = getWorktreeAutoLoopKey(projectPath, branchName);
    const projectState = this.autoLoopsByProject.get(worktreeKey);
    return projectState?.isRunning ?? false;
  }

  /**
   * Get auto loop config for a specific project/worktree
   * @param projectPath - The project path
   * @param branchName - The branch name, or null for main worktree
   */
  getAutoLoopConfigForProject(
    projectPath: string,
    branchName: string | null = null
  ): AutoModeConfig | null {
    const worktreeKey = getWorktreeAutoLoopKey(projectPath, branchName);
    const projectState = this.autoLoopsByProject.get(worktreeKey);
    return projectState?.config ?? null;
  }

  /**
   * Get all active auto loop worktrees with their project paths and branch names
   */
  getActiveWorktrees(): Array<{ projectPath: string; branchName: string | null }> {
    const activeWorktrees: Array<{ projectPath: string; branchName: string | null }> = [];
    for (const [, state] of this.autoLoopsByProject) {
      if (state.isRunning) {
        activeWorktrees.push({
          projectPath: state.config.projectPath,
          branchName: state.branchName,
        });
      }
    }
    return activeWorktrees;
  }

  getActiveProjects(): string[] {
    const activeProjects = new Set<string>();
    for (const [, state] of this.autoLoopsByProject) {
      if (state.isRunning) activeProjects.add(state.config.projectPath);
    }
    return Array.from(activeProjects);
  }

  async getRunningCountForWorktree(
    projectPath: string,
    branchName: string | null
  ): Promise<number> {
    return this.concurrencyManager.getRunningCountForWorktree(projectPath, branchName);
  }

  trackFailureAndCheckPauseForProject(
    projectPath: string,
    branchNameOrError: string | null | { type: string; message: string },
    errorInfo?: { type: string; message: string }
  ): boolean {
    // Support both old (projectPath, errorInfo) and new (projectPath, branchName, errorInfo) signatures
    let branchName: string | null;
    let actualErrorInfo: { type: string; message: string };
    if (
      typeof branchNameOrError === 'object' &&
      branchNameOrError !== null &&
      'type' in branchNameOrError
    ) {
      // Old signature: (projectPath, errorInfo)
      branchName = null;
      actualErrorInfo = branchNameOrError;
    } else {
      // New signature: (projectPath, branchName, errorInfo)
      branchName = branchNameOrError;
      actualErrorInfo = errorInfo!;
    }
    const projectState = this.autoLoopsByProject.get(
      getWorktreeAutoLoopKey(projectPath, branchName)
    );
    if (!projectState) return false;
    const now = Date.now();
    projectState.consecutiveFailures.push({ timestamp: now, error: actualErrorInfo.message });
    projectState.consecutiveFailures = projectState.consecutiveFailures.filter(
      (f) => now - f.timestamp < FAILURE_WINDOW_MS
    );
    return (
      projectState.consecutiveFailures.length >= CONSECUTIVE_FAILURE_THRESHOLD ||
      actualErrorInfo.type === 'quota_exhausted' ||
      actualErrorInfo.type === 'rate_limit'
    );
  }
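
// Editor's sketch (not in the original file): the two call forms the overload above
// accepts. `coordinator` and the message strings are made-up examples.
coordinator.trackFailureAndCheckPauseForProject('/repo/app', {
  type: 'rate_limit',
  message: 'HTTP 429 from provider',
}); // old form - implicitly the main worktree
coordinator.trackFailureAndCheckPauseForProject('/repo/app', 'feat/login', {
  type: 'quota_exhausted',
  message: 'Usage limit reached',
}); // new form - scoped to a specific worktree branch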

  signalShouldPauseForProject(
    projectPath: string,
    branchNameOrError: string | null | { type: string; message: string },
    errorInfo?: { type: string; message: string }
  ): void {
    // Support both old (projectPath, errorInfo) and new (projectPath, branchName, errorInfo) signatures
    let branchName: string | null;
    let actualErrorInfo: { type: string; message: string };
    if (
      typeof branchNameOrError === 'object' &&
      branchNameOrError !== null &&
      'type' in branchNameOrError
    ) {
      branchName = null;
      actualErrorInfo = branchNameOrError;
    } else {
      branchName = branchNameOrError;
      actualErrorInfo = errorInfo!;
    }

    const projectState = this.autoLoopsByProject.get(
      getWorktreeAutoLoopKey(projectPath, branchName)
    );
    if (!projectState || projectState.pausedDueToFailures) return;
    projectState.pausedDueToFailures = true;
    const failureCount = projectState.consecutiveFailures.length;
    this.eventBus.emitAutoModeEvent('auto_mode_paused_failures', {
      message:
        failureCount >= CONSECUTIVE_FAILURE_THRESHOLD
          ? `Auto Mode paused: ${failureCount} consecutive failures detected.`
          : 'Auto Mode paused: Usage limit or API error detected.',
      errorType: actualErrorInfo.type,
      originalError: actualErrorInfo.message,
      failureCount,
      projectPath,
      branchName,
    });
    this.stopAutoLoopForProject(projectPath, branchName);
  }

  resetFailureTrackingForProject(projectPath: string, branchName: string | null = null): void {
    const projectState = this.autoLoopsByProject.get(
      getWorktreeAutoLoopKey(projectPath, branchName)
    );
    if (projectState) {
      projectState.consecutiveFailures = [];
      projectState.pausedDueToFailures = false;
    }
  }

  recordSuccessForProject(projectPath: string, branchName: string | null = null): void {
    const projectState = this.autoLoopsByProject.get(
      getWorktreeAutoLoopKey(projectPath, branchName)
    );
    if (projectState) projectState.consecutiveFailures = [];
  }

  async resolveMaxConcurrency(
    projectPath: string,
    branchName: string | null,
    provided?: number
  ): Promise<number> {
    if (typeof provided === 'number' && Number.isFinite(provided)) return provided;
    if (!this.settingsService) return DEFAULT_MAX_CONCURRENCY;
    try {
      const settings = await this.settingsService.getGlobalSettings();
      const globalMax =
        typeof settings.maxConcurrency === 'number'
          ? settings.maxConcurrency
          : DEFAULT_MAX_CONCURRENCY;
      const projectId = settings.projects?.find((p) => p.path === projectPath)?.id;
      const autoModeByWorktree = settings.autoModeByWorktree;
      if (projectId && autoModeByWorktree && typeof autoModeByWorktree === 'object') {
        const normalizedBranch =
          branchName === null || branchName === 'main' ? '__main__' : branchName;
        const worktreeId = `${projectId}::${normalizedBranch}`;
        if (
          worktreeId in autoModeByWorktree &&
          typeof autoModeByWorktree[worktreeId]?.maxConcurrency === 'number'
        ) {
          return autoModeByWorktree[worktreeId].maxConcurrency;
        }
      }
      return globalMax;
    } catch {
      return DEFAULT_MAX_CONCURRENCY;
    }
  }

  private sleep(ms: number, signal?: AbortSignal): Promise<void> {
    return new Promise((resolve, reject) => {
      if (signal?.aborted) {
        reject(new Error('Aborted'));
        return;
      }
      const timeout = setTimeout(resolve, ms);
      signal?.addEventListener('abort', () => {
        clearTimeout(timeout);
        reject(new Error('Aborted'));
      });
    });
  }
}
4764
apps/server/src/services/auto-mode-service.ts
Normal file
File diff suppressed because it is too large
@@ -1,225 +0,0 @@
/**
 * Compatibility Shim - Provides AutoModeService-like interface using the new architecture
 *
 * This allows existing routes to work without major changes during the transition.
 * Routes receive this shim which delegates to GlobalAutoModeService and facades.
 *
 * This is a TEMPORARY shim - routes should be updated to use the new interface directly.
 */

import type { Feature } from '@automaker/types';
import type { EventEmitter } from '../../lib/events.js';
import { GlobalAutoModeService } from './global-service.js';
import { AutoModeServiceFacade } from './facade.js';
import type { SettingsService } from '../settings-service.js';
import type { FeatureLoader } from '../feature-loader.js';
import type { FacadeOptions, AutoModeStatus, RunningAgentInfo } from './types.js';

/**
 * AutoModeServiceCompat wraps GlobalAutoModeService and facades to provide
 * the old AutoModeService interface that routes expect.
 */
export class AutoModeServiceCompat {
  private readonly globalService: GlobalAutoModeService;
  private readonly facadeOptions: FacadeOptions;

  constructor(
    events: EventEmitter,
    settingsService: SettingsService | null,
    featureLoader: FeatureLoader
  ) {
    this.globalService = new GlobalAutoModeService(events, settingsService, featureLoader);
    const sharedServices = this.globalService.getSharedServices();

    this.facadeOptions = {
      events,
      settingsService,
      featureLoader,
      sharedServices,
    };
  }

  /**
   * Get the global service for direct access
   */
  getGlobalService(): GlobalAutoModeService {
    return this.globalService;
  }

  /**
   * Create a facade for a specific project
   */
  createFacade(projectPath: string): AutoModeServiceFacade {
    return AutoModeServiceFacade.create(projectPath, this.facadeOptions);
  }
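
  // Editor's illustration (assumption, not part of the original file): how a route
  // module might consume the shim during the transition. `events`, `settingsService`
  // and `featureLoader` are assumed to come from server bootstrap, and the project
  // path is a placeholder.
  //
  //   const autoMode = new AutoModeServiceCompat(events, settingsService, featureLoader);
  //   await autoMode.startAutoLoopForProject('/path/to/project', null, 3);
  //   const projectStatus = autoMode.getStatusForProject('/path/to/project');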

  // ===========================================================================
  // GLOBAL OPERATIONS (delegated to GlobalAutoModeService)
  // ===========================================================================

  getStatus(): AutoModeStatus {
    return this.globalService.getStatus();
  }

  getActiveAutoLoopProjects(): string[] {
    return this.globalService.getActiveAutoLoopProjects();
  }

  getActiveAutoLoopWorktrees(): Array<{ projectPath: string; branchName: string | null }> {
    return this.globalService.getActiveAutoLoopWorktrees();
  }

  async getRunningAgents(): Promise<RunningAgentInfo[]> {
    return this.globalService.getRunningAgents();
  }

  async markAllRunningFeaturesInterrupted(reason?: string): Promise<void> {
    return this.globalService.markAllRunningFeaturesInterrupted(reason);
  }

  // ===========================================================================
  // PER-PROJECT OPERATIONS (delegated to facades)
  // ===========================================================================

  getStatusForProject(
    projectPath: string,
    branchName: string | null = null
  ): {
    isAutoLoopRunning: boolean;
    runningFeatures: string[];
    runningCount: number;
    maxConcurrency: number;
    branchName: string | null;
  } {
    const facade = this.createFacade(projectPath);
    return facade.getStatusForProject(branchName);
  }

  isAutoLoopRunningForProject(projectPath: string, branchName: string | null = null): boolean {
    const facade = this.createFacade(projectPath);
    return facade.isAutoLoopRunning(branchName);
  }

  async startAutoLoopForProject(
    projectPath: string,
    branchName: string | null = null,
    maxConcurrency?: number
  ): Promise<number> {
    const facade = this.createFacade(projectPath);
    return facade.startAutoLoop(branchName, maxConcurrency);
  }

  async stopAutoLoopForProject(
    projectPath: string,
    branchName: string | null = null
  ): Promise<number> {
    const facade = this.createFacade(projectPath);
    return facade.stopAutoLoop(branchName);
  }

  async executeFeature(
    projectPath: string,
    featureId: string,
    useWorktrees = false,
    isAutoMode = false,
    providedWorktreePath?: string,
    options?: { continuationPrompt?: string; _calledInternally?: boolean }
  ): Promise<void> {
    const facade = this.createFacade(projectPath);
    return facade.executeFeature(
      featureId,
      useWorktrees,
      isAutoMode,
      providedWorktreePath,
      options
    );
  }

  async stopFeature(featureId: string): Promise<boolean> {
    // Stop feature is tricky - we need to find which project the feature is running in
    // The concurrency manager tracks this
    const runningAgents = await this.getRunningAgents();
    const agent = runningAgents.find((a) => a.featureId === featureId);
    if (agent) {
      const facade = this.createFacade(agent.projectPath);
      return facade.stopFeature(featureId);
    }
    return false;
  }

  async resumeFeature(projectPath: string, featureId: string, useWorktrees = false): Promise<void> {
    const facade = this.createFacade(projectPath);
    return facade.resumeFeature(featureId, useWorktrees);
  }

  async followUpFeature(
    projectPath: string,
    featureId: string,
    prompt: string,
    imagePaths?: string[],
    useWorktrees = true
  ): Promise<void> {
    const facade = this.createFacade(projectPath);
    return facade.followUpFeature(featureId, prompt, imagePaths, useWorktrees);
  }

  async verifyFeature(projectPath: string, featureId: string): Promise<boolean> {
    const facade = this.createFacade(projectPath);
    return facade.verifyFeature(featureId);
  }

  async commitFeature(
    projectPath: string,
    featureId: string,
    providedWorktreePath?: string
  ): Promise<string | null> {
    const facade = this.createFacade(projectPath);
    return facade.commitFeature(featureId, providedWorktreePath);
  }

  async contextExists(projectPath: string, featureId: string): Promise<boolean> {
    const facade = this.createFacade(projectPath);
    return facade.contextExists(featureId);
  }

  async analyzeProject(projectPath: string): Promise<void> {
    const facade = this.createFacade(projectPath);
    return facade.analyzeProject();
  }

  async resolvePlanApproval(
    projectPath: string,
    featureId: string,
    approved: boolean,
    editedPlan?: string,
    feedback?: string
  ): Promise<{ success: boolean; error?: string }> {
    const facade = this.createFacade(projectPath);
    return facade.resolvePlanApproval(featureId, approved, editedPlan, feedback);
  }

  async resumeInterruptedFeatures(projectPath: string): Promise<void> {
    const facade = this.createFacade(projectPath);
    return facade.resumeInterruptedFeatures();
  }

  async checkWorktreeCapacity(
    projectPath: string,
    featureId: string
  ): Promise<{
    hasCapacity: boolean;
    currentAgents: number;
    maxAgents: number;
    branchName: string | null;
  }> {
    const facade = this.createFacade(projectPath);
    return facade.checkWorktreeCapacity(featureId);
  }

  async detectOrphanedFeatures(
    projectPath: string
  ): Promise<Array<{ feature: Feature; missingBranch: string }>> {
    const facade = this.createFacade(projectPath);
    return facade.detectOrphanedFeatures();
  }
}
File diff suppressed because it is too large
@@ -1,204 +0,0 @@
/**
 * GlobalAutoModeService - Global operations for auto-mode that span across all projects
 *
 * This service manages global state and operations that are not project-specific:
 * - Overall status (all running features across all projects)
 * - Active auto loop projects and worktrees
 * - Graceful shutdown (mark all features as interrupted)
 *
 * Per-project operations should use AutoModeServiceFacade instead.
 */

import path from 'path';
import type { Feature } from '@automaker/types';
import { createLogger } from '@automaker/utils';
import type { EventEmitter } from '../../lib/events.js';
import { TypedEventBus } from '../typed-event-bus.js';
import { ConcurrencyManager } from '../concurrency-manager.js';
import { WorktreeResolver } from '../worktree-resolver.js';
import { AutoLoopCoordinator } from '../auto-loop-coordinator.js';
import { FeatureStateManager } from '../feature-state-manager.js';
import { FeatureLoader } from '../feature-loader.js';
import type { SettingsService } from '../settings-service.js';
import type { SharedServices, AutoModeStatus, RunningAgentInfo } from './types.js';

const logger = createLogger('GlobalAutoModeService');

/**
 * GlobalAutoModeService provides global operations for auto-mode.
 *
 * Created once at server startup, shared across all facades.
 */
export class GlobalAutoModeService {
  private readonly eventBus: TypedEventBus;
  private readonly concurrencyManager: ConcurrencyManager;
  private readonly autoLoopCoordinator: AutoLoopCoordinator;
  private readonly worktreeResolver: WorktreeResolver;
  private readonly featureStateManager: FeatureStateManager;
  private readonly featureLoader: FeatureLoader;

  constructor(
    events: EventEmitter,
    settingsService: SettingsService | null,
    featureLoader: FeatureLoader = new FeatureLoader()
  ) {
    this.featureLoader = featureLoader;
    this.eventBus = new TypedEventBus(events);
    this.worktreeResolver = new WorktreeResolver();
    this.concurrencyManager = new ConcurrencyManager((p) =>
      this.worktreeResolver.getCurrentBranch(p)
    );
    this.featureStateManager = new FeatureStateManager(events, featureLoader);

    // Create AutoLoopCoordinator with callbacks
    // IMPORTANT: This coordinator is for MONITORING ONLY (getActiveProjects, getActiveWorktrees).
    // Facades MUST create their own AutoLoopCoordinator for actual execution.
    // The executeFeatureFn here is a safety guard - it should never be called.
    this.autoLoopCoordinator = new AutoLoopCoordinator(
      this.eventBus,
      this.concurrencyManager,
      settingsService,
      // executeFeatureFn - throws because facades must use their own coordinator for execution
      async () => {
        throw new Error(
          'executeFeatureFn not available in GlobalAutoModeService. ' +
            'Facades must create their own AutoLoopCoordinator for execution.'
        );
      },
      // loadPendingFeaturesFn
      (pPath, branchName) =>
        featureLoader
          .getAll(pPath)
          .then((features) =>
            features.filter(
              (f) =>
                (f.status === 'backlog' || f.status === 'ready') &&
                (branchName === null
                  ? !f.branchName || f.branchName === 'main'
                  : f.branchName === branchName)
            )
          ),
      // saveExecutionStateFn - placeholder
      async () => {},
      // clearExecutionStateFn - placeholder
      async () => {},
      // resetStuckFeaturesFn
      (pPath) => this.featureStateManager.resetStuckFeatures(pPath),
      // isFeatureFinishedFn
      (feature) =>
        feature.status === 'completed' ||
        feature.status === 'verified' ||
        feature.status === 'waiting_approval',
      // isFeatureRunningFn
      (featureId) => this.concurrencyManager.isRunning(featureId)
    );
  }

  /**
   * Get the shared services for use by facades.
   * This allows facades to share state with the global service.
   */
  getSharedServices(): SharedServices {
    return {
      eventBus: this.eventBus,
      concurrencyManager: this.concurrencyManager,
      autoLoopCoordinator: this.autoLoopCoordinator,
      worktreeResolver: this.worktreeResolver,
    };
  }
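
  // Editor's sketch (assumption, not from the original diff): a facade created for one
  // project reuses the global service's shared state, so running-feature counts and
  // auto-loop bookkeeping stay consistent across facades. `events` and `settingsService`
  // are assumed to come from server bootstrap.
  //
  //   const globalService = new GlobalAutoModeService(events, settingsService);
  //   const facade = AutoModeServiceFacade.create('/path/to/project', {
  //     events,
  //     settingsService,
  //     sharedServices: globalService.getSharedServices(),
  //   });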

  // ===========================================================================
  // GLOBAL STATUS (3 methods)
  // ===========================================================================

  /**
   * Get global status (all projects combined)
   */
  getStatus(): AutoModeStatus {
    const allRunning = this.concurrencyManager.getAllRunning();
    return {
      isRunning: allRunning.length > 0,
      runningFeatures: allRunning.map((rf) => rf.featureId),
      runningCount: allRunning.length,
    };
  }

  /**
   * Get all active auto loop projects (unique project paths)
   */
  getActiveAutoLoopProjects(): string[] {
    return this.autoLoopCoordinator.getActiveProjects();
  }

  /**
   * Get all active auto loop worktrees
   */
  getActiveAutoLoopWorktrees(): Array<{ projectPath: string; branchName: string | null }> {
    return this.autoLoopCoordinator.getActiveWorktrees();
  }

  // ===========================================================================
  // RUNNING AGENTS (1 method)
  // ===========================================================================

  /**
   * Get detailed info about all running agents
   */
  async getRunningAgents(): Promise<RunningAgentInfo[]> {
    const agents = await Promise.all(
      this.concurrencyManager.getAllRunning().map(async (rf) => {
        let title: string | undefined;
        let description: string | undefined;
        let branchName: string | undefined;

        try {
          const feature = await this.featureLoader.get(rf.projectPath, rf.featureId);
          if (feature) {
            title = feature.title;
            description = feature.description;
            branchName = feature.branchName;
          }
        } catch {
          // Silently ignore
        }

        return {
          featureId: rf.featureId,
          projectPath: rf.projectPath,
          projectName: path.basename(rf.projectPath),
          isAutoMode: rf.isAutoMode,
          model: rf.model,
          provider: rf.provider,
          title,
          description,
          branchName,
        };
      })
    );
    return agents;
  }

  // ===========================================================================
  // LIFECYCLE (1 method)
  // ===========================================================================

  /**
   * Mark all running features as interrupted.
   * Called during graceful shutdown.
   *
   * @param reason - Optional reason for the interruption
   */
  async markAllRunningFeaturesInterrupted(reason?: string): Promise<void> {
    const allRunning = this.concurrencyManager.getAllRunning();

    for (const rf of allRunning) {
      await this.featureStateManager.markFeatureInterrupted(rf.projectPath, rf.featureId, reason);
    }

    if (allRunning.length > 0) {
      logger.info(
        `Marked ${allRunning.length} running feature(s) as interrupted: ${reason || 'no reason provided'}`
      );
    }
  }
}
@@ -1,76 +0,0 @@
/**
 * Auto Mode Service Module
 *
 * Entry point for auto-mode functionality. Exports:
 * - GlobalAutoModeService: Global operations that span all projects
 * - AutoModeServiceFacade: Per-project facade for auto-mode operations
 * - createAutoModeFacade: Convenience factory function
 * - Types for route consumption
 */

// Main exports
export { GlobalAutoModeService } from './global-service.js';
export { AutoModeServiceFacade } from './facade.js';
export { AutoModeServiceCompat } from './compat.js';

// Convenience factory function
import { AutoModeServiceFacade } from './facade.js';
import type { FacadeOptions } from './types.js';

/**
 * Create an AutoModeServiceFacade instance for a specific project.
 *
 * This is a convenience wrapper around AutoModeServiceFacade.create().
 *
 * @param projectPath - The project path this facade operates on
 * @param options - Configuration options including events, settingsService, featureLoader
 * @returns A new AutoModeServiceFacade instance
 *
 * @example
 * ```typescript
 * import { createAutoModeFacade } from './services/auto-mode';
 *
 * const facade = createAutoModeFacade('/path/to/project', {
 *   events: eventEmitter,
 *   settingsService,
 * });
 *
 * // Start auto mode
 * await facade.startAutoLoop(null, 3);
 *
 * // Check status
 * const status = facade.getStatusForProject();
 * ```
 */
export function createAutoModeFacade(
  projectPath: string,
  options: FacadeOptions
): AutoModeServiceFacade {
  return AutoModeServiceFacade.create(projectPath, options);
}

// Type exports from types.ts
export type {
  FacadeOptions,
  SharedServices,
  AutoModeStatus,
  ProjectAutoModeStatus,
  WorktreeCapacityInfo,
  RunningAgentInfo,
  OrphanedFeatureInfo,
  GlobalAutoModeOperations,
} from './types.js';

// Re-export types from extracted services for route convenience
export type {
  AutoModeConfig,
  ProjectAutoLoopState,
  RunningFeature,
  AcquireParams,
  WorktreeInfo,
  PipelineContext,
  PipelineStatusInfo,
  PlanApprovalResult,
  ResolveApprovalResult,
  ExecutionState,
} from './types.js';
@@ -1,128 +0,0 @@
/**
 * Facade Types - Type definitions for AutoModeServiceFacade
 *
 * Contains:
 * - FacadeOptions interface for factory configuration
 * - Re-exports of types from extracted services that routes might need
 * - Additional types for facade method signatures
 */

import type { EventEmitter } from '../../lib/events.js';
import type { Feature, ModelProvider } from '@automaker/types';
import type { SettingsService } from '../settings-service.js';
import type { FeatureLoader } from '../feature-loader.js';
import type { ConcurrencyManager } from '../concurrency-manager.js';
import type { AutoLoopCoordinator } from '../auto-loop-coordinator.js';
import type { WorktreeResolver } from '../worktree-resolver.js';
import type { TypedEventBus } from '../typed-event-bus.js';

// Re-export types from extracted services for route consumption
export type { AutoModeConfig, ProjectAutoLoopState } from '../auto-loop-coordinator.js';

export type { RunningFeature, AcquireParams } from '../concurrency-manager.js';

export type { WorktreeInfo } from '../worktree-resolver.js';

export type { PipelineContext, PipelineStatusInfo } from '../pipeline-orchestrator.js';

export type { PlanApprovalResult, ResolveApprovalResult } from '../plan-approval-service.js';

export type { ExecutionState } from '../recovery-service.js';

/**
 * Shared services that can be passed to facades to enable state sharing
 */
export interface SharedServices {
  /** TypedEventBus for typed event emission */
  eventBus: TypedEventBus;
  /** ConcurrencyManager for tracking running features across all projects */
  concurrencyManager: ConcurrencyManager;
  /** AutoLoopCoordinator for managing auto loop state across all projects */
  autoLoopCoordinator: AutoLoopCoordinator;
  /** WorktreeResolver for git worktree operations */
  worktreeResolver: WorktreeResolver;
}

/**
 * Options for creating an AutoModeServiceFacade instance
 */
export interface FacadeOptions {
  /** EventEmitter for broadcasting events to clients */
  events: EventEmitter;
  /** SettingsService for reading project/global settings (optional) */
  settingsService?: SettingsService | null;
  /** FeatureLoader for loading feature data (optional, defaults to new FeatureLoader()) */
  featureLoader?: FeatureLoader;
  /** Shared services for state sharing across facades (optional) */
  sharedServices?: SharedServices;
}

/**
 * Status returned by getStatus()
 */
export interface AutoModeStatus {
  isRunning: boolean;
  runningFeatures: string[];
  runningCount: number;
}

/**
 * Status returned by getStatusForProject()
 */
export interface ProjectAutoModeStatus {
  isAutoLoopRunning: boolean;
  runningFeatures: string[];
  runningCount: number;
  maxConcurrency: number;
  branchName: string | null;
}

/**
 * Capacity info returned by checkWorktreeCapacity()
 */
export interface WorktreeCapacityInfo {
  hasCapacity: boolean;
  currentAgents: number;
  maxAgents: number;
  branchName: string | null;
}

/**
 * Running agent info returned by getRunningAgents()
 */
export interface RunningAgentInfo {
  featureId: string;
  projectPath: string;
  projectName: string;
  isAutoMode: boolean;
  model?: string;
  provider?: ModelProvider;
  title?: string;
  description?: string;
  branchName?: string;
}

/**
 * Orphaned feature info returned by detectOrphanedFeatures()
 */
export interface OrphanedFeatureInfo {
  feature: Feature;
  missingBranch: string;
}

/**
 * Interface describing global auto-mode operations (not project-specific).
 * Used by routes that need global state access.
 */
export interface GlobalAutoModeOperations {
  /** Get global status (all projects combined) */
  getStatus(): AutoModeStatus;
  /** Get all active auto loop projects (unique project paths) */
  getActiveAutoLoopProjects(): string[];
  /** Get all active auto loop worktrees */
  getActiveAutoLoopWorktrees(): Array<{ projectPath: string; branchName: string | null }>;
  /** Get detailed info about all running agents */
  getRunningAgents(): Promise<RunningAgentInfo[]>;
  /** Mark all running features as interrupted (for graceful shutdown) */
  markAllRunningFeaturesInterrupted(reason?: string): Promise<void>;
}
@@ -1,226 +0,0 @@
/**
 * ConcurrencyManager - Manages running feature slots with lease-based reference counting
 *
 * Extracted from AutoModeService to provide a standalone service for tracking
 * running feature execution with proper lease counting to support nested calls
 * (e.g., resumeFeature -> executeFeature).
 *
 * Key behaviors:
 * - acquire() with existing entry + allowReuse: increment leaseCount, return existing
 * - acquire() with existing entry + no allowReuse: throw Error('already running')
 * - release() decrements leaseCount, only deletes at 0
 * - release() with force:true bypasses leaseCount check
 */

import type { ModelProvider } from '@automaker/types';

/**
 * Function type for getting the current branch of a project.
 * Injected to allow for testing and decoupling from git operations.
 */
export type GetCurrentBranchFn = (projectPath: string) => Promise<string | null>;

/**
 * Represents a running feature execution with all tracking metadata
 */
export interface RunningFeature {
  featureId: string;
  projectPath: string;
  worktreePath: string | null;
  branchName: string | null;
  abortController: AbortController;
  isAutoMode: boolean;
  startTime: number;
  leaseCount: number;
  model?: string;
  provider?: ModelProvider;
}

/**
 * Parameters for acquiring a running feature slot
 */
export interface AcquireParams {
  featureId: string;
  projectPath: string;
  isAutoMode: boolean;
  allowReuse?: boolean;
  abortController?: AbortController;
}

/**
 * ConcurrencyManager manages the running features Map with lease-based reference counting.
 *
 * This supports nested execution patterns where a feature may be acquired multiple times
 * (e.g., during resume operations) and should only be released when all references are done.
 */
export class ConcurrencyManager {
  private runningFeatures = new Map<string, RunningFeature>();
  private getCurrentBranch: GetCurrentBranchFn;

  /**
   * @param getCurrentBranch - Function to get the current branch for a project.
   *   If not provided, defaults to returning 'main'.
   */
  constructor(getCurrentBranch?: GetCurrentBranchFn) {
    this.getCurrentBranch = getCurrentBranch ?? (() => Promise.resolve('main'));
  }

  /**
   * Acquire a slot in the runningFeatures map for a feature.
   * Implements reference counting via leaseCount to support nested calls
   * (e.g., resumeFeature -> executeFeature).
   *
   * @param params.featureId - ID of the feature to track
   * @param params.projectPath - Path to the project
   * @param params.isAutoMode - Whether this is an auto-mode execution
   * @param params.allowReuse - If true, allows incrementing leaseCount for already-running features
   * @param params.abortController - Optional abort controller to use
   * @returns The RunningFeature entry (existing or newly created)
   * @throws Error if feature is already running and allowReuse is false
   */
  acquire(params: AcquireParams): RunningFeature {
    const existing = this.runningFeatures.get(params.featureId);
    if (existing) {
      if (!params.allowReuse) {
        throw new Error('already running');
      }
      existing.leaseCount += 1;
      return existing;
    }

    const abortController = params.abortController ?? new AbortController();
    const entry: RunningFeature = {
      featureId: params.featureId,
      projectPath: params.projectPath,
      worktreePath: null,
      branchName: null,
      abortController,
      isAutoMode: params.isAutoMode,
      startTime: Date.now(),
      leaseCount: 1,
    };
    this.runningFeatures.set(params.featureId, entry);
    return entry;
  }

  /**
   * Release a slot in the runningFeatures map for a feature.
   * Decrements leaseCount and only removes the entry when it reaches zero,
   * unless force option is used.
   *
   * @param featureId - ID of the feature to release
   * @param options.force - If true, immediately removes the entry regardless of leaseCount
   */
  release(featureId: string, options?: { force?: boolean }): void {
    const entry = this.runningFeatures.get(featureId);
    if (!entry) {
      return;
    }

    if (options?.force) {
      this.runningFeatures.delete(featureId);
      return;
    }

    entry.leaseCount -= 1;
    if (entry.leaseCount <= 0) {
      this.runningFeatures.delete(featureId);
    }
  }

/**
|
||||
* Check if a feature is currently running
|
||||
*
|
||||
* @param featureId - ID of the feature to check
|
||||
* @returns true if the feature is in the runningFeatures map
|
||||
*/
|
||||
isRunning(featureId: string): boolean {
|
||||
return this.runningFeatures.has(featureId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the RunningFeature entry for a feature
|
||||
*
|
||||
* @param featureId - ID of the feature
|
||||
* @returns The RunningFeature entry or undefined if not running
|
||||
*/
|
||||
getRunningFeature(featureId: string): RunningFeature | undefined {
|
||||
return this.runningFeatures.get(featureId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get count of running features for a specific project
|
||||
*
|
||||
* @param projectPath - The project path to count features for
|
||||
* @returns Number of running features for the project
|
||||
*/
|
||||
getRunningCount(projectPath: string): number {
|
||||
let count = 0;
|
||||
for (const [, feature] of this.runningFeatures) {
|
||||
if (feature.projectPath === projectPath) {
|
||||
count++;
|
||||
}
|
||||
}
|
||||
return count;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get count of running features for a specific worktree
|
||||
*
|
||||
* @param projectPath - The project path
|
||||
* @param branchName - The branch name, or null for main worktree
|
||||
* (features without branchName or matching primary branch)
|
||||
* @returns Number of running features for the worktree
|
||||
*/
|
||||
async getRunningCountForWorktree(
|
||||
projectPath: string,
|
||||
branchName: string | null
|
||||
): Promise<number> {
|
||||
// Get the actual primary branch name for the project
|
||||
const primaryBranch = await this.getCurrentBranch(projectPath);
|
||||
|
||||
let count = 0;
|
||||
for (const [, feature] of this.runningFeatures) {
|
||||
// Filter by project path AND branchName to get accurate worktree-specific count
|
||||
const featureBranch = feature.branchName ?? null;
|
||||
if (branchName === null) {
|
||||
// Main worktree: match features with branchName === null OR branchName matching primary branch
|
||||
const isPrimaryBranch =
|
||||
featureBranch === null || (primaryBranch && featureBranch === primaryBranch);
|
||||
if (feature.projectPath === projectPath && isPrimaryBranch) {
|
||||
count++;
|
||||
}
|
||||
} else {
|
||||
// Feature worktree: exact match
|
||||
if (feature.projectPath === projectPath && featureBranch === branchName) {
|
||||
count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
return count;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all currently running features
|
||||
*
|
||||
* @returns Array of all RunningFeature entries
|
||||
*/
|
||||
getAllRunning(): RunningFeature[] {
|
||||
return Array.from(this.runningFeatures.values());
|
||||
}
|
||||
|
||||
/**
|
||||
* Update properties of a running feature
|
||||
*
|
||||
* @param featureId - ID of the feature to update
|
||||
* @param updates - Partial RunningFeature properties to update
|
||||
*/
|
||||
updateRunningFeature(featureId: string, updates: Partial<RunningFeature>): void {
|
||||
const entry = this.runningFeatures.get(featureId);
|
||||
if (!entry) {
|
||||
return;
|
||||
}
|
||||
|
||||
Object.assign(entry, updates);
|
||||
}
|
||||
}
|
||||
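/*
 * Usage sketch (illustrative only, not part of the diffed source). It assumes
 * the ConcurrencyManager exported above; the project path, feature ID, and the
 * branch-lookup stub are hypothetical.
 *
 *   import { ConcurrencyManager } from './concurrency-manager.js';
 *
 *   const manager = new ConcurrencyManager(async (_projectPath) => {
 *     // In production this would call into @automaker/git-utils; here we
 *     // simply pretend every project sits on "main".
 *     return 'main';
 *   });
 *
 *   // First acquire creates the entry with leaseCount = 1.
 *   const entry = manager.acquire({
 *     featureId: 'feature-123',
 *     projectPath: '/repos/demo',
 *     isAutoMode: true,
 *   });
 *
 *   // A nested call (e.g. resumeFeature -> executeFeature) must pass
 *   // allowReuse, otherwise acquire() throws Error('already running').
 *   manager.acquire({
 *     featureId: 'feature-123',
 *     projectPath: '/repos/demo',
 *     isAutoMode: true,
 *     allowReuse: true,
 *   }); // leaseCount is now 2
 *
 *   manager.release('feature-123'); // leaseCount -> 1, entry kept
 *   manager.release('feature-123'); // leaseCount -> 0, entry removed
 *   manager.isRunning('feature-123'); // false
 */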
@@ -169,10 +169,9 @@ export class EventHookService {
|
||||
}
|
||||
|
||||
// Build context for variable substitution
|
||||
// Use loaded featureName (from feature.title) or fall back to payload.featureName
|
||||
const context: HookContext = {
|
||||
featureId: payload.featureId,
|
||||
featureName: featureName || payload.featureName,
|
||||
featureName: payload.featureName,
|
||||
projectPath: payload.projectPath,
|
||||
projectName: payload.projectPath ? this.extractProjectName(payload.projectPath) : undefined,
|
||||
error: payload.error || payload.message,
|
||||
|
||||
@@ -1,373 +0,0 @@
|
||||
/**
|
||||
* ExecutionService - Feature execution lifecycle coordination
|
||||
*/
|
||||
|
||||
import path from 'path';
|
||||
import type { Feature } from '@automaker/types';
|
||||
import { createLogger, classifyError, loadContextFiles, recordMemoryUsage } from '@automaker/utils';
|
||||
import { resolveModelString, DEFAULT_MODELS } from '@automaker/model-resolver';
|
||||
import { getFeatureDir } from '@automaker/platform';
|
||||
import { ProviderFactory } from '../providers/provider-factory.js';
|
||||
import * as secureFs from '../lib/secure-fs.js';
|
||||
import {
|
||||
getPromptCustomization,
|
||||
getAutoLoadClaudeMdSetting,
|
||||
filterClaudeMdFromContext,
|
||||
} from '../lib/settings-helpers.js';
|
||||
import { validateWorkingDirectory } from '../lib/sdk-options.js';
|
||||
import { extractSummary } from './spec-parser.js';
|
||||
import type { TypedEventBus } from './typed-event-bus.js';
|
||||
import type { ConcurrencyManager, RunningFeature } from './concurrency-manager.js';
|
||||
import type { WorktreeResolver } from './worktree-resolver.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
import type { PipelineContext } from './pipeline-orchestrator.js';
|
||||
import { pipelineService } from './pipeline-service.js';
|
||||
|
||||
// Re-export callback types from execution-types.ts for backward compatibility
|
||||
export type {
|
||||
RunAgentFn,
|
||||
ExecutePipelineFn,
|
||||
UpdateFeatureStatusFn,
|
||||
LoadFeatureFn,
|
||||
GetPlanningPromptPrefixFn,
|
||||
SaveFeatureSummaryFn,
|
||||
RecordLearningsFn,
|
||||
ContextExistsFn,
|
||||
ResumeFeatureFn,
|
||||
TrackFailureFn,
|
||||
SignalPauseFn,
|
||||
RecordSuccessFn,
|
||||
SaveExecutionStateFn,
|
||||
LoadContextFilesFn,
|
||||
} from './execution-types.js';
|
||||
|
||||
import type {
|
||||
RunAgentFn,
|
||||
ExecutePipelineFn,
|
||||
UpdateFeatureStatusFn,
|
||||
LoadFeatureFn,
|
||||
GetPlanningPromptPrefixFn,
|
||||
SaveFeatureSummaryFn,
|
||||
RecordLearningsFn,
|
||||
ContextExistsFn,
|
||||
ResumeFeatureFn,
|
||||
TrackFailureFn,
|
||||
SignalPauseFn,
|
||||
RecordSuccessFn,
|
||||
SaveExecutionStateFn,
|
||||
LoadContextFilesFn,
|
||||
} from './execution-types.js';
|
||||
|
||||
const logger = createLogger('ExecutionService');
|
||||
|
||||
export class ExecutionService {
|
||||
constructor(
|
||||
private eventBus: TypedEventBus,
|
||||
private concurrencyManager: ConcurrencyManager,
|
||||
private worktreeResolver: WorktreeResolver,
|
||||
private settingsService: SettingsService | null,
|
||||
// Callback dependencies for delegation
|
||||
private runAgentFn: RunAgentFn,
|
||||
private executePipelineFn: ExecutePipelineFn,
|
||||
private updateFeatureStatusFn: UpdateFeatureStatusFn,
|
||||
private loadFeatureFn: LoadFeatureFn,
|
||||
private getPlanningPromptPrefixFn: GetPlanningPromptPrefixFn,
|
||||
private saveFeatureSummaryFn: SaveFeatureSummaryFn,
|
||||
private recordLearningsFn: RecordLearningsFn,
|
||||
private contextExistsFn: ContextExistsFn,
|
||||
private resumeFeatureFn: ResumeFeatureFn,
|
||||
private trackFailureFn: TrackFailureFn,
|
||||
private signalPauseFn: SignalPauseFn,
|
||||
private recordSuccessFn: RecordSuccessFn,
|
||||
private saveExecutionStateFn: SaveExecutionStateFn,
|
||||
private loadContextFilesFn: LoadContextFilesFn
|
||||
) {}
|
||||
|
||||
private acquireRunningFeature(options: {
|
||||
featureId: string;
|
||||
projectPath: string;
|
||||
isAutoMode: boolean;
|
||||
allowReuse?: boolean;
|
||||
}): RunningFeature {
|
||||
return this.concurrencyManager.acquire(options);
|
||||
}
|
||||
|
||||
private releaseRunningFeature(featureId: string, options?: { force?: boolean }): void {
|
||||
this.concurrencyManager.release(featureId, options);
|
||||
}
|
||||
|
||||
private extractTitleFromDescription(description: string | undefined): string {
|
||||
if (!description?.trim()) return 'Untitled Feature';
|
||||
const firstLine = description.split('\n')[0].trim();
|
||||
return firstLine.length <= 60 ? firstLine : firstLine.substring(0, 57) + '...';
|
||||
}
|
||||
|
||||
buildFeaturePrompt(
|
||||
feature: Feature,
|
||||
taskExecutionPrompts: {
|
||||
implementationInstructions: string;
|
||||
playwrightVerificationInstructions: string;
|
||||
}
|
||||
): string {
|
||||
const title = this.extractTitleFromDescription(feature.description);
|
||||
|
||||
let prompt = `## Feature Implementation Task
|
||||
|
||||
**Feature ID:** ${feature.id}
|
||||
**Title:** ${title}
|
||||
**Description:** ${feature.description}
|
||||
`;
|
||||
|
||||
if (feature.spec) {
|
||||
prompt += `
|
||||
**Specification:**
|
||||
${feature.spec}
|
||||
`;
|
||||
}
|
||||
|
||||
if (feature.imagePaths && feature.imagePaths.length > 0) {
|
||||
const imagesList = feature.imagePaths
|
||||
.map((img, idx) => {
|
||||
const imgPath = typeof img === 'string' ? img : img.path;
|
||||
const filename =
|
||||
typeof img === 'string'
|
||||
? imgPath.split('/').pop()
|
||||
: img.filename || imgPath.split('/').pop();
|
||||
const mimeType = typeof img === 'string' ? 'image/*' : img.mimeType || 'image/*';
|
||||
return ` ${idx + 1}. ${filename} (${mimeType})\n Path: ${imgPath}`;
|
||||
})
|
||||
.join('\n');
|
||||
prompt += `\n**Context Images Attached:**\n${feature.imagePaths.length} image(s) attached:\n${imagesList}\n`;
|
||||
}
|
||||
|
||||
prompt += feature.skipTests
|
||||
? `\n${taskExecutionPrompts.implementationInstructions}`
|
||||
: `\n${taskExecutionPrompts.implementationInstructions}\n\n${taskExecutionPrompts.playwrightVerificationInstructions}`;
|
||||
return prompt;
|
||||
}
|
||||
|
||||
async executeFeature(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees = false,
|
||||
isAutoMode = false,
|
||||
providedWorktreePath?: string,
|
||||
options?: { continuationPrompt?: string; _calledInternally?: boolean }
|
||||
): Promise<void> {
|
||||
const tempRunningFeature = this.acquireRunningFeature({
|
||||
featureId,
|
||||
projectPath,
|
||||
isAutoMode,
|
||||
allowReuse: options?._calledInternally,
|
||||
});
|
||||
const abortController = tempRunningFeature.abortController;
|
||||
if (isAutoMode) await this.saveExecutionStateFn(projectPath);
|
||||
let feature: Feature | null = null;
|
||||
|
||||
try {
|
||||
validateWorkingDirectory(projectPath);
|
||||
feature = await this.loadFeatureFn(projectPath, featureId);
|
||||
if (!feature) throw new Error(`Feature ${featureId} not found`);
|
||||
|
||||
if (!options?.continuationPrompt) {
|
||||
if (feature.planSpec?.status === 'approved') {
|
||||
const prompts = await getPromptCustomization(this.settingsService, '[ExecutionService]');
|
||||
let continuationPrompt = prompts.taskExecution.continuationAfterApprovalTemplate;
|
||||
continuationPrompt = continuationPrompt
|
||||
.replace(/\{\{userFeedback\}\}/g, '')
|
||||
.replace(/\{\{approvedPlan\}\}/g, feature.planSpec.content || '');
|
||||
return await this.executeFeature(
|
||||
projectPath,
|
||||
featureId,
|
||||
useWorktrees,
|
||||
isAutoMode,
|
||||
providedWorktreePath,
|
||||
{ continuationPrompt, _calledInternally: true }
|
||||
);
|
||||
}
|
||||
if (await this.contextExistsFn(projectPath, featureId)) {
|
||||
return await this.resumeFeatureFn(projectPath, featureId, useWorktrees, true);
|
||||
}
|
||||
}
|
||||
|
||||
let worktreePath: string | null = null;
|
||||
const branchName = feature.branchName;
|
||||
if (useWorktrees && branchName) {
|
||||
worktreePath = await this.worktreeResolver.findWorktreeForBranch(projectPath, branchName);
|
||||
if (worktreePath) logger.info(`Using worktree for branch "${branchName}": ${worktreePath}`);
|
||||
}
|
||||
const workDir = worktreePath ? path.resolve(worktreePath) : path.resolve(projectPath);
|
||||
validateWorkingDirectory(workDir);
|
||||
tempRunningFeature.worktreePath = worktreePath;
|
||||
tempRunningFeature.branchName = branchName ?? null;
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, 'in_progress');
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_start', {
|
||||
featureId,
|
||||
projectPath,
|
||||
branchName: feature.branchName ?? null,
|
||||
feature: {
|
||||
id: featureId,
|
||||
title: feature.title || 'Loading...',
|
||||
description: feature.description || 'Feature is starting',
|
||||
},
|
||||
});
|
||||
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
this.settingsService,
|
||||
'[ExecutionService]'
|
||||
);
|
||||
const prompts = await getPromptCustomization(this.settingsService, '[ExecutionService]');
|
||||
let prompt: string;
|
||||
const contextResult = await this.loadContextFilesFn({
|
||||
projectPath,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
taskContext: {
|
||||
title: feature.title ?? '',
|
||||
description: feature.description ?? '',
|
||||
},
|
||||
});
|
||||
const combinedSystemPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
|
||||
|
||||
if (options?.continuationPrompt) {
|
||||
prompt = options.continuationPrompt;
|
||||
} else {
|
||||
prompt =
|
||||
(await this.getPlanningPromptPrefixFn(feature)) +
|
||||
this.buildFeaturePrompt(feature, prompts.taskExecution);
|
||||
if (feature.planningMode && feature.planningMode !== 'skip') {
|
||||
this.eventBus.emitAutoModeEvent('planning_started', {
|
||||
featureId: feature.id,
|
||||
mode: feature.planningMode,
|
||||
message: `Starting ${feature.planningMode} planning phase`,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
const imagePaths = feature.imagePaths?.map((img) =>
|
||||
typeof img === 'string' ? img : img.path
|
||||
);
|
||||
const model = resolveModelString(feature.model, DEFAULT_MODELS.claude);
|
||||
tempRunningFeature.model = model;
|
||||
tempRunningFeature.provider = ProviderFactory.getProviderNameForModel(model);
|
||||
|
||||
await this.runAgentFn(
|
||||
workDir,
|
||||
featureId,
|
||||
prompt,
|
||||
abortController,
|
||||
projectPath,
|
||||
imagePaths,
|
||||
model,
|
||||
{
|
||||
projectPath,
|
||||
planningMode: feature.planningMode,
|
||||
requirePlanApproval: feature.requirePlanApproval,
|
||||
systemPrompt: combinedSystemPrompt || undefined,
|
||||
autoLoadClaudeMd,
|
||||
thinkingLevel: feature.thinkingLevel,
|
||||
branchName: feature.branchName ?? null,
|
||||
}
|
||||
);
|
||||
|
||||
const pipelineConfig = await pipelineService.getPipelineConfig(projectPath);
|
||||
const excludedStepIds = new Set(feature.excludedPipelineSteps || []);
|
||||
const sortedSteps = [...(pipelineConfig?.steps || [])]
|
||||
.sort((a, b) => a.order - b.order)
|
||||
.filter((step) => !excludedStepIds.has(step.id));
|
||||
if (sortedSteps.length > 0) {
|
||||
await this.executePipelineFn({
|
||||
projectPath,
|
||||
featureId,
|
||||
feature,
|
||||
steps: sortedSteps,
|
||||
workDir,
|
||||
worktreePath,
|
||||
branchName: feature.branchName ?? null,
|
||||
abortController,
|
||||
autoLoadClaudeMd,
|
||||
testAttempts: 0,
|
||||
maxTestAttempts: 5,
|
||||
});
|
||||
}
|
||||
|
||||
const finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, finalStatus);
|
||||
this.recordSuccessFn();
|
||||
|
||||
try {
|
||||
const outputPath = path.join(getFeatureDir(projectPath, featureId), 'agent-output.md');
|
||||
let agentOutput = '';
|
||||
try {
|
||||
agentOutput = (await secureFs.readFile(outputPath, 'utf-8')) as string;
|
||||
} catch {
|
||||
/* agent-output.md may not exist yet */
|
||||
}
|
||||
if (agentOutput) {
|
||||
const summary = extractSummary(agentOutput);
|
||||
if (summary) await this.saveFeatureSummaryFn(projectPath, featureId, summary);
|
||||
}
|
||||
if (contextResult.memoryFiles.length > 0 && agentOutput) {
|
||||
await recordMemoryUsage(
|
||||
projectPath,
|
||||
contextResult.memoryFiles,
|
||||
agentOutput,
|
||||
true,
|
||||
secureFs as Parameters<typeof recordMemoryUsage>[4]
|
||||
);
|
||||
}
|
||||
await this.recordLearningsFn(projectPath, feature, agentOutput);
|
||||
} catch {
|
||||
/* learnings recording failed */
|
||||
}
|
||||
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
passes: true,
|
||||
message: `Feature completed in ${Math.round((Date.now() - tempRunningFeature.startTime) / 1000)}s${finalStatus === 'verified' ? ' - auto-verified' : ''}`,
|
||||
projectPath,
|
||||
model: tempRunningFeature.model,
|
||||
provider: tempRunningFeature.provider,
|
||||
});
|
||||
} catch (error) {
|
||||
const errorInfo = classifyError(error);
|
||||
if (errorInfo.isAbort) {
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature?.title,
|
||||
branchName: feature?.branchName ?? null,
|
||||
passes: false,
|
||||
message: 'Feature stopped by user',
|
||||
projectPath,
|
||||
});
|
||||
} else {
|
||||
logger.error(`Feature ${featureId} failed:`, error);
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, 'backlog');
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_error', {
|
||||
featureId,
|
||||
featureName: feature?.title,
|
||||
branchName: feature?.branchName ?? null,
|
||||
error: errorInfo.message,
|
||||
errorType: errorInfo.type,
|
||||
projectPath,
|
||||
});
|
||||
if (this.trackFailureFn({ type: errorInfo.type, message: errorInfo.message })) {
|
||||
this.signalPauseFn({ type: errorInfo.type, message: errorInfo.message });
|
||||
}
|
||||
}
|
||||
} finally {
|
||||
this.releaseRunningFeature(featureId);
|
||||
if (isAutoMode && projectPath) await this.saveExecutionStateFn(projectPath);
|
||||
}
|
||||
}
|
||||
|
||||
async stopFeature(featureId: string): Promise<boolean> {
|
||||
const running = this.concurrencyManager.getRunningFeature(featureId);
|
||||
if (!running) return false;
|
||||
running.abortController.abort();
|
||||
this.releaseRunningFeature(featureId, { force: true });
|
||||
return true;
|
||||
}
|
||||
}
|
||||
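/*
 * Invocation sketch (illustrative only; wiring of the many injected callback
 * dependencies is elided because it lives in the composing AutoModeService).
 * Assumes `executionService` is an already-constructed ExecutionService.
 *
 *   // Kick off a feature run with worktrees enabled, driven by auto mode.
 *   await executionService.executeFeature(
 *     '/repos/demo',   // projectPath
 *     'feature-123',   // featureId
 *     true,            // useWorktrees
 *     true             // isAutoMode
 *   );
 *
 *   // A user-initiated stop aborts the agent and force-releases the
 *   // concurrency slot regardless of the current leaseCount.
 *   const stopped = await executionService.stopFeature('feature-123');
 */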
@@ -1,212 +0,0 @@
|
||||
/**
|
||||
* Execution Types - Type definitions for ExecutionService and related services
|
||||
*
|
||||
* Contains callback types used by ExecutionService for dependency injection,
|
||||
* allowing the service to delegate to other services without circular dependencies.
|
||||
*/
|
||||
|
||||
import type { Feature, PlanningMode, ThinkingLevel } from '@automaker/types';
|
||||
import type { loadContextFiles } from '@automaker/utils';
|
||||
import type { PipelineContext } from './pipeline-orchestrator.js';
|
||||
|
||||
// =============================================================================
|
||||
// ExecutionService Callback Types
|
||||
// =============================================================================
|
||||
|
||||
/**
|
||||
* Function to run the agent with a prompt
|
||||
*/
|
||||
export type RunAgentFn = (
|
||||
workDir: string,
|
||||
featureId: string,
|
||||
prompt: string,
|
||||
abortController: AbortController,
|
||||
projectPath: string,
|
||||
imagePaths?: string[],
|
||||
model?: string,
|
||||
options?: {
|
||||
projectPath?: string;
|
||||
planningMode?: PlanningMode;
|
||||
requirePlanApproval?: boolean;
|
||||
previousContent?: string;
|
||||
systemPrompt?: string;
|
||||
autoLoadClaudeMd?: boolean;
|
||||
thinkingLevel?: ThinkingLevel;
|
||||
branchName?: string | null;
|
||||
}
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to execute pipeline steps
|
||||
*/
|
||||
export type ExecutePipelineFn = (context: PipelineContext) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to update feature status
|
||||
*/
|
||||
export type UpdateFeatureStatusFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
status: string
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to load a feature by ID
|
||||
*/
|
||||
export type LoadFeatureFn = (projectPath: string, featureId: string) => Promise<Feature | null>;
|
||||
|
||||
/**
|
||||
* Function to get the planning prompt prefix based on feature's planning mode
|
||||
*/
|
||||
export type GetPlanningPromptPrefixFn = (feature: Feature) => Promise<string>;
|
||||
|
||||
/**
|
||||
* Function to save a feature summary
|
||||
*/
|
||||
export type SaveFeatureSummaryFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
summary: string
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to record learnings from a completed feature
|
||||
*/
|
||||
export type RecordLearningsFn = (
|
||||
projectPath: string,
|
||||
feature: Feature,
|
||||
agentOutput: string
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to check if context exists for a feature
|
||||
*/
|
||||
export type ContextExistsFn = (projectPath: string, featureId: string) => Promise<boolean>;
|
||||
|
||||
/**
|
||||
* Function to resume a feature (continues from saved context or starts fresh)
|
||||
*/
|
||||
export type ResumeFeatureFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees: boolean,
|
||||
_calledInternally: boolean
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to track failure and check if pause threshold is reached
|
||||
* Returns true if auto-mode should pause
|
||||
*/
|
||||
export type TrackFailureFn = (errorInfo: { type: string; message: string }) => boolean;
|
||||
|
||||
/**
|
||||
* Function to signal that auto-mode should pause due to failures
|
||||
*/
|
||||
export type SignalPauseFn = (errorInfo: { type: string; message: string }) => void;
|
||||
|
||||
/**
|
||||
* Function to record a successful execution (resets failure tracking)
|
||||
*/
|
||||
export type RecordSuccessFn = () => void;
|
||||
|
||||
/**
|
||||
* Function to save execution state
|
||||
*/
|
||||
export type SaveExecutionStateFn = (projectPath: string) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Type alias for loadContextFiles function
|
||||
*/
|
||||
export type LoadContextFilesFn = typeof loadContextFiles;
|
||||
|
||||
// =============================================================================
|
||||
// PipelineOrchestrator Callback Types
|
||||
// =============================================================================
|
||||
|
||||
/**
|
||||
* Function to build feature prompt
|
||||
*/
|
||||
export type BuildFeaturePromptFn = (
|
||||
feature: Feature,
|
||||
prompts: { implementationInstructions: string; playwrightVerificationInstructions: string }
|
||||
) => string;
|
||||
|
||||
/**
|
||||
* Function to execute a feature
|
||||
*/
|
||||
export type ExecuteFeatureFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees: boolean,
|
||||
isAutoMode: boolean,
|
||||
providedWorktreePath?: string,
|
||||
options?: { continuationPrompt?: string; _calledInternally?: boolean }
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to run agent (for PipelineOrchestrator)
|
||||
*/
|
||||
export type PipelineRunAgentFn = (
|
||||
workDir: string,
|
||||
featureId: string,
|
||||
prompt: string,
|
||||
abortController: AbortController,
|
||||
projectPath: string,
|
||||
imagePaths?: string[],
|
||||
model?: string,
|
||||
options?: Record<string, unknown>
|
||||
) => Promise<void>;
|
||||
|
||||
// =============================================================================
|
||||
// AutoLoopCoordinator Callback Types
|
||||
// =============================================================================
|
||||
|
||||
/**
|
||||
* Function to execute a feature in auto-loop
|
||||
*/
|
||||
export type AutoLoopExecuteFeatureFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees: boolean,
|
||||
isAutoMode: boolean
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to load pending features for a worktree
|
||||
*/
|
||||
export type LoadPendingFeaturesFn = (
|
||||
projectPath: string,
|
||||
branchName: string | null
|
||||
) => Promise<Feature[]>;
|
||||
|
||||
/**
|
||||
* Function to save execution state for auto-loop
|
||||
*/
|
||||
export type AutoLoopSaveExecutionStateFn = (
|
||||
projectPath: string,
|
||||
branchName: string | null,
|
||||
maxConcurrency: number
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to clear execution state
|
||||
*/
|
||||
export type ClearExecutionStateFn = (
|
||||
projectPath: string,
|
||||
branchName: string | null
|
||||
) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to reset stuck features
|
||||
*/
|
||||
export type ResetStuckFeaturesFn = (projectPath: string) => Promise<void>;
|
||||
|
||||
/**
|
||||
* Function to check if a feature is finished
|
||||
*/
|
||||
export type IsFeatureFinishedFn = (feature: Feature) => boolean;
|
||||
|
||||
/**
|
||||
* Function to check if a feature is running
|
||||
*/
|
||||
export type IsFeatureRunningFn = (featureId: string) => boolean;
|
||||
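/*
 * Sketch of how a caller might satisfy the failure-tracking callbacks above
 * (illustrative only; the in-memory counter and the threshold of three are
 * assumptions, not part of the original source):
 *
 *   let consecutiveFailures = 0;
 *
 *   const trackFailure: TrackFailureFn = (errorInfo) => {
 *     consecutiveFailures += 1;
 *     // Ask auto mode to pause after three consecutive failures.
 *     return consecutiveFailures >= 3;
 *   };
 *
 *   const recordSuccess: RecordSuccessFn = () => {
 *     consecutiveFailures = 0;
 *   };
 *
 *   const signalPause: SignalPauseFn = (errorInfo) => {
 *     console.warn(`Pausing auto mode: ${errorInfo.type} - ${errorInfo.message}`);
 *   };
 */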
@@ -1,446 +0,0 @@
|
||||
/**
|
||||
* FeatureStateManager - Manages feature status updates with proper persistence
|
||||
*
|
||||
* Extracted from AutoModeService to provide a standalone service for:
|
||||
* - Updating feature status with proper disk persistence
|
||||
* - Handling corrupted JSON with backup recovery
|
||||
* - Emitting events AFTER successful persistence (prevent stale data on refresh)
|
||||
* - Resetting stuck features after server restart
|
||||
*
|
||||
* Key behaviors:
|
||||
* - Persist BEFORE emit (Pitfall 2 from research)
|
||||
* - Use readJsonWithRecovery for all reads
|
||||
* - markInterrupted preserves pipeline_* statuses
|
||||
*/
|
||||
|
||||
import path from 'path';
|
||||
import type { Feature, ParsedTask, PlanSpec } from '@automaker/types';
|
||||
import {
|
||||
atomicWriteJson,
|
||||
readJsonWithRecovery,
|
||||
logRecoveryWarning,
|
||||
DEFAULT_BACKUP_COUNT,
|
||||
createLogger,
|
||||
} from '@automaker/utils';
|
||||
import { getFeatureDir, getFeaturesDir } from '@automaker/platform';
|
||||
import * as secureFs from '../lib/secure-fs.js';
|
||||
import type { EventEmitter } from '../lib/events.js';
|
||||
import { getNotificationService } from './notification-service.js';
|
||||
import { FeatureLoader } from './feature-loader.js';
|
||||
|
||||
const logger = createLogger('FeatureStateManager');
|
||||
|
||||
/**
|
||||
* FeatureStateManager handles feature status updates with persistence guarantees.
|
||||
*
|
||||
* This service is responsible for:
|
||||
* 1. Updating feature status and persisting to disk BEFORE emitting events
|
||||
* 2. Handling corrupted JSON with automatic backup recovery
|
||||
* 3. Resetting stuck features after server restarts
|
||||
* 4. Managing justFinishedAt timestamps for UI badges
|
||||
*/
|
||||
export class FeatureStateManager {
|
||||
private events: EventEmitter;
|
||||
private featureLoader: FeatureLoader;
|
||||
|
||||
constructor(events: EventEmitter, featureLoader: FeatureLoader) {
|
||||
this.events = events;
|
||||
this.featureLoader = featureLoader;
|
||||
}
|
||||
|
||||
/**
|
||||
* Load a feature from disk with recovery support
|
||||
*
|
||||
* @param projectPath - Path to the project
|
||||
* @param featureId - ID of the feature to load
|
||||
* @returns The feature data, or null if not found/recoverable
|
||||
*/
|
||||
async loadFeature(projectPath: string, featureId: string): Promise<Feature | null> {
|
||||
const featureDir = getFeatureDir(projectPath, featureId);
|
||||
const featurePath = path.join(featureDir, 'feature.json');
|
||||
|
||||
try {
|
||||
const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
|
||||
maxBackups: DEFAULT_BACKUP_COUNT,
|
||||
autoRestore: true,
|
||||
});
|
||||
logRecoveryWarning(result, `Feature ${featureId}`, logger);
|
||||
return result.data;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Update feature status with proper persistence and event ordering.
|
||||
*
|
||||
* IMPORTANT: Persists to disk BEFORE emitting events to prevent stale data
|
||||
* on client refresh (Pitfall 2 from research).
|
||||
*
|
||||
* @param projectPath - Path to the project
|
||||
* @param featureId - ID of the feature to update
|
||||
* @param status - New status value
|
||||
*/
|
||||
async updateFeatureStatus(projectPath: string, featureId: string, status: string): Promise<void> {
|
||||
const featureDir = getFeatureDir(projectPath, featureId);
|
||||
const featurePath = path.join(featureDir, 'feature.json');
|
||||
|
||||
try {
|
||||
// Use recovery-enabled read for corrupted file handling
|
||||
const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
|
||||
maxBackups: DEFAULT_BACKUP_COUNT,
|
||||
autoRestore: true,
|
||||
});
|
||||
|
||||
logRecoveryWarning(result, `Feature ${featureId}`, logger);
|
||||
|
||||
const feature = result.data;
|
||||
if (!feature) {
|
||||
logger.warn(`Feature ${featureId} not found or could not be recovered`);
|
||||
return;
|
||||
}
|
||||
|
||||
feature.status = status;
|
||||
feature.updatedAt = new Date().toISOString();
|
||||
|
||||
// Set justFinishedAt timestamp when moving to waiting_approval (agent just completed)
|
||||
// Badge will show for 2 minutes after this timestamp
|
||||
if (status === 'waiting_approval') {
|
||||
feature.justFinishedAt = new Date().toISOString();
|
||||
} else {
|
||||
// Clear the timestamp when moving to other statuses
|
||||
feature.justFinishedAt = undefined;
|
||||
}
|
||||
|
||||
// PERSIST BEFORE EMIT (Pitfall 2)
|
||||
await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
|
||||
|
||||
// Create notifications for important status changes
|
||||
const notificationService = getNotificationService();
|
||||
if (status === 'waiting_approval') {
|
||||
await notificationService.createNotification({
|
||||
type: 'feature_waiting_approval',
|
||||
title: 'Feature Ready for Review',
|
||||
message: `"${feature.name || featureId}" is ready for your review and approval.`,
|
||||
featureId,
|
||||
projectPath,
|
||||
});
|
||||
} else if (status === 'verified') {
|
||||
await notificationService.createNotification({
|
||||
type: 'feature_verified',
|
||||
title: 'Feature Verified',
|
||||
message: `"${feature.name || featureId}" has been verified and is complete.`,
|
||||
featureId,
|
||||
projectPath,
|
||||
});
|
||||
}
|
||||
|
||||
// Sync completed/verified features to app_spec.txt
|
||||
if (status === 'verified' || status === 'completed') {
|
||||
try {
|
||||
await this.featureLoader.syncFeatureToAppSpec(projectPath, feature);
|
||||
} catch (syncError) {
|
||||
// Log but don't fail the status update if sync fails
|
||||
logger.warn(`Failed to sync feature ${featureId} to app_spec.txt:`, syncError);
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`Failed to update feature status for ${featureId}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Mark a feature as interrupted due to server restart or other interruption.
|
||||
*
|
||||
* This is a convenience helper that updates the feature status to 'interrupted',
|
||||
* indicating the feature was in progress but execution was disrupted (e.g., server
|
||||
* restart, process crash, or manual stop). Features with this status can be
|
||||
* resumed later using the resume functionality.
|
||||
*
|
||||
* Note: Features with pipeline_* statuses are preserved rather than overwritten
|
||||
* to 'interrupted'. This ensures that resumePipelineFeature() can pick up from
|
||||
* the correct pipeline step after a restart.
|
||||
*
|
||||
* @param projectPath - Path to the project
|
||||
* @param featureId - ID of the feature to mark as interrupted
|
||||
* @param reason - Optional reason for the interruption (logged for debugging)
|
||||
*/
|
||||
async markFeatureInterrupted(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
reason?: string
|
||||
): Promise<void> {
|
||||
// Load the feature to check its current status
|
||||
const feature = await this.loadFeature(projectPath, featureId);
|
||||
const currentStatus = feature?.status;
|
||||
|
||||
// Preserve pipeline_* statuses so resumePipelineFeature can resume from the correct step
|
||||
if (currentStatus && currentStatus.startsWith('pipeline_')) {
|
||||
logger.info(
|
||||
`Feature ${featureId} was in ${currentStatus}; preserving pipeline status for resume`
|
||||
);
|
||||
return;
|
||||
}
|
||||
|
||||
if (reason) {
|
||||
logger.info(`Marking feature ${featureId} as interrupted: ${reason}`);
|
||||
} else {
|
||||
logger.info(`Marking feature ${featureId} as interrupted`);
|
||||
}
|
||||
|
||||
await this.updateFeatureStatus(projectPath, featureId, 'interrupted');
|
||||
}
|
||||
|
||||
/**
|
||||
* Reset features that were stuck in transient states due to server crash.
|
||||
* Called when auto mode is enabled to clean up from previous session.
|
||||
*
|
||||
* Resets:
|
||||
* - in_progress features back to ready (if has plan) or backlog (if no plan)
|
||||
* - generating planSpec status back to pending
|
||||
* - in_progress tasks back to pending
|
||||
*
|
||||
* @param projectPath - The project path to reset features for
|
||||
*/
|
||||
async resetStuckFeatures(projectPath: string): Promise<void> {
|
||||
const featuresDir = getFeaturesDir(projectPath);
|
||||
|
||||
try {
|
||||
const entries = await secureFs.readdir(featuresDir, { withFileTypes: true });
|
||||
|
||||
for (const entry of entries) {
|
||||
if (!entry.isDirectory()) continue;
|
||||
|
||||
const featurePath = path.join(featuresDir, entry.name, 'feature.json');
|
||||
const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
|
||||
maxBackups: DEFAULT_BACKUP_COUNT,
|
||||
autoRestore: true,
|
||||
});
|
||||
|
||||
const feature = result.data;
|
||||
if (!feature) continue;
|
||||
|
||||
let needsUpdate = false;
|
||||
|
||||
// Reset in_progress features back to ready/backlog
|
||||
if (feature.status === 'in_progress') {
|
||||
const hasApprovedPlan = feature.planSpec?.status === 'approved';
|
||||
feature.status = hasApprovedPlan ? 'ready' : 'backlog';
|
||||
needsUpdate = true;
|
||||
logger.info(
|
||||
`[resetStuckFeatures] Reset feature ${feature.id} from in_progress to ${feature.status}`
|
||||
);
|
||||
}
|
||||
|
||||
// Reset generating planSpec status back to pending (spec generation was interrupted)
|
||||
if (feature.planSpec?.status === 'generating') {
|
||||
feature.planSpec.status = 'pending';
|
||||
needsUpdate = true;
|
||||
logger.info(
|
||||
`[resetStuckFeatures] Reset feature ${feature.id} planSpec status from generating to pending`
|
||||
);
|
||||
}
|
||||
|
||||
// Reset any in_progress tasks back to pending (task execution was interrupted)
|
||||
if (feature.planSpec?.tasks) {
|
||||
for (const task of feature.planSpec.tasks) {
|
||||
if (task.status === 'in_progress') {
|
||||
task.status = 'pending';
|
||||
needsUpdate = true;
|
||||
logger.info(
|
||||
`[resetStuckFeatures] Reset task ${task.id} for feature ${feature.id} from in_progress to pending`
|
||||
);
|
||||
// Clear currentTaskId if it points to this reverted task
|
||||
if (feature.planSpec?.currentTaskId === task.id) {
|
||||
feature.planSpec.currentTaskId = undefined;
|
||||
logger.info(
|
||||
`[resetStuckFeatures] Cleared planSpec.currentTaskId for feature ${feature.id} (was pointing to reverted task ${task.id})`
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (needsUpdate) {
|
||||
feature.updatedAt = new Date().toISOString();
|
||||
await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
// If features directory doesn't exist, that's fine
|
||||
if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
|
||||
logger.error(`[resetStuckFeatures] Error resetting features for ${projectPath}:`, error);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Update the planSpec of a feature with partial updates.
|
||||
*
|
||||
* @param projectPath - The project path
|
||||
* @param featureId - The feature ID
|
||||
* @param updates - Partial PlanSpec updates to apply
|
||||
*/
|
||||
async updateFeaturePlanSpec(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
updates: Partial<PlanSpec>
|
||||
): Promise<void> {
|
||||
const featureDir = getFeatureDir(projectPath, featureId);
|
||||
const featurePath = path.join(featureDir, 'feature.json');
|
||||
|
||||
try {
|
||||
const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
|
||||
maxBackups: DEFAULT_BACKUP_COUNT,
|
||||
autoRestore: true,
|
||||
});
|
||||
|
||||
logRecoveryWarning(result, `Feature ${featureId}`, logger);
|
||||
|
||||
const feature = result.data;
|
||||
if (!feature) {
|
||||
logger.warn(`Feature ${featureId} not found or could not be recovered`);
|
||||
return;
|
||||
}
|
||||
|
||||
// Initialize planSpec if it doesn't exist
|
||||
if (!feature.planSpec) {
|
||||
feature.planSpec = {
|
||||
status: 'pending',
|
||||
version: 1,
|
||||
reviewedByUser: false,
|
||||
};
|
||||
}
|
||||
|
||||
// Capture old content BEFORE applying updates for version comparison
|
||||
const oldContent = feature.planSpec.content;
|
||||
|
||||
// Apply updates
|
||||
Object.assign(feature.planSpec, updates);
|
||||
|
||||
// If content is being updated and it's different from old content, increment version
|
||||
if (updates.content && updates.content !== oldContent) {
|
||||
feature.planSpec.version = (feature.planSpec.version || 0) + 1;
|
||||
}
|
||||
|
||||
feature.updatedAt = new Date().toISOString();
|
||||
|
||||
// PERSIST BEFORE EMIT
|
||||
await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
|
||||
} catch (error) {
|
||||
logger.error(`Failed to update planSpec for ${featureId}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Save the extracted summary to a feature's summary field.
|
||||
* This is called after agent execution completes to save a summary
|
||||
* extracted from the agent's output using <summary> tags.
|
||||
*
|
||||
* @param projectPath - The project path
|
||||
* @param featureId - The feature ID
|
||||
* @param summary - The summary text to save
|
||||
*/
|
||||
async saveFeatureSummary(projectPath: string, featureId: string, summary: string): Promise<void> {
|
||||
const featureDir = getFeatureDir(projectPath, featureId);
|
||||
const featurePath = path.join(featureDir, 'feature.json');
|
||||
|
||||
try {
|
||||
const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
|
||||
maxBackups: DEFAULT_BACKUP_COUNT,
|
||||
autoRestore: true,
|
||||
});
|
||||
|
||||
logRecoveryWarning(result, `Feature ${featureId}`, logger);
|
||||
|
||||
const feature = result.data;
|
||||
if (!feature) {
|
||||
logger.warn(`Feature ${featureId} not found or could not be recovered`);
|
||||
return;
|
||||
}
|
||||
|
||||
feature.summary = summary;
|
||||
feature.updatedAt = new Date().toISOString();
|
||||
|
||||
// PERSIST BEFORE EMIT
|
||||
await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
|
||||
|
||||
// Emit event for UI update
|
||||
this.emitAutoModeEvent('auto_mode_summary', {
|
||||
featureId,
|
||||
projectPath,
|
||||
summary,
|
||||
});
|
||||
} catch (error) {
|
||||
logger.error(`Failed to save summary for ${featureId}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Update the status of a specific task within planSpec.tasks
|
||||
*
|
||||
* @param projectPath - The project path
|
||||
* @param featureId - The feature ID
|
||||
* @param taskId - The task ID to update
|
||||
* @param status - The new task status
|
||||
*/
|
||||
async updateTaskStatus(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
taskId: string,
|
||||
status: ParsedTask['status']
|
||||
): Promise<void> {
|
||||
const featureDir = getFeatureDir(projectPath, featureId);
|
||||
const featurePath = path.join(featureDir, 'feature.json');
|
||||
|
||||
try {
|
||||
const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
|
||||
maxBackups: DEFAULT_BACKUP_COUNT,
|
||||
autoRestore: true,
|
||||
});
|
||||
|
||||
logRecoveryWarning(result, `Feature ${featureId}`, logger);
|
||||
|
||||
const feature = result.data;
|
||||
if (!feature || !feature.planSpec?.tasks) {
|
||||
logger.warn(`Feature ${featureId} not found or has no tasks`);
|
||||
return;
|
||||
}
|
||||
|
||||
// Find and update the task
|
||||
const task = feature.planSpec.tasks.find((t) => t.id === taskId);
|
||||
if (task) {
|
||||
task.status = status;
|
||||
feature.updatedAt = new Date().toISOString();
|
||||
|
||||
// PERSIST BEFORE EMIT
|
||||
await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
|
||||
|
||||
// Emit event for UI update
|
||||
this.emitAutoModeEvent('auto_mode_task_status', {
|
||||
featureId,
|
||||
projectPath,
|
||||
taskId,
|
||||
status,
|
||||
tasks: feature.planSpec.tasks,
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(`Failed to update task ${taskId} status for ${featureId}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit an auto-mode event via the event emitter
|
||||
*
|
||||
* @param eventType - The event type (e.g., 'auto_mode_summary')
|
||||
* @param data - The event payload
|
||||
*/
|
||||
private emitAutoModeEvent(eventType: string, data: Record<string, unknown>): void {
|
||||
// Wrap the event in auto-mode:event format expected by the client
|
||||
this.events.emit('auto-mode:event', {
|
||||
type: eventType,
|
||||
...data,
|
||||
});
|
||||
}
|
||||
}
|
||||
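/*
 * Usage sketch (illustrative only; `events` and `featureLoader` are assumed to
 * be the instances the composing service already owns):
 *
 *   const stateManager = new FeatureStateManager(events, featureLoader);
 *
 *   // The status is persisted to feature.json (with backups) BEFORE any event
 *   // is emitted, so a client that refreshes mid-update never sees stale data.
 *   await stateManager.updateFeatureStatus('/repos/demo', 'feature-123', 'waiting_approval');
 *
 *   // After a server restart, clear transient states before re-enabling auto mode.
 *   await stateManager.resetStuckFeatures('/repos/demo');
 */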
@@ -23,9 +23,7 @@ import type {
|
||||
SendMessageOptions,
|
||||
PromptCategory,
|
||||
IdeationPrompt,
|
||||
IdeationContextSources,
|
||||
} from '@automaker/types';
|
||||
import { DEFAULT_IDEATION_CONTEXT_SOURCES } from '@automaker/types';
|
||||
import {
|
||||
getIdeationDir,
|
||||
getIdeasDir,
|
||||
@@ -34,10 +32,8 @@ import {
|
||||
getIdeationSessionsDir,
|
||||
getIdeationSessionPath,
|
||||
getIdeationAnalysisPath,
|
||||
getAppSpecPath,
|
||||
ensureIdeationDir,
|
||||
} from '@automaker/platform';
|
||||
import { extractXmlElements, extractImplementedFeatures } from '../lib/xml-extractor.js';
|
||||
import { createLogger, loadContextFiles, isAbortError } from '@automaker/utils';
|
||||
import { ProviderFactory } from '../providers/provider-factory.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
@@ -642,12 +638,8 @@ export class IdeationService {
|
||||
projectPath: string,
|
||||
promptId: string,
|
||||
category: IdeaCategory,
|
||||
count: number = 10,
|
||||
contextSources?: IdeationContextSources
|
||||
count: number = 10
|
||||
): Promise<AnalysisSuggestion[]> {
|
||||
const suggestionCount = Math.min(Math.max(Math.floor(count ?? 10), 1), 20);
|
||||
// Merge with defaults for backward compatibility
|
||||
const sources = { ...DEFAULT_IDEATION_CONTEXT_SOURCES, ...contextSources };
|
||||
validateWorkingDirectory(projectPath);
|
||||
|
||||
// Get the prompt
|
||||
@@ -664,26 +656,16 @@ export class IdeationService {
|
||||
});
|
||||
|
||||
try {
|
||||
// Load context files (respecting toggle settings)
|
||||
// Load context files
|
||||
const contextResult = await loadContextFiles({
|
||||
projectPath,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
includeContextFiles: sources.useContextFiles,
|
||||
includeMemory: sources.useMemoryFiles,
|
||||
});
|
||||
|
||||
// Build context from multiple sources
|
||||
let contextPrompt = contextResult.formattedPrompt;
|
||||
|
||||
// Add app spec context if enabled
|
||||
if (sources.useAppSpec) {
|
||||
const appSpecContext = await this.buildAppSpecContext(projectPath);
|
||||
if (appSpecContext) {
|
||||
contextPrompt = contextPrompt ? `${contextPrompt}\n\n${appSpecContext}` : appSpecContext;
|
||||
}
|
||||
}
|
||||
|
||||
// If no context was found, try to gather basic project info
|
||||
// If no context files, try to gather basic project info
|
||||
if (!contextPrompt) {
|
||||
const projectInfo = await this.gatherBasicProjectInfo(projectPath);
|
||||
if (projectInfo) {
|
||||
@@ -691,11 +673,8 @@ export class IdeationService {
|
||||
}
|
||||
}
|
||||
|
||||
// Gather existing features and ideas to prevent duplicates (respecting toggle settings)
|
||||
const existingWorkContext = await this.gatherExistingWorkContext(projectPath, {
|
||||
includeFeatures: sources.useExistingFeatures,
|
||||
includeIdeas: sources.useExistingIdeas,
|
||||
});
|
||||
// Gather existing features and ideas to prevent duplicates
|
||||
const existingWorkContext = await this.gatherExistingWorkContext(projectPath);
|
||||
|
||||
// Get customized prompts from settings
|
||||
const prompts = await getPromptCustomization(this.settingsService, '[IdeationService]');
|
||||
@@ -705,7 +684,7 @@ export class IdeationService {
|
||||
prompts.ideation.suggestionsSystemPrompt,
|
||||
contextPrompt,
|
||||
category,
|
||||
suggestionCount,
|
||||
count,
|
||||
existingWorkContext
|
||||
);
|
||||
|
||||
@@ -772,11 +751,7 @@ export class IdeationService {
|
||||
}
|
||||
|
||||
// Parse the response into structured suggestions
|
||||
const suggestions = this.parseSuggestionsFromResponse(
|
||||
responseText,
|
||||
category,
|
||||
suggestionCount
|
||||
);
|
||||
const suggestions = this.parseSuggestionsFromResponse(responseText, category);
|
||||
|
||||
// Emit complete event
|
||||
this.events.emit('ideation:suggestions', {
|
||||
@@ -839,47 +814,40 @@ ${contextSection}${existingWorkSection}`;
|
||||
*/
|
||||
private parseSuggestionsFromResponse(
|
||||
response: string,
|
||||
category: IdeaCategory,
|
||||
count: number
|
||||
category: IdeaCategory
|
||||
): AnalysisSuggestion[] {
|
||||
try {
|
||||
// Try to extract JSON from the response
|
||||
const jsonMatch = response.match(/\[[\s\S]*\]/);
|
||||
if (!jsonMatch) {
|
||||
logger.warn('No JSON array found in response, falling back to text parsing');
|
||||
return this.parseTextResponse(response, category, count);
|
||||
return this.parseTextResponse(response, category);
|
||||
}
|
||||
|
||||
const parsed = JSON.parse(jsonMatch[0]);
|
||||
if (!Array.isArray(parsed)) {
|
||||
return this.parseTextResponse(response, category, count);
|
||||
return this.parseTextResponse(response, category);
|
||||
}
|
||||
|
||||
return parsed
|
||||
.map((item: any, index: number) => ({
|
||||
id: this.generateId('sug'),
|
||||
category,
|
||||
title: item.title || `Suggestion ${index + 1}`,
|
||||
description: item.description || '',
|
||||
rationale: item.rationale || '',
|
||||
priority: item.priority || 'medium',
|
||||
relatedFiles: item.relatedFiles || [],
|
||||
}))
|
||||
.slice(0, count);
|
||||
return parsed.map((item: any, index: number) => ({
|
||||
id: this.generateId('sug'),
|
||||
category,
|
||||
title: item.title || `Suggestion ${index + 1}`,
|
||||
description: item.description || '',
|
||||
rationale: item.rationale || '',
|
||||
priority: item.priority || 'medium',
|
||||
relatedFiles: item.relatedFiles || [],
|
||||
}));
|
||||
} catch (error) {
|
||||
logger.warn('Failed to parse JSON response:', error);
|
||||
return this.parseTextResponse(response, category, count);
|
||||
return this.parseTextResponse(response, category);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Fallback: parse text response into suggestions
|
||||
*/
|
||||
private parseTextResponse(
|
||||
response: string,
|
||||
category: IdeaCategory,
|
||||
count: number
|
||||
): AnalysisSuggestion[] {
|
||||
private parseTextResponse(response: string, category: IdeaCategory): AnalysisSuggestion[] {
|
||||
const suggestions: AnalysisSuggestion[] = [];
|
||||
|
||||
// Try to find numbered items or headers
|
||||
@@ -939,7 +907,7 @@ ${contextSection}${existingWorkSection}`;
|
||||
});
|
||||
}
|
||||
|
||||
return suggestions.slice(0, count);
|
||||
return suggestions.slice(0, 5); // Max 5 suggestions
|
||||
}
|
||||
|
||||
// ============================================================================
|
||||
@@ -1377,68 +1345,6 @@ ${contextSection}${existingWorkSection}`;
|
||||
return descriptions[category] || '';
|
||||
}
|
||||
|
||||
/**
|
||||
* Build context from app_spec.txt for suggestion generation
|
||||
* Extracts project name, overview, capabilities, and implemented features
|
||||
*/
|
||||
private async buildAppSpecContext(projectPath: string): Promise<string> {
|
||||
try {
|
||||
const specPath = getAppSpecPath(projectPath);
|
||||
const specContent = (await secureFs.readFile(specPath, 'utf-8')) as string;
|
||||
|
||||
const parts: string[] = [];
|
||||
parts.push('## App Specification');
|
||||
|
||||
// Extract project name
|
||||
const projectNames = extractXmlElements(specContent, 'project_name');
|
||||
if (projectNames.length > 0 && projectNames[0]) {
|
||||
parts.push(`**Project:** ${projectNames[0]}`);
|
||||
}
|
||||
|
||||
// Extract overview
|
||||
const overviews = extractXmlElements(specContent, 'overview');
|
||||
if (overviews.length > 0 && overviews[0]) {
|
||||
parts.push(`**Overview:** ${overviews[0]}`);
|
||||
}
|
||||
|
||||
// Extract core capabilities
|
||||
const capabilities = extractXmlElements(specContent, 'capability');
|
||||
if (capabilities.length > 0) {
|
||||
parts.push('**Core Capabilities:**');
|
||||
for (const cap of capabilities) {
|
||||
parts.push(`- ${cap}`);
|
||||
}
|
||||
}
|
||||
|
||||
// Extract implemented features
|
||||
const implementedFeatures = extractImplementedFeatures(specContent);
|
||||
if (implementedFeatures.length > 0) {
|
||||
parts.push('**Implemented Features:**');
|
||||
for (const feature of implementedFeatures) {
|
||||
if (feature.description) {
|
||||
parts.push(`- ${feature.name}: ${feature.description}`);
|
||||
} else {
|
||||
parts.push(`- ${feature.name}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Only return content if we extracted something meaningful
|
||||
if (parts.length > 1) {
|
||||
return parts.join('\n');
|
||||
}
|
||||
return '';
|
||||
} catch (error) {
|
||||
// If file doesn't exist, return empty string silently
|
||||
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
|
||||
return '';
|
||||
}
|
||||
// For other errors, log and return empty string
|
||||
logger.warn('Failed to build app spec context:', error);
|
||||
return '';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Gather basic project information for context when no context files exist
|
||||
*/
|
||||
@@ -1534,15 +1440,11 @@ ${contextSection}${existingWorkSection}`;
|
||||
* Gather existing features and ideas to prevent duplicate suggestions
|
||||
* Returns a concise list of titles grouped by status to avoid polluting context
|
||||
*/
|
||||
private async gatherExistingWorkContext(
|
||||
projectPath: string,
|
||||
options?: { includeFeatures?: boolean; includeIdeas?: boolean }
|
||||
): Promise<string> {
|
||||
const { includeFeatures = true, includeIdeas = true } = options ?? {};
|
||||
private async gatherExistingWorkContext(projectPath: string): Promise<string> {
|
||||
const parts: string[] = [];
|
||||
|
||||
// Load existing features from the board
|
||||
if (includeFeatures && this.featureLoader) {
|
||||
if (this.featureLoader) {
|
||||
try {
|
||||
const features = await this.featureLoader.getAll(projectPath);
|
||||
if (features.length > 0) {
|
||||
@@ -1590,36 +1492,34 @@ ${contextSection}${existingWorkSection}`;
|
||||
}
|
||||
|
||||
// Load existing ideas
|
||||
if (includeIdeas) {
|
||||
try {
|
||||
const ideas = await this.getIdeas(projectPath);
|
||||
// Filter out archived ideas
|
||||
const activeIdeas = ideas.filter((idea) => idea.status !== 'archived');
|
||||
try {
|
||||
const ideas = await this.getIdeas(projectPath);
|
||||
// Filter out archived ideas
|
||||
const activeIdeas = ideas.filter((idea) => idea.status !== 'archived');
|
||||
|
||||
if (activeIdeas.length > 0) {
|
||||
parts.push('## Existing Ideas (Do NOT regenerate these)');
|
||||
parts.push(
|
||||
'The following ideas have already been captured. Do NOT suggest similar ideas:\n'
|
||||
);
|
||||
if (activeIdeas.length > 0) {
|
||||
parts.push('## Existing Ideas (Do NOT regenerate these)');
|
||||
parts.push(
|
||||
'The following ideas have already been captured. Do NOT suggest similar ideas:\n'
|
||||
);
|
||||
|
||||
// Group by category for organization
|
||||
const byCategory: Record<string, string[]> = {};
|
||||
for (const idea of activeIdeas) {
|
||||
const cat = idea.category || 'feature';
|
||||
if (!byCategory[cat]) {
|
||||
byCategory[cat] = [];
|
||||
}
|
||||
byCategory[cat].push(idea.title);
|
||||
// Group by category for organization
|
||||
const byCategory: Record<string, string[]> = {};
|
||||
for (const idea of activeIdeas) {
|
||||
const cat = idea.category || 'feature';
|
||||
if (!byCategory[cat]) {
|
||||
byCategory[cat] = [];
|
||||
}
|
||||
|
||||
for (const [category, titles] of Object.entries(byCategory)) {
|
||||
parts.push(`**${category}:** ${titles.join(', ')}`);
|
||||
}
|
||||
parts.push('');
|
||||
byCategory[cat].push(idea.title);
|
||||
}
|
||||
} catch (error) {
|
||||
logger.warn('Failed to load existing ideas:', error);
|
||||
|
||||
for (const [category, titles] of Object.entries(byCategory)) {
|
||||
parts.push(`**${category}:** ${titles.join(', ')}`);
|
||||
}
|
||||
parts.push('');
|
||||
}
|
||||
} catch (error) {
|
||||
logger.warn('Failed to load existing ideas:', error);
|
||||
}
|
||||
|
||||
return parts.join('\n');
|
||||
|
||||
@@ -1,583 +0,0 @@
|
||||
/**
|
||||
* PipelineOrchestrator - Pipeline step execution and coordination
|
||||
*/
|
||||
|
||||
import path from 'path';
|
||||
import type {
|
||||
Feature,
|
||||
PipelineStep,
|
||||
PipelineConfig,
|
||||
FeatureStatusWithPipeline,
|
||||
} from '@automaker/types';
|
||||
import { createLogger, loadContextFiles, classifyError } from '@automaker/utils';
|
||||
import { getFeatureDir } from '@automaker/platform';
|
||||
import { resolveModelString, DEFAULT_MODELS } from '@automaker/model-resolver';
|
||||
import * as secureFs from '../lib/secure-fs.js';
|
||||
import {
|
||||
getPromptCustomization,
|
||||
getAutoLoadClaudeMdSetting,
|
||||
filterClaudeMdFromContext,
|
||||
} from '../lib/settings-helpers.js';
|
||||
import { validateWorkingDirectory } from '../lib/sdk-options.js';
|
||||
import type { TypedEventBus } from './typed-event-bus.js';
|
||||
import type { FeatureStateManager } from './feature-state-manager.js';
|
||||
import type { AgentExecutor } from './agent-executor.js';
|
||||
import type { WorktreeResolver } from './worktree-resolver.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
import type { ConcurrencyManager } from './concurrency-manager.js';
|
||||
import { pipelineService } from './pipeline-service.js';
|
||||
import type { TestRunnerService, TestRunStatus } from './test-runner-service.js';
|
||||
import type {
|
||||
PipelineContext,
|
||||
PipelineStatusInfo,
|
||||
StepResult,
|
||||
MergeResult,
|
||||
UpdateFeatureStatusFn,
|
||||
BuildFeaturePromptFn,
|
||||
ExecuteFeatureFn,
|
||||
RunAgentFn,
|
||||
} from './pipeline-types.js';
|
||||
|
||||
// Re-export types for backward compatibility
|
||||
export type {
|
||||
PipelineContext,
|
||||
PipelineStatusInfo,
|
||||
StepResult,
|
||||
MergeResult,
|
||||
UpdateFeatureStatusFn,
|
||||
BuildFeaturePromptFn,
|
||||
ExecuteFeatureFn,
|
||||
RunAgentFn,
|
||||
} from './pipeline-types.js';
|
||||
|
||||
const logger = createLogger('PipelineOrchestrator');
|
||||
|
||||
export class PipelineOrchestrator {
|
||||
constructor(
|
||||
private eventBus: TypedEventBus,
|
||||
private featureStateManager: FeatureStateManager,
|
||||
private agentExecutor: AgentExecutor,
|
||||
private testRunnerService: TestRunnerService,
|
||||
private worktreeResolver: WorktreeResolver,
|
||||
private concurrencyManager: ConcurrencyManager,
|
||||
private settingsService: SettingsService | null,
|
||||
private updateFeatureStatusFn: UpdateFeatureStatusFn,
|
||||
private loadContextFilesFn: typeof loadContextFiles,
|
||||
private buildFeaturePromptFn: BuildFeaturePromptFn,
|
||||
private executeFeatureFn: ExecuteFeatureFn,
|
||||
private runAgentFn: RunAgentFn,
|
||||
private serverPort = 3008
|
||||
) {}
|
||||
|
||||
async executePipeline(ctx: PipelineContext): Promise<void> {
|
||||
const { projectPath, featureId, feature, steps, workDir, abortController, autoLoadClaudeMd } =
|
||||
ctx;
|
||||
const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
|
||||
const contextResult = await this.loadContextFilesFn({
|
||||
projectPath,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
taskContext: { title: feature.title ?? '', description: feature.description ?? '' },
|
||||
});
|
||||
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
|
||||
const contextPath = path.join(getFeatureDir(projectPath, featureId), 'agent-output.md');
|
||||
let previousContext = '';
|
||||
try {
|
||||
previousContext = (await secureFs.readFile(contextPath, 'utf-8')) as string;
|
||||
} catch {
|
||||
/* */
|
||||
}
|
||||
|
||||
for (let i = 0; i < steps.length; i++) {
|
||||
const step = steps[i];
|
||||
if (abortController.signal.aborted) throw new Error('Pipeline execution aborted');
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, `pipeline_${step.id}`);
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_progress', {
|
||||
featureId,
|
||||
branchName: feature.branchName ?? null,
|
||||
content: `Starting pipeline step ${i + 1}/${steps.length}: ${step.name}`,
|
||||
projectPath,
|
||||
});
|
||||
this.eventBus.emitAutoModeEvent('pipeline_step_started', {
|
||||
featureId,
|
||||
stepId: step.id,
|
||||
stepName: step.name,
|
||||
stepIndex: i,
|
||||
totalSteps: steps.length,
|
||||
projectPath,
|
||||
});
|
||||
const model = resolveModelString(feature.model, DEFAULT_MODELS.claude);
|
||||
await this.runAgentFn(
|
||||
workDir,
|
||||
featureId,
|
||||
this.buildPipelineStepPrompt(step, feature, previousContext, prompts.taskExecution),
|
||||
abortController,
|
||||
projectPath,
|
||||
undefined,
|
||||
model,
|
||||
{
|
||||
projectPath,
|
||||
planningMode: 'skip',
|
||||
requirePlanApproval: false,
|
||||
previousContent: previousContext,
|
||||
systemPrompt: contextFilesPrompt || undefined,
|
||||
autoLoadClaudeMd,
|
||||
thinkingLevel: feature.thinkingLevel,
|
||||
}
|
||||
);
|
||||
try {
|
||||
previousContext = (await secureFs.readFile(contextPath, 'utf-8')) as string;
|
||||
} catch {
|
||||
/* */
|
||||
}
|
||||
this.eventBus.emitAutoModeEvent('pipeline_step_complete', {
|
||||
featureId,
|
||||
stepId: step.id,
|
||||
stepName: step.name,
|
||||
stepIndex: i,
|
||||
totalSteps: steps.length,
|
||||
projectPath,
|
||||
});
|
||||
}
|
||||
if (ctx.branchName) {
|
||||
const mergeResult = await this.attemptMerge(ctx);
|
||||
if (!mergeResult.success && mergeResult.hasConflicts) return;
|
||||
}
|
||||
}
|
||||
|
||||
buildPipelineStepPrompt(
|
||||
step: PipelineStep,
|
||||
feature: Feature,
|
||||
previousContext: string,
|
||||
taskPrompts: { implementationInstructions: string; playwrightVerificationInstructions: string }
|
||||
): string {
|
||||
let prompt = `## Pipeline Step: ${step.name}\n\nThis is an automated pipeline step.\n\n### Feature Context\n${this.buildFeaturePromptFn(feature, taskPrompts)}\n\n`;
|
||||
if (previousContext) prompt += `### Previous Work\n${previousContext}\n\n`;
|
||||
return (
|
||||
prompt +
|
||||
`### Pipeline Step Instructions\n${step.instructions}\n\n### Task\nComplete the pipeline step instructions above.`
|
||||
);
|
||||
}
|
||||
|
||||
async detectPipelineStatus(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
currentStatus: FeatureStatusWithPipeline
|
||||
): Promise<PipelineStatusInfo> {
|
||||
const isPipeline = pipelineService.isPipelineStatus(currentStatus);
|
||||
if (!isPipeline)
|
||||
return {
|
||||
isPipeline: false,
|
||||
stepId: null,
|
||||
stepIndex: -1,
|
||||
totalSteps: 0,
|
||||
step: null,
|
||||
config: null,
|
||||
};
|
||||
const stepId = pipelineService.getStepIdFromStatus(currentStatus);
|
||||
if (!stepId)
|
||||
return {
|
||||
isPipeline: true,
|
||||
stepId: null,
|
||||
stepIndex: -1,
|
||||
totalSteps: 0,
|
||||
step: null,
|
||||
config: null,
|
||||
};
|
||||
const config = await pipelineService.getPipelineConfig(projectPath);
|
||||
if (!config || config.steps.length === 0)
|
||||
return { isPipeline: true, stepId, stepIndex: -1, totalSteps: 0, step: null, config: null };
|
||||
const sortedSteps = [...config.steps].sort((a, b) => a.order - b.order);
|
||||
const stepIndex = sortedSteps.findIndex((s) => s.id === stepId);
|
||||
return {
|
||||
isPipeline: true,
|
||||
stepId,
|
||||
stepIndex,
|
||||
totalSteps: sortedSteps.length,
|
||||
step: stepIndex === -1 ? null : sortedSteps[stepIndex],
|
||||
config,
|
||||
};
|
||||
}
|
||||
|
||||
async resumePipeline(
|
||||
projectPath: string,
|
||||
feature: Feature,
|
||||
useWorktrees: boolean,
|
||||
pipelineInfo: PipelineStatusInfo
|
||||
): Promise<void> {
|
||||
const featureId = feature.id;
|
||||
const contextPath = path.join(getFeatureDir(projectPath, featureId), 'agent-output.md');
|
||||
let hasContext = false;
|
||||
try {
|
||||
await secureFs.access(contextPath);
|
||||
hasContext = true;
|
||||
} catch {
|
||||
/* No context */
|
||||
}
|
||||
|
||||
if (!hasContext) {
|
||||
logger.warn(`No context for feature ${featureId}, restarting pipeline`);
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, 'in_progress');
|
||||
return this.executeFeatureFn(projectPath, featureId, useWorktrees, false, undefined, {
|
||||
_calledInternally: true,
|
||||
});
|
||||
}
|
||||
|
||||
if (pipelineInfo.stepIndex === -1) {
|
||||
logger.warn(`Step ${pipelineInfo.stepId} no longer exists, completing feature`);
|
||||
const finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, finalStatus);
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
passes: true,
|
||||
message: 'Pipeline step no longer exists',
|
||||
projectPath,
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
if (!pipelineInfo.config) throw new Error('Pipeline config is null but stepIndex is valid');
|
||||
return this.resumeFromStep(
|
||||
projectPath,
|
||||
feature,
|
||||
useWorktrees,
|
||||
pipelineInfo.stepIndex,
|
||||
pipelineInfo.config
|
||||
);
|
||||
}
|
||||
|
||||
/** Resume from a specific step index */
|
||||
async resumeFromStep(
|
||||
projectPath: string,
|
||||
feature: Feature,
|
||||
useWorktrees: boolean,
|
||||
startFromStepIndex: number,
|
||||
pipelineConfig: PipelineConfig
|
||||
): Promise<void> {
|
||||
const featureId = feature.id;
|
||||
const allSortedSteps = [...pipelineConfig.steps].sort((a, b) => a.order - b.order);
|
||||
if (startFromStepIndex < 0 || startFromStepIndex >= allSortedSteps.length)
|
||||
throw new Error(`Invalid step index: ${startFromStepIndex}`);
|
||||
|
||||
const excludedStepIds = new Set(feature.excludedPipelineSteps || []);
|
||||
let currentStep = allSortedSteps[startFromStepIndex];
|
||||
|
||||
if (excludedStepIds.has(currentStep.id)) {
|
||||
const nextStatus = pipelineService.getNextStatus(
|
||||
`pipeline_${currentStep.id}`,
|
||||
pipelineConfig,
|
||||
feature.skipTests ?? false,
|
||||
feature.excludedPipelineSteps
|
||||
);
|
||||
if (!pipelineService.isPipelineStatus(nextStatus)) {
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, nextStatus);
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
passes: true,
|
||||
message: 'Pipeline completed (remaining steps excluded)',
|
||||
projectPath,
|
||||
});
|
||||
return;
|
||||
}
|
||||
const nextStepId = pipelineService.getStepIdFromStatus(nextStatus);
|
||||
const nextStepIndex = allSortedSteps.findIndex((s) => s.id === nextStepId);
|
||||
if (nextStepIndex === -1) throw new Error(`Next step ${nextStepId} not found`);
|
||||
startFromStepIndex = nextStepIndex;
|
||||
}
|
||||
|
||||
const stepsToExecute = allSortedSteps
|
||||
.slice(startFromStepIndex)
|
||||
.filter((step) => !excludedStepIds.has(step.id));
|
||||
if (stepsToExecute.length === 0) {
|
||||
const finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, finalStatus);
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
passes: true,
|
||||
message: 'Pipeline completed (all steps excluded)',
|
||||
projectPath,
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const runningEntry = this.concurrencyManager.acquire({
|
||||
featureId,
|
||||
projectPath,
|
||||
isAutoMode: false,
|
||||
allowReuse: true,
|
||||
});
|
||||
const abortController = runningEntry.abortController;
|
||||
runningEntry.branchName = feature.branchName ?? null;
|
||||
|
||||
try {
|
||||
validateWorkingDirectory(projectPath);
|
||||
let worktreePath: string | null = null;
|
||||
const branchName = feature.branchName;
|
||||
|
||||
if (useWorktrees && branchName) {
|
||||
worktreePath = await this.worktreeResolver.findWorktreeForBranch(projectPath, branchName);
|
||||
if (worktreePath) logger.info(`Using worktree for branch "${branchName}": ${worktreePath}`);
|
||||
}
|
||||
|
||||
const workDir = worktreePath ? path.resolve(worktreePath) : path.resolve(projectPath);
|
||||
validateWorkingDirectory(workDir);
|
||||
runningEntry.worktreePath = worktreePath;
|
||||
runningEntry.branchName = branchName ?? null;
|
||||
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_start', {
|
||||
featureId,
|
||||
projectPath,
|
||||
branchName: branchName ?? null,
|
||||
feature: {
|
||||
id: featureId,
|
||||
title: feature.title || 'Resuming Pipeline',
|
||||
description: feature.description,
|
||||
},
|
||||
});
|
||||
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
this.settingsService,
|
||||
'[AutoMode]'
|
||||
);
|
||||
const context: PipelineContext = {
|
||||
projectPath,
|
||||
featureId,
|
||||
feature,
|
||||
steps: stepsToExecute,
|
||||
workDir,
|
||||
worktreePath,
|
||||
branchName: branchName ?? null,
|
||||
abortController,
|
||||
autoLoadClaudeMd,
|
||||
testAttempts: 0,
|
||||
maxTestAttempts: 5,
|
||||
};
|
||||
|
||||
await this.executePipeline(context);
|
||||
|
||||
const finalStatus = feature.skipTests ? 'waiting_approval' : 'verified';
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, finalStatus);
|
||||
logger.info(`Pipeline resume completed for feature ${featureId}`);
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
passes: true,
|
||||
message: 'Pipeline resumed successfully',
|
||||
projectPath,
|
||||
});
|
||||
} catch (error) {
|
||||
const errorInfo = classifyError(error);
|
||||
if (errorInfo.isAbort) {
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
passes: false,
|
||||
message: 'Pipeline stopped by user',
|
||||
projectPath,
|
||||
});
|
||||
} else {
|
||||
logger.error(`Pipeline resume failed for ${featureId}:`, error);
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, 'backlog');
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_error', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName: feature.branchName ?? null,
|
||||
error: errorInfo.message,
|
||||
errorType: errorInfo.type,
|
||||
projectPath,
|
||||
});
|
||||
}
|
||||
} finally {
|
||||
this.concurrencyManager.release(featureId);
|
||||
}
|
||||
}
|
||||
|
||||
/** Execute test step with agent fix loop (REQ-F07) */
|
||||
async executeTestStep(context: PipelineContext, testCommand: string): Promise<StepResult> {
|
||||
const { featureId, projectPath, workDir, abortController, maxTestAttempts } = context;
|
||||
|
||||
for (let attempt = 1; attempt <= maxTestAttempts; attempt++) {
|
||||
if (abortController.signal.aborted)
|
||||
return { success: false, message: 'Test execution aborted' };
|
||||
logger.info(`Running tests for ${featureId} (attempt ${attempt}/${maxTestAttempts})`);
|
||||
|
||||
const testResult = await this.testRunnerService.startTests(workDir, { command: testCommand });
|
||||
if (!testResult.success || !testResult.result?.sessionId)
|
||||
return {
|
||||
success: false,
|
||||
testsPassed: false,
|
||||
message: testResult.error || 'Failed to start tests',
|
||||
};
|
||||
|
||||
const completionResult = await this.waitForTestCompletion(testResult.result.sessionId);
|
||||
if (completionResult.status === 'passed') return { success: true, testsPassed: true };
|
||||
|
||||
const sessionOutput = this.testRunnerService.getSessionOutput(testResult.result.sessionId);
|
||||
const scrollback = sessionOutput.result?.output || '';
|
||||
this.eventBus.emitAutoModeEvent('pipeline_test_failed', {
|
||||
featureId,
|
||||
attempt,
|
||||
maxAttempts: maxTestAttempts,
|
||||
failedTests: this.extractFailedTestNames(scrollback),
|
||||
projectPath,
|
||||
});
|
||||
|
||||
if (attempt < maxTestAttempts) {
|
||||
const fixPrompt = `## Test Failures - Please Fix\n\n${this.buildTestFailureSummary(scrollback)}\n\nFix the failing tests without modifying test code unless clearly wrong.`;
|
||||
await this.runAgentFn(
|
||||
workDir,
|
||||
featureId,
|
||||
fixPrompt,
|
||||
abortController,
|
||||
projectPath,
|
||||
undefined,
|
||||
undefined,
|
||||
{ projectPath, planningMode: 'skip', requirePlanApproval: false }
|
||||
);
|
||||
}
|
||||
}
|
||||
return {
|
||||
success: false,
|
||||
testsPassed: false,
|
||||
message: `Tests failed after ${maxTestAttempts} attempts`,
|
||||
};
|
||||
}
|
||||
|
||||
/** Wait for test completion */
|
||||
private async waitForTestCompletion(
|
||||
sessionId: string
|
||||
): Promise<{ status: TestRunStatus; exitCode: number | null; duration: number }> {
|
||||
return new Promise((resolve) => {
|
||||
const checkInterval = setInterval(() => {
|
||||
const session = this.testRunnerService.getSession(sessionId);
|
||||
if (session && session.status !== 'running' && session.status !== 'pending') {
|
||||
clearInterval(checkInterval);
|
||||
resolve({
|
||||
status: session.status,
|
||||
exitCode: session.exitCode,
|
||||
duration: session.finishedAt
|
||||
? session.finishedAt.getTime() - session.startedAt.getTime()
|
||||
: 0,
|
||||
});
|
||||
}
|
||||
}, 1000);
|
||||
setTimeout(() => {
|
||||
clearInterval(checkInterval);
|
||||
resolve({ status: 'failed', exitCode: null, duration: 600000 });
|
||||
}, 600000);
|
||||
});
|
||||
}
|
||||
|
||||
/** Attempt to merge feature branch (REQ-F05) */
|
||||
async attemptMerge(context: PipelineContext): Promise<MergeResult> {
|
||||
const { projectPath, featureId, branchName, worktreePath, feature } = context;
|
||||
if (!branchName) return { success: false, error: 'No branch name for merge' };
|
||||
|
||||
logger.info(`Attempting auto-merge for feature ${featureId} (branch: ${branchName})`);
|
||||
try {
|
||||
const response = await fetch(`http://localhost:${this.serverPort}/api/worktree/merge`, {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({
|
||||
projectPath,
|
||||
branchName,
|
||||
worktreePath,
|
||||
targetBranch: 'main',
|
||||
options: { deleteWorktreeAndBranch: false },
|
||||
}),
|
||||
});
|
||||
|
||||
if (!response) {
|
||||
return { success: false, error: 'No response from merge endpoint' };
|
||||
}
|
||||
|
||||
// Defensively parse JSON response
|
||||
let data: { success: boolean; hasConflicts?: boolean; error?: string };
|
||||
try {
|
||||
data = (await response.json()) as {
|
||||
success: boolean;
|
||||
hasConflicts?: boolean;
|
||||
error?: string;
|
||||
};
|
||||
} catch (parseError) {
|
||||
logger.error(`Failed to parse merge response:`, parseError);
|
||||
return { success: false, error: 'Invalid response from merge endpoint' };
|
||||
}
|
||||
|
||||
if (!response.ok) {
|
||||
if (data.hasConflicts) {
|
||||
await this.updateFeatureStatusFn(projectPath, featureId, 'merge_conflict');
|
||||
this.eventBus.emitAutoModeEvent('pipeline_merge_conflict', {
|
||||
featureId,
|
||||
branchName,
|
||||
projectPath,
|
||||
});
|
||||
return { success: false, hasConflicts: true, needsAgentResolution: true };
|
||||
}
|
||||
return { success: false, error: data.error };
|
||||
}
|
||||
|
||||
logger.info(`Auto-merge successful for feature ${featureId}`);
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_complete', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
branchName,
|
||||
passes: true,
|
||||
message: 'Pipeline completed and merged',
|
||||
projectPath,
|
||||
});
|
||||
return { success: true };
|
||||
} catch (error) {
|
||||
logger.error(`Merge failed for ${featureId}:`, error);
|
||||
return { success: false, error: (error as Error).message };
|
||||
}
|
||||
}
|
||||
|
||||
/** Build a concise test failure summary for the agent */
|
||||
buildTestFailureSummary(scrollback: string): string {
|
||||
const lines = scrollback.split('\n');
|
||||
const failedTests: string[] = [];
|
||||
let passCount = 0,
|
||||
failCount = 0;
|
||||
|
||||
for (const line of lines) {
|
||||
const trimmed = line.trim();
|
||||
if (trimmed.includes('FAIL') || trimmed.includes('FAILED')) {
|
||||
const match = trimmed.match(/(?:FAIL|FAILED)\s+(.+)/);
|
||||
if (match) failedTests.push(match[1].trim());
|
||||
failCount++;
|
||||
} else if (trimmed.includes('PASS') || trimmed.includes('PASSED')) passCount++;
|
||||
if (trimmed.match(/^>\s+.*\.(test|spec)\./)) failedTests.push(trimmed.replace(/^>\s+/, ''));
|
||||
if (
|
||||
trimmed.includes('AssertionError') ||
|
||||
trimmed.includes('toBe') ||
|
||||
trimmed.includes('toEqual')
|
||||
)
|
||||
failedTests.push(trimmed);
|
||||
}
|
||||
|
||||
const unique = [...new Set(failedTests)].slice(0, 10);
|
||||
return `Test Results: ${passCount} passed, ${failCount} failed.\n\nFailed tests:\n${unique.map((t) => `- ${t}`).join('\n')}\n\nOutput (last 2000 chars):\n${scrollback.slice(-2000)}`;
|
||||
}
|
||||
|
||||
/** Extract failed test names from scrollback */
|
||||
private extractFailedTestNames(scrollback: string): string[] {
|
||||
const failedTests: string[] = [];
|
||||
for (const line of scrollback.split('\n')) {
|
||||
const trimmed = line.trim();
|
||||
if (trimmed.includes('FAIL') || trimmed.includes('FAILED')) {
|
||||
const match = trimmed.match(/(?:FAIL|FAILED)\s+(.+)/);
|
||||
if (match) failedTests.push(match[1].trim());
|
||||
}
|
||||
}
|
||||
return [...new Set(failedTests)].slice(0, 20);
|
||||
}
|
||||
}
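
Note that the constructor takes plain callback types (`UpdateFeatureStatusFn`, `BuildFeaturePromptFn`, `ExecuteFeatureFn`, `RunAgentFn`) rather than a reference back to `AutoModeService`, so the orchestrator can be wired and tested in isolation. A minimal wiring sketch, assuming the surrounding services already exist; the arrow-function callbacks and helper names below are illustrative placeholders, not code from the deleted file:

```typescript
// Illustrative wiring only - the callback implementations are placeholders.
const orchestrator = new PipelineOrchestrator(
  eventBus,              // TypedEventBus
  featureStateManager,   // FeatureStateManager
  agentExecutor,         // AgentExecutor
  testRunnerService,     // TestRunnerService
  worktreeResolver,      // WorktreeResolver
  concurrencyManager,    // ConcurrencyManager
  settingsService,       // SettingsService | null
  (projectPath, featureId, status) =>
    featureStateManager.updateFeatureStatus(projectPath, featureId, status),
  loadContextFiles,      // from @automaker/utils
  (feature, prompts) => buildFeaturePrompt(feature, prompts),      // placeholder helper
  (projectPath, featureId, useWorktrees) =>
    executeFeature(projectPath, featureId, useWorktrees, false),   // placeholder helper
  runAgent,              // placeholder RunAgentFn implementation
  3008                   // serverPort used by attemptMerge()
);
```

With this shape, `executePipeline`, `resumeFromStep`, and `attemptMerge` only touch their injected collaborators, which keeps the step-execution, test, and merge paths independently testable.
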
@@ -1,72 +0,0 @@
|
||||
/**
|
||||
* Pipeline Types - Type definitions for PipelineOrchestrator
|
||||
*/
|
||||
|
||||
import type { Feature, PipelineStep, PipelineConfig } from '@automaker/types';
|
||||
|
||||
export interface PipelineContext {
|
||||
projectPath: string;
|
||||
featureId: string;
|
||||
feature: Feature;
|
||||
steps: PipelineStep[];
|
||||
workDir: string;
|
||||
worktreePath: string | null;
|
||||
branchName: string | null;
|
||||
abortController: AbortController;
|
||||
autoLoadClaudeMd: boolean;
|
||||
testAttempts: number;
|
||||
maxTestAttempts: number;
|
||||
}
|
||||
|
||||
export interface PipelineStatusInfo {
|
||||
isPipeline: boolean;
|
||||
stepId: string | null;
|
||||
stepIndex: number;
|
||||
totalSteps: number;
|
||||
step: PipelineStep | null;
|
||||
config: PipelineConfig | null;
|
||||
}
|
||||
|
||||
export interface StepResult {
|
||||
success: boolean;
|
||||
testsPassed?: boolean;
|
||||
message?: string;
|
||||
}
|
||||
|
||||
export interface MergeResult {
|
||||
success: boolean;
|
||||
hasConflicts?: boolean;
|
||||
needsAgentResolution?: boolean;
|
||||
error?: string;
|
||||
}
|
||||
|
||||
export type UpdateFeatureStatusFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
status: string
|
||||
) => Promise<void>;
|
||||
|
||||
export type BuildFeaturePromptFn = (
|
||||
feature: Feature,
|
||||
prompts: { implementationInstructions: string; playwrightVerificationInstructions: string }
|
||||
) => string;
|
||||
|
||||
export type ExecuteFeatureFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees: boolean,
|
||||
useScreenshots: boolean,
|
||||
model?: string,
|
||||
options?: { _calledInternally?: boolean }
|
||||
) => Promise<void>;
|
||||
|
||||
export type RunAgentFn = (
|
||||
workDir: string,
|
||||
featureId: string,
|
||||
prompt: string,
|
||||
abortController: AbortController,
|
||||
projectPath: string,
|
||||
imagePaths?: string[],
|
||||
model?: string,
|
||||
options?: Record<string, unknown>
|
||||
) => Promise<void>;
|
||||
@@ -1,323 +0,0 @@
|
||||
/**
|
||||
* PlanApprovalService - Manages plan approval workflow with timeout and recovery
|
||||
*
|
||||
* Key behaviors:
|
||||
* - Timeout stored in closure, wrapped resolve/reject ensures cleanup
|
||||
* - Recovery returns needsRecovery flag (caller handles execution)
|
||||
* - Auto-reject on timeout (safety feature, not auto-approve)
|
||||
*/
|
||||
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import type { TypedEventBus } from './typed-event-bus.js';
|
||||
import type { FeatureStateManager } from './feature-state-manager.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
|
||||
const logger = createLogger('PlanApprovalService');
|
||||
|
||||
/** Result returned when approval is resolved */
|
||||
export interface PlanApprovalResult {
|
||||
approved: boolean;
|
||||
editedPlan?: string;
|
||||
feedback?: string;
|
||||
}
|
||||
|
||||
/** Result returned from resolveApproval method */
|
||||
export interface ResolveApprovalResult {
|
||||
success: boolean;
|
||||
error?: string;
|
||||
needsRecovery?: boolean;
|
||||
}
|
||||
|
||||
/** Represents an orphaned approval that needs recovery after server restart */
|
||||
export interface OrphanedApproval {
|
||||
featureId: string;
|
||||
projectPath: string;
|
||||
generatedAt?: string;
|
||||
planContent?: string;
|
||||
}
|
||||
|
||||
/** Internal: timeoutId stored in closure, NOT in this object */
|
||||
interface PendingApproval {
|
||||
resolve: (result: PlanApprovalResult) => void;
|
||||
reject: (error: Error) => void;
|
||||
featureId: string;
|
||||
projectPath: string;
|
||||
}
|
||||
|
||||
/** Default timeout: 30 minutes */
|
||||
const DEFAULT_APPROVAL_TIMEOUT_MS = 30 * 60 * 1000;
|
||||
|
||||
/**
|
||||
* PlanApprovalService handles the plan approval workflow with lifecycle management.
|
||||
*/
|
||||
export class PlanApprovalService {
|
||||
private pendingApprovals = new Map<string, PendingApproval>();
|
||||
private eventBus: TypedEventBus;
|
||||
private featureStateManager: FeatureStateManager;
|
||||
private settingsService: SettingsService | null;
|
||||
|
||||
constructor(
|
||||
eventBus: TypedEventBus,
|
||||
featureStateManager: FeatureStateManager,
|
||||
settingsService: SettingsService | null
|
||||
) {
|
||||
this.eventBus = eventBus;
|
||||
this.featureStateManager = featureStateManager;
|
||||
this.settingsService = settingsService;
|
||||
}
|
||||
|
||||
/** Generate project-scoped key to prevent collisions across projects */
|
||||
private approvalKey(projectPath: string, featureId: string): string {
|
||||
return `${projectPath}::${featureId}`;
|
||||
}
|
||||
|
||||
/** Wait for plan approval with timeout (default 30 min). Rejects on timeout/cancellation. */
|
||||
async waitForApproval(featureId: string, projectPath: string): Promise<PlanApprovalResult> {
|
||||
const timeoutMs = await this.getTimeoutMs(projectPath);
|
||||
const timeoutMinutes = Math.round(timeoutMs / 60000);
|
||||
const key = this.approvalKey(projectPath, featureId);
|
||||
|
||||
logger.info(`Registering pending approval for feature ${featureId} in project ${projectPath}`);
|
||||
logger.info(
|
||||
`Current pending approvals: ${Array.from(this.pendingApprovals.keys()).join(', ') || 'none'}`
|
||||
);
|
||||
|
||||
return new Promise((resolve, reject) => {
|
||||
// Set up timeout to prevent indefinite waiting and memory leaks
|
||||
// timeoutId stored in closure, NOT in PendingApproval object
|
||||
const timeoutId = setTimeout(() => {
|
||||
const pending = this.pendingApprovals.get(key);
|
||||
if (pending) {
|
||||
logger.warn(
|
||||
`Plan approval for feature ${featureId} timed out after ${timeoutMinutes} minutes`
|
||||
);
|
||||
this.pendingApprovals.delete(key);
|
||||
reject(
|
||||
new Error(
|
||||
`Plan approval timed out after ${timeoutMinutes} minutes - feature execution cancelled`
|
||||
)
|
||||
);
|
||||
}
|
||||
}, timeoutMs);
|
||||
|
||||
// Wrap resolve/reject to clear timeout when approval is resolved
|
||||
// This ensures timeout is ALWAYS cleared on any resolution path
|
||||
const wrappedResolve = (result: PlanApprovalResult) => {
|
||||
clearTimeout(timeoutId);
|
||||
resolve(result);
|
||||
};
|
||||
|
||||
const wrappedReject = (error: Error) => {
|
||||
clearTimeout(timeoutId);
|
||||
reject(error);
|
||||
};
|
||||
|
||||
this.pendingApprovals.set(key, {
|
||||
resolve: wrappedResolve,
|
||||
reject: wrappedReject,
|
||||
featureId,
|
||||
projectPath,
|
||||
});
|
||||
|
||||
logger.info(
|
||||
`Pending approval registered for feature ${featureId} (timeout: ${timeoutMinutes} minutes)`
|
||||
);
|
||||
});
|
||||
}
|
||||
|
||||
/** Resolve approval. Recovery path: returns needsRecovery=true if planSpec.status='generated'. */
|
||||
async resolveApproval(
|
||||
featureId: string,
|
||||
approved: boolean,
|
||||
options?: { editedPlan?: string; feedback?: string; projectPath?: string }
|
||||
): Promise<ResolveApprovalResult> {
|
||||
const { editedPlan, feedback, projectPath: projectPathFromClient } = options ?? {};
|
||||
|
||||
logger.info(`resolveApproval called for feature ${featureId}, approved=${approved}`);
|
||||
logger.info(
|
||||
`Current pending approvals: ${Array.from(this.pendingApprovals.keys()).join(', ') || 'none'}`
|
||||
);
|
||||
|
||||
// Try to find pending approval using project-scoped key if projectPath is available
|
||||
let foundKey: string | undefined;
|
||||
let pending: PendingApproval | undefined;
|
||||
|
||||
if (projectPathFromClient) {
|
||||
foundKey = this.approvalKey(projectPathFromClient, featureId);
|
||||
pending = this.pendingApprovals.get(foundKey);
|
||||
} else {
|
||||
// Fallback: search by featureId (backward compatibility)
|
||||
for (const [key, approval] of this.pendingApprovals) {
|
||||
if (approval.featureId === featureId) {
|
||||
foundKey = key;
|
||||
pending = approval;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (!pending) {
|
||||
logger.info(`No pending approval in Map for feature ${featureId}`);
|
||||
|
||||
// RECOVERY: If no pending approval but we have projectPath from client,
|
||||
// check if feature's planSpec.status is 'generated' and handle recovery
|
||||
if (projectPathFromClient) {
|
||||
logger.info(`Attempting recovery with projectPath: ${projectPathFromClient}`);
|
||||
const feature = await this.featureStateManager.loadFeature(
|
||||
projectPathFromClient,
|
||||
featureId
|
||||
);
|
||||
|
||||
if (feature?.planSpec?.status === 'generated') {
|
||||
logger.info(`Feature ${featureId} has planSpec.status='generated', performing recovery`);
|
||||
|
||||
if (approved) {
|
||||
// Update planSpec to approved
|
||||
await this.featureStateManager.updateFeaturePlanSpec(projectPathFromClient, featureId, {
|
||||
status: 'approved',
|
||||
approvedAt: new Date().toISOString(),
|
||||
reviewedByUser: true,
|
||||
content: editedPlan || feature.planSpec.content,
|
||||
});
|
||||
|
||||
logger.info(`Recovery approval complete for feature ${featureId}`);
|
||||
|
||||
// Return needsRecovery flag - caller (AutoModeService) handles execution
|
||||
return { success: true, needsRecovery: true };
|
||||
} else {
|
||||
// Rejection recovery
|
||||
await this.featureStateManager.updateFeaturePlanSpec(projectPathFromClient, featureId, {
|
||||
status: 'rejected',
|
||||
reviewedByUser: true,
|
||||
});
|
||||
|
||||
await this.featureStateManager.updateFeatureStatus(
|
||||
projectPathFromClient,
|
||||
featureId,
|
||||
'backlog'
|
||||
);
|
||||
|
||||
this.eventBus.emitAutoModeEvent('plan_rejected', {
|
||||
featureId,
|
||||
projectPath: projectPathFromClient,
|
||||
feedback,
|
||||
});
|
||||
|
||||
return { success: true };
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
logger.info(
|
||||
`ERROR: No pending approval found for feature ${featureId} and recovery not possible`
|
||||
);
|
||||
return {
|
||||
success: false,
|
||||
error: `No pending approval for feature ${featureId}`,
|
||||
};
|
||||
}
|
||||
|
||||
logger.info(`Found pending approval for feature ${featureId}, proceeding...`);
|
||||
|
||||
const { projectPath } = pending;
|
||||
|
||||
// Update feature's planSpec status
|
||||
await this.featureStateManager.updateFeaturePlanSpec(projectPath, featureId, {
|
||||
status: approved ? 'approved' : 'rejected',
|
||||
approvedAt: approved ? new Date().toISOString() : undefined,
|
||||
reviewedByUser: true,
|
||||
content: editedPlan, // Update content if user provided an edited version
|
||||
});
|
||||
|
||||
// If rejected with feedback, emit event so client knows the rejection reason
|
||||
if (!approved && feedback) {
|
||||
this.eventBus.emitAutoModeEvent('plan_rejected', {
|
||||
featureId,
|
||||
projectPath,
|
||||
feedback,
|
||||
});
|
||||
}
|
||||
|
||||
// Resolve the promise with all data including feedback
|
||||
// This triggers the wrapped resolve which clears the timeout
|
||||
pending.resolve({ approved, editedPlan, feedback });
|
||||
if (foundKey) {
|
||||
this.pendingApprovals.delete(foundKey);
|
||||
}
|
||||
|
||||
return { success: true };
|
||||
}
|
||||
|
||||
/** Cancel approval (e.g., when feature stopped). Timeout cleared via wrapped reject. */
|
||||
cancelApproval(featureId: string, projectPath?: string): void {
|
||||
logger.info(`cancelApproval called for feature ${featureId}`);
|
||||
logger.info(
|
||||
`Current pending approvals: ${Array.from(this.pendingApprovals.keys()).join(', ') || 'none'}`
|
||||
);
|
||||
|
||||
// If projectPath provided, use project-scoped key; otherwise search by featureId
|
||||
let foundKey: string | undefined;
|
||||
let pending: PendingApproval | undefined;
|
||||
|
||||
if (projectPath) {
|
||||
foundKey = this.approvalKey(projectPath, featureId);
|
||||
pending = this.pendingApprovals.get(foundKey);
|
||||
} else {
|
||||
// Fallback: search for any approval with this featureId (backward compatibility)
|
||||
for (const [key, approval] of this.pendingApprovals) {
|
||||
if (approval.featureId === featureId) {
|
||||
foundKey = key;
|
||||
pending = approval;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (pending && foundKey) {
|
||||
logger.info(`Found and cancelling pending approval for feature ${featureId}`);
|
||||
// Wrapped reject clears timeout automatically
|
||||
pending.reject(new Error('Plan approval cancelled - feature was stopped'));
|
||||
this.pendingApprovals.delete(foundKey);
|
||||
} else {
|
||||
logger.info(`No pending approval to cancel for feature ${featureId}`);
|
||||
}
|
||||
}
|
||||
|
||||
/** Check if a feature has a pending plan approval. */
|
||||
hasPendingApproval(featureId: string, projectPath?: string): boolean {
|
||||
if (projectPath) {
|
||||
return this.pendingApprovals.has(this.approvalKey(projectPath, featureId));
|
||||
}
|
||||
// Fallback: search by featureId (backward compatibility)
|
||||
for (const approval of this.pendingApprovals.values()) {
|
||||
if (approval.featureId === featureId) {
|
||||
return true;
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/** Get timeout from project settings or default (30 min). */
|
||||
private async getTimeoutMs(projectPath: string): Promise<number> {
|
||||
if (!this.settingsService) {
|
||||
return DEFAULT_APPROVAL_TIMEOUT_MS;
|
||||
}
|
||||
|
||||
try {
|
||||
const projectSettings = await this.settingsService.getProjectSettings(projectPath);
|
||||
// Check for planApprovalTimeoutMs in project settings
|
||||
// eslint-disable-next-line @typescript-eslint/no-explicit-any
|
||||
const timeoutMs = (projectSettings as any).planApprovalTimeoutMs;
|
||||
if (typeof timeoutMs === 'number' && timeoutMs > 0) {
|
||||
return timeoutMs;
|
||||
}
|
||||
} catch (error) {
|
||||
logger.warn(
|
||||
`Failed to get project settings for ${projectPath}, using default timeout`,
|
||||
error
|
||||
);
|
||||
}
|
||||
|
||||
return DEFAULT_APPROVAL_TIMEOUT_MS;
|
||||
}
|
||||
}
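
A short usage sketch of the approval round trip described above; the service instances are assumed to already exist, and the feature and project identifiers are illustrative:

```typescript
// Hedged sketch - identifiers and paths are illustrative, not from the codebase.
const approvals = new PlanApprovalService(eventBus, featureStateManager, settingsService);

// Execution side: block until the user decides or the timeout auto-rejects.
async function awaitPlanDecision(projectPath: string, featureId: string): Promise<boolean> {
  try {
    const result = await approvals.waitForApproval(featureId, projectPath);
    return result.approved; // result.editedPlan / result.feedback carry the user's input
  } catch {
    // Timeout or cancelApproval() rejects the promise - treat as not approved.
    return false;
  }
}

// API side: forward the user's decision from the route handler.
const outcome = await approvals.resolveApproval('feat-001', true, {
  projectPath: '/home/user/projects/demo',
});
if (outcome.needsRecovery) {
  // Server restarted while the plan was pending; the caller restarts execution itself.
}
```
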
@@ -1,302 +0,0 @@
|
||||
/**
|
||||
* RecoveryService - Crash recovery and feature resumption
|
||||
*/
|
||||
|
||||
import path from 'path';
|
||||
import type { Feature, FeatureStatusWithPipeline } from '@automaker/types';
|
||||
import { DEFAULT_MAX_CONCURRENCY } from '@automaker/types';
|
||||
import {
|
||||
createLogger,
|
||||
readJsonWithRecovery,
|
||||
logRecoveryWarning,
|
||||
DEFAULT_BACKUP_COUNT,
|
||||
} from '@automaker/utils';
|
||||
import {
|
||||
getFeatureDir,
|
||||
getFeaturesDir,
|
||||
getExecutionStatePath,
|
||||
ensureAutomakerDir,
|
||||
} from '@automaker/platform';
|
||||
import * as secureFs from '../lib/secure-fs.js';
|
||||
import { getPromptCustomization } from '../lib/settings-helpers.js';
|
||||
import type { TypedEventBus } from './typed-event-bus.js';
|
||||
import type { ConcurrencyManager, RunningFeature } from './concurrency-manager.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
import type { PipelineStatusInfo } from './pipeline-orchestrator.js';
|
||||
|
||||
const logger = createLogger('RecoveryService');
|
||||
|
||||
export interface ExecutionState {
|
||||
version: 1;
|
||||
autoLoopWasRunning: boolean;
|
||||
maxConcurrency: number;
|
||||
projectPath: string;
|
||||
branchName: string | null;
|
||||
runningFeatureIds: string[];
|
||||
savedAt: string;
|
||||
}
|
||||
|
||||
export const DEFAULT_EXECUTION_STATE: ExecutionState = {
|
||||
version: 1,
|
||||
autoLoopWasRunning: false,
|
||||
maxConcurrency: DEFAULT_MAX_CONCURRENCY,
|
||||
projectPath: '',
|
||||
branchName: null,
|
||||
runningFeatureIds: [],
|
||||
savedAt: '',
|
||||
};
|
||||
|
||||
export type ExecuteFeatureFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees: boolean,
|
||||
isAutoMode: boolean,
|
||||
providedWorktreePath?: string,
|
||||
options?: { continuationPrompt?: string; _calledInternally?: boolean }
|
||||
) => Promise<void>;
|
||||
export type LoadFeatureFn = (projectPath: string, featureId: string) => Promise<Feature | null>;
|
||||
export type DetectPipelineStatusFn = (
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
status: FeatureStatusWithPipeline
|
||||
) => Promise<PipelineStatusInfo>;
|
||||
export type ResumePipelineFn = (
|
||||
projectPath: string,
|
||||
feature: Feature,
|
||||
useWorktrees: boolean,
|
||||
pipelineInfo: PipelineStatusInfo
|
||||
) => Promise<void>;
|
||||
export type IsFeatureRunningFn = (featureId: string) => boolean;
|
||||
export type AcquireRunningFeatureFn = (options: {
|
||||
featureId: string;
|
||||
projectPath: string;
|
||||
isAutoMode: boolean;
|
||||
allowReuse?: boolean;
|
||||
}) => RunningFeature;
|
||||
export type ReleaseRunningFeatureFn = (featureId: string) => void;
|
||||
|
||||
export class RecoveryService {
|
||||
constructor(
|
||||
private eventBus: TypedEventBus,
|
||||
private concurrencyManager: ConcurrencyManager,
|
||||
private settingsService: SettingsService | null,
|
||||
private executeFeatureFn: ExecuteFeatureFn,
|
||||
private loadFeatureFn: LoadFeatureFn,
|
||||
private detectPipelineStatusFn: DetectPipelineStatusFn,
|
||||
private resumePipelineFn: ResumePipelineFn,
|
||||
private isFeatureRunningFn: IsFeatureRunningFn,
|
||||
private acquireRunningFeatureFn: AcquireRunningFeatureFn,
|
||||
private releaseRunningFeatureFn: ReleaseRunningFeatureFn
|
||||
) {}
|
||||
|
||||
async saveExecutionStateForProject(
|
||||
projectPath: string,
|
||||
branchName: string | null,
|
||||
maxConcurrency: number
|
||||
): Promise<void> {
|
||||
try {
|
||||
await ensureAutomakerDir(projectPath);
|
||||
const runningFeatureIds = this.concurrencyManager
|
||||
.getAllRunning()
|
||||
.filter((f) => f.projectPath === projectPath)
|
||||
.map((f) => f.featureId);
|
||||
const state: ExecutionState = {
|
||||
version: 1,
|
||||
autoLoopWasRunning: true,
|
||||
maxConcurrency,
|
||||
projectPath,
|
||||
branchName,
|
||||
runningFeatureIds,
|
||||
savedAt: new Date().toISOString(),
|
||||
};
|
||||
await secureFs.writeFile(
|
||||
getExecutionStatePath(projectPath),
|
||||
JSON.stringify(state, null, 2),
|
||||
'utf-8'
|
||||
);
|
||||
} catch {
|
||||
/* ignore */
|
||||
}
|
||||
}
|
||||
|
||||
async saveExecutionState(
|
||||
projectPath: string,
|
||||
autoLoopWasRunning = false,
|
||||
maxConcurrency = DEFAULT_MAX_CONCURRENCY
|
||||
): Promise<void> {
|
||||
try {
|
||||
await ensureAutomakerDir(projectPath);
|
||||
const state: ExecutionState = {
|
||||
version: 1,
|
||||
autoLoopWasRunning,
|
||||
maxConcurrency,
|
||||
projectPath,
|
||||
branchName: null,
|
||||
runningFeatureIds: this.concurrencyManager.getAllRunning().map((rf) => rf.featureId),
|
||||
savedAt: new Date().toISOString(),
|
||||
};
|
||||
await secureFs.writeFile(
|
||||
getExecutionStatePath(projectPath),
|
||||
JSON.stringify(state, null, 2),
|
||||
'utf-8'
|
||||
);
|
||||
} catch {
|
||||
/* ignore */
|
||||
}
|
||||
}
|
||||
|
||||
async loadExecutionState(projectPath: string): Promise<ExecutionState> {
|
||||
try {
|
||||
const content = (await secureFs.readFile(
|
||||
getExecutionStatePath(projectPath),
|
||||
'utf-8'
|
||||
)) as string;
|
||||
return JSON.parse(content) as ExecutionState;
|
||||
} catch {
|
||||
return DEFAULT_EXECUTION_STATE;
|
||||
}
|
||||
}
|
||||
|
||||
async clearExecutionState(projectPath: string, _branchName: string | null = null): Promise<void> {
|
||||
try {
|
||||
await secureFs.unlink(getExecutionStatePath(projectPath));
|
||||
} catch {
|
||||
/* ignore */
|
||||
}
|
||||
}
|
||||
|
||||
async contextExists(projectPath: string, featureId: string): Promise<boolean> {
|
||||
try {
|
||||
await secureFs.access(path.join(getFeatureDir(projectPath, featureId), 'agent-output.md'));
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
private async executeFeatureWithContext(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
context: string,
|
||||
useWorktrees: boolean
|
||||
): Promise<void> {
|
||||
const feature = await this.loadFeatureFn(projectPath, featureId);
|
||||
if (!feature) throw new Error(`Feature ${featureId} not found`);
|
||||
const prompts = await getPromptCustomization(this.settingsService, '[RecoveryService]');
|
||||
const featurePrompt = `## Feature Implementation Task\n\n**Feature ID:** ${feature.id}\n**Title:** ${feature.title || 'Untitled Feature'}\n**Description:** ${feature.description}\n`;
|
||||
let prompt = prompts.taskExecution.resumeFeatureTemplate;
|
||||
prompt = prompt
|
||||
.replace(/\{\{featurePrompt\}\}/g, featurePrompt)
|
||||
.replace(/\{\{previousContext\}\}/g, context);
|
||||
return this.executeFeatureFn(projectPath, featureId, useWorktrees, false, undefined, {
|
||||
continuationPrompt: prompt,
|
||||
_calledInternally: true,
|
||||
});
|
||||
}
|
||||
|
||||
async resumeFeature(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
useWorktrees = false,
|
||||
_calledInternally = false
|
||||
): Promise<void> {
|
||||
if (!_calledInternally && this.isFeatureRunningFn(featureId)) return;
|
||||
this.acquireRunningFeatureFn({
|
||||
featureId,
|
||||
projectPath,
|
||||
isAutoMode: false,
|
||||
allowReuse: _calledInternally,
|
||||
});
|
||||
try {
|
||||
const feature = await this.loadFeatureFn(projectPath, featureId);
|
||||
if (!feature) throw new Error(`Feature ${featureId} not found`);
|
||||
const pipelineInfo = await this.detectPipelineStatusFn(
|
||||
projectPath,
|
||||
featureId,
|
||||
(feature.status || '') as FeatureStatusWithPipeline
|
||||
);
|
||||
if (pipelineInfo.isPipeline)
|
||||
return await this.resumePipelineFn(projectPath, feature, useWorktrees, pipelineInfo);
|
||||
const hasContext = await this.contextExists(projectPath, featureId);
|
||||
if (hasContext) {
|
||||
const context = (await secureFs.readFile(
|
||||
path.join(getFeatureDir(projectPath, featureId), 'agent-output.md'),
|
||||
'utf-8'
|
||||
)) as string;
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_resuming', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
projectPath,
|
||||
hasContext: true,
|
||||
message: `Resuming feature "${feature.title}" from saved context`,
|
||||
});
|
||||
return await this.executeFeatureWithContext(projectPath, featureId, context, useWorktrees);
|
||||
}
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_feature_resuming', {
|
||||
featureId,
|
||||
featureName: feature.title,
|
||||
projectPath,
|
||||
hasContext: false,
|
||||
message: `Starting fresh execution for interrupted feature "${feature.title}"`,
|
||||
});
|
||||
return await this.executeFeatureFn(projectPath, featureId, useWorktrees, false, undefined, {
|
||||
_calledInternally: true,
|
||||
});
|
||||
} finally {
|
||||
this.releaseRunningFeatureFn(featureId);
|
||||
}
|
||||
}
|
||||
|
||||
async resumeInterruptedFeatures(projectPath: string): Promise<void> {
|
||||
const featuresDir = getFeaturesDir(projectPath);
|
||||
try {
|
||||
const entries = await secureFs.readdir(featuresDir, { withFileTypes: true });
|
||||
const featuresWithContext: Feature[] = [];
|
||||
const featuresWithoutContext: Feature[] = [];
|
||||
for (const entry of entries) {
|
||||
if (entry.isDirectory()) {
|
||||
const result = await readJsonWithRecovery<Feature | null>(
|
||||
path.join(featuresDir, entry.name, 'feature.json'),
|
||||
null,
|
||||
{ maxBackups: DEFAULT_BACKUP_COUNT, autoRestore: true }
|
||||
);
|
||||
logRecoveryWarning(result, `Feature ${entry.name}`, logger);
|
||||
const feature = result.data;
|
||||
if (!feature) continue;
|
||||
if (
|
||||
feature.status === 'in_progress' ||
|
||||
(feature.status && feature.status.startsWith('pipeline_'))
|
||||
) {
|
||||
(await this.contextExists(projectPath, feature.id))
|
||||
? featuresWithContext.push(feature)
|
||||
: featuresWithoutContext.push(feature);
|
||||
}
|
||||
}
|
||||
}
|
||||
const allInterruptedFeatures = [...featuresWithContext, ...featuresWithoutContext];
|
||||
if (allInterruptedFeatures.length === 0) return;
|
||||
this.eventBus.emitAutoModeEvent('auto_mode_resuming_features', {
|
||||
message: `Resuming ${allInterruptedFeatures.length} interrupted feature(s)`,
|
||||
projectPath,
|
||||
featureIds: allInterruptedFeatures.map((f) => f.id),
|
||||
features: allInterruptedFeatures.map((f) => ({
|
||||
id: f.id,
|
||||
title: f.title,
|
||||
status: f.status,
|
||||
branchName: f.branchName ?? null,
|
||||
hasContext: featuresWithContext.some((fc) => fc.id === f.id),
|
||||
})),
|
||||
});
|
||||
for (const feature of allInterruptedFeatures) {
|
||||
try {
|
||||
if (!this.isFeatureRunningFn(feature.id))
|
||||
await this.resumeFeature(projectPath, feature.id, true);
|
||||
} catch {
|
||||
/* continue */
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
/* ignore */
|
||||
}
|
||||
}
|
||||
}
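
For reference, the state file written by `saveExecutionState`/`saveExecutionStateForProject` and read back by `loadExecutionState` is a small JSON document matching the `ExecutionState` interface above. A representative payload, with all values invented for illustration:

```typescript
// Illustrative ExecutionState payload - paths, IDs, and values are made up.
const example: ExecutionState = {
  version: 1,
  autoLoopWasRunning: true,
  maxConcurrency: 3,
  projectPath: '/home/user/projects/demo',
  branchName: 'feature/login-form',
  runningFeatureIds: ['feat-001', 'feat-002'],
  savedAt: '2025-01-01T12:00:00.000Z',
};
```

Note that `resumeInterruptedFeatures` does not rely on this file; it rescans the features directory for `in_progress` and `pipeline_*` statuses and resumes each feature with or without saved context.
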
@@ -1,227 +0,0 @@
|
||||
/**
|
||||
* Spec Parser - Pure functions for parsing spec content and detecting markers
|
||||
*
|
||||
* Extracts tasks from generated specs, detects progress markers,
|
||||
* and extracts summary content from various formats.
|
||||
*/
|
||||
|
||||
import type { ParsedTask } from '@automaker/types';
|
||||
|
||||
/**
|
||||
* Parse a single task line
|
||||
* Format: - [ ] T###: Description | File: path/to/file
|
||||
*/
|
||||
function parseTaskLine(line: string, currentPhase?: string): ParsedTask | null {
|
||||
// Match pattern: - [ ] T###: Description | File: path
|
||||
const taskMatch = line.match(/- \[ \] (T\d{3}):\s*([^|]+)(?:\|\s*File:\s*(.+))?$/);
|
||||
if (!taskMatch) {
|
||||
// Try simpler pattern without file
|
||||
const simpleMatch = line.match(/- \[ \] (T\d{3}):\s*(.+)$/);
|
||||
if (simpleMatch) {
|
||||
return {
|
||||
id: simpleMatch[1],
|
||||
description: simpleMatch[2].trim(),
|
||||
phase: currentPhase,
|
||||
status: 'pending',
|
||||
};
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
return {
|
||||
id: taskMatch[1],
|
||||
description: taskMatch[2].trim(),
|
||||
filePath: taskMatch[3]?.trim(),
|
||||
phase: currentPhase,
|
||||
status: 'pending',
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse tasks from generated spec content
|
||||
* Looks for the ```tasks code block and extracts task lines
|
||||
* Format: - [ ] T###: Description | File: path/to/file
|
||||
*/
|
||||
export function parseTasksFromSpec(specContent: string): ParsedTask[] {
|
||||
const tasks: ParsedTask[] = [];
|
||||
|
||||
// Extract content within ```tasks ... ``` block
|
||||
const tasksBlockMatch = specContent.match(/```tasks\s*([\s\S]*?)```/);
|
||||
if (!tasksBlockMatch) {
|
||||
// Try fallback: look for task lines anywhere in content
|
||||
const taskLines = specContent.match(/- \[ \] T\d{3}:.*$/gm);
|
||||
if (!taskLines) {
|
||||
return tasks;
|
||||
}
|
||||
// Parse fallback task lines
|
||||
let currentPhase: string | undefined;
|
||||
for (const line of taskLines) {
|
||||
const parsed = parseTaskLine(line, currentPhase);
|
||||
if (parsed) {
|
||||
tasks.push(parsed);
|
||||
}
|
||||
}
|
||||
return tasks;
|
||||
}
|
||||
|
||||
const tasksContent = tasksBlockMatch[1];
|
||||
const lines = tasksContent.split('\n');
|
||||
|
||||
let currentPhase: string | undefined;
|
||||
|
||||
for (const line of lines) {
|
||||
const trimmedLine = line.trim();
|
||||
|
||||
// Check for phase header (e.g., "## Phase 1: Foundation")
|
||||
const phaseMatch = trimmedLine.match(/^##\s*(.+)$/);
|
||||
if (phaseMatch) {
|
||||
currentPhase = phaseMatch[1].trim();
|
||||
continue;
|
||||
}
|
||||
|
||||
// Check for task line
|
||||
if (trimmedLine.startsWith('- [ ]')) {
|
||||
const parsed = parseTaskLine(trimmedLine, currentPhase);
|
||||
if (parsed) {
|
||||
tasks.push(parsed);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return tasks;
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect [TASK_START] marker in text and extract task ID
|
||||
* Format: [TASK_START] T###: Description
|
||||
*/
|
||||
export function detectTaskStartMarker(text: string): string | null {
|
||||
const match = text.match(/\[TASK_START\]\s*(T\d{3})/);
|
||||
return match ? match[1] : null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect [TASK_COMPLETE] marker in text and extract task ID
|
||||
* Format: [TASK_COMPLETE] T###: Brief summary
|
||||
*/
|
||||
export function detectTaskCompleteMarker(text: string): string | null {
|
||||
const match = text.match(/\[TASK_COMPLETE\]\s*(T\d{3})/);
|
||||
return match ? match[1] : null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Detect [PHASE_COMPLETE] marker in text and extract phase number
|
||||
* Format: [PHASE_COMPLETE] Phase N complete
|
||||
*/
|
||||
export function detectPhaseCompleteMarker(text: string): number | null {
|
||||
const match = text.match(/\[PHASE_COMPLETE\]\s*Phase\s*(\d+)/i);
|
||||
return match ? parseInt(match[1], 10) : null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Fallback spec detection when [SPEC_GENERATED] marker is missing
|
||||
* Looks for structural elements that indicate a spec was generated.
|
||||
* This is especially important for non-Claude models that may not output
|
||||
* the explicit [SPEC_GENERATED] marker.
|
||||
*
|
||||
* @param text - The text content to check for spec structure
|
||||
* @returns true if the text appears to be a generated spec
|
||||
*/
|
||||
export function detectSpecFallback(text: string): boolean {
|
||||
// Check for key structural elements of a spec
|
||||
const hasTasksBlock = /```tasks[\s\S]*```/.test(text);
|
||||
const hasTaskLines = /- \[ \] T\d{3}:/.test(text);
|
||||
|
||||
// Check for common spec sections (case-insensitive)
|
||||
const hasAcceptanceCriteria = /acceptance criteria/i.test(text);
|
||||
const hasTechnicalContext = /technical context/i.test(text);
|
||||
const hasProblemStatement = /problem statement/i.test(text);
|
||||
const hasUserStory = /user story/i.test(text);
|
||||
// Additional patterns for different model outputs
|
||||
const hasGoal = /\*\*Goal\*\*:/i.test(text);
|
||||
const hasSolution = /\*\*Solution\*\*:/i.test(text);
|
||||
const hasImplementation = /implementation\s*(plan|steps|approach)/i.test(text);
|
||||
const hasOverview = /##\s*(overview|summary)/i.test(text);
|
||||
|
||||
// Spec is detected if we have task structure AND at least some spec content
|
||||
const hasTaskStructure = hasTasksBlock || hasTaskLines;
|
||||
const hasSpecContent =
|
||||
hasAcceptanceCriteria ||
|
||||
hasTechnicalContext ||
|
||||
hasProblemStatement ||
|
||||
hasUserStory ||
|
||||
hasGoal ||
|
||||
hasSolution ||
|
||||
hasImplementation ||
|
||||
hasOverview;
|
||||
|
||||
return hasTaskStructure && hasSpecContent;
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract summary from text content
|
||||
* Checks for multiple formats in order of priority:
|
||||
* 1. Explicit <summary> tags
|
||||
* 2. ## Summary section (markdown)
|
||||
* 3. **Goal**: section (lite planning mode)
|
||||
* 4. **Problem**: or **Problem Statement**: section (spec/full modes)
|
||||
* 5. **Solution**: section as fallback
|
||||
*
|
||||
* Note: Uses last match for each pattern to avoid stale summaries
|
||||
* when agent output accumulates across multiple runs.
|
||||
*
|
||||
* @param text - The text content to extract summary from
|
||||
* @returns The extracted summary string, or null if no summary found
|
||||
*/
|
||||
export function extractSummary(text: string): string | null {
|
||||
// Helper to truncate content to first paragraph with max length
|
||||
const truncate = (content: string, maxLength: number): string => {
|
||||
const firstPara = content.split(/\n\n/)[0];
|
||||
return firstPara.length > maxLength ? `${firstPara.substring(0, maxLength)}...` : firstPara;
|
||||
};
|
||||
|
||||
// Helper to get last match from matchAll results
|
||||
const getLastMatch = (matches: IterableIterator<RegExpMatchArray>): RegExpMatchArray | null => {
|
||||
const arr = [...matches];
|
||||
return arr.length > 0 ? arr[arr.length - 1] : null;
|
||||
};
|
||||
|
||||
// Check for explicit <summary> tags first (use last match to avoid stale summaries)
|
||||
const summaryMatches = text.matchAll(/<summary>([\s\S]*?)<\/summary>/g);
|
||||
const summaryMatch = getLastMatch(summaryMatches);
|
||||
if (summaryMatch) {
|
||||
return summaryMatch[1].trim();
|
||||
}
|
||||
|
||||
// Check for ## Summary section (use last match)
|
||||
const sectionMatches = text.matchAll(/##\s*Summary\s*\n+([\s\S]*?)(?=\n##|\n\*\*|$)/gi);
|
||||
const sectionMatch = getLastMatch(sectionMatches);
|
||||
if (sectionMatch) {
|
||||
return truncate(sectionMatch[1].trim(), 500);
|
||||
}
|
||||
|
||||
// Check for **Goal**: section (lite mode, use last match)
|
||||
const goalMatches = text.matchAll(/\*\*Goal\*\*:\s*(.+?)(?:\n|$)/gi);
|
||||
const goalMatch = getLastMatch(goalMatches);
|
||||
if (goalMatch) {
|
||||
return goalMatch[1].trim();
|
||||
}
|
||||
|
||||
// Check for **Problem**: or **Problem Statement**: section (spec/full modes, use last match)
|
||||
const problemMatches = text.matchAll(
|
||||
/\*\*Problem(?:\s*Statement)?\*\*:\s*([\s\S]*?)(?=\n\d+\.|\n\*\*|$)/gi
|
||||
);
|
||||
const problemMatch = getLastMatch(problemMatches);
|
||||
if (problemMatch) {
|
||||
return truncate(problemMatch[1].trim(), 500);
|
||||
}
|
||||
|
||||
// Check for **Solution**: section as fallback (use last match)
|
||||
const solutionMatches = text.matchAll(/\*\*Solution\*\*:\s*([\s\S]*?)(?=\n\d+\.|\n\*\*|$)/gi);
|
||||
const solutionMatch = getLastMatch(solutionMatches);
|
||||
if (solutionMatch) {
|
||||
return truncate(solutionMatch[1].trim(), 300);
|
||||
}
|
||||
|
||||
return null;
|
||||
}
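
A minimal sketch of how these parsers behave on agent output. The spec text and expected values below are invented for illustration, and the input deliberately exercises the fallback path (no fenced tasks block):

```typescript
// Illustrative input - exercises the fallback task-line scan (no tasks code block).
const specText = [
  '## Overview',
  'Add a login form with client-side validation.',
  '',
  '- [ ] T001: Create LoginForm component | File: src/components/LoginForm.tsx',
  '- [ ] T002: Add form validation',
].join('\n');

const tasks = parseTasksFromSpec(specText);
// -> two ParsedTask entries; T001 carries filePath, and phase stays undefined on the fallback path

detectTaskStartMarker('[TASK_START] T001: Create LoginForm component'); // 'T001'
detectTaskCompleteMarker('[TASK_COMPLETE] T001: component created');    // 'T001'
detectPhaseCompleteMarker('[PHASE_COMPLETE] Phase 1 complete');         // 1
detectSpecFallback(specText); // true - task lines plus an "## Overview" section
extractSummary('**Goal**: Ship the login form with validation');        // 'Ship the login form with validation'
```
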
@@ -1,108 +0,0 @@
|
||||
/**
|
||||
* TypedEventBus - Type-safe event emission wrapper for AutoModeService
|
||||
*
|
||||
* This class wraps the existing EventEmitter to provide type-safe event emission,
|
||||
* specifically encapsulating the `emitAutoModeEvent` pattern used throughout AutoModeService.
|
||||
*
|
||||
* Key behavior:
|
||||
* - emitAutoModeEvent wraps events in 'auto-mode:event' format for frontend consumption
|
||||
* - Preserves all existing event emission patterns for backward compatibility
|
||||
* - Frontend receives events in the exact same format as before (no breaking changes)
|
||||
*/
|
||||
|
||||
import type { EventEmitter, EventType, EventCallback } from '../lib/events.js';
|
||||
|
||||
/**
|
||||
* Auto-mode event types that can be emitted through the TypedEventBus.
|
||||
* These correspond to the event types expected by the frontend.
|
||||
*/
|
||||
export type AutoModeEventType =
|
||||
| 'auto_mode_started'
|
||||
| 'auto_mode_stopped'
|
||||
| 'auto_mode_idle'
|
||||
| 'auto_mode_error'
|
||||
| 'auto_mode_paused_failures'
|
||||
| 'auto_mode_feature_start'
|
||||
| 'auto_mode_feature_complete'
|
||||
| 'auto_mode_feature_resuming'
|
||||
| 'auto_mode_progress'
|
||||
| 'auto_mode_tool'
|
||||
| 'auto_mode_task_started'
|
||||
| 'auto_mode_task_complete'
|
||||
| 'auto_mode_task_status'
|
||||
| 'auto_mode_phase_complete'
|
||||
| 'auto_mode_summary'
|
||||
| 'auto_mode_resuming_features'
|
||||
| 'planning_started'
|
||||
| 'plan_approval_required'
|
||||
| 'plan_approved'
|
||||
| 'plan_auto_approved'
|
||||
| 'plan_rejected'
|
||||
| 'plan_revision_requested'
|
||||
| 'plan_revision_warning'
|
||||
| 'pipeline_step_started'
|
||||
| 'pipeline_step_complete'
|
||||
| string; // Allow other strings for extensibility
|
||||
|
||||
/**
|
||||
* TypedEventBus wraps an EventEmitter to provide type-safe event emission
|
||||
* with the auto-mode event wrapping pattern.
|
||||
*/
|
||||
export class TypedEventBus {
|
||||
private events: EventEmitter;
|
||||
|
||||
/**
|
||||
* Create a TypedEventBus wrapping an existing EventEmitter.
|
||||
* @param events - The underlying EventEmitter to wrap
|
||||
*/
|
||||
constructor(events: EventEmitter) {
|
||||
this.events = events;
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit a raw event directly to subscribers.
|
||||
* Use this for non-auto-mode events that don't need wrapping.
|
||||
* @param type - The event type
|
||||
* @param payload - The event payload
|
||||
*/
|
||||
emit(type: EventType, payload: unknown): void {
|
||||
this.events.emit(type, payload);
|
||||
}
|
||||
|
||||
/**
|
||||
* Emit an auto-mode event wrapped in the correct format for the client.
|
||||
* All auto-mode events are sent as type "auto-mode:event" with the actual
|
||||
* event type and data in the payload.
|
||||
*
|
||||
* This produces the exact same event format that the frontend expects:
|
||||
* { type: eventType, ...data }
|
||||
*
|
||||
* @param eventType - The auto-mode event type (e.g., 'auto_mode_started')
|
||||
* @param data - Additional data to include in the event payload
|
||||
*/
|
||||
emitAutoModeEvent(eventType: AutoModeEventType, data: Record<string, unknown>): void {
|
||||
// Wrap the event in auto-mode:event format expected by the client
|
||||
this.events.emit('auto-mode:event', {
|
||||
type: eventType,
|
||||
...data,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Subscribe to all events from the underlying emitter.
|
||||
* @param callback - Function called with (type, payload) for each event
|
||||
* @returns Unsubscribe function
|
||||
*/
|
||||
subscribe(callback: EventCallback): () => void {
|
||||
return this.events.subscribe(callback);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the underlying EventEmitter for cases where direct access is needed.
|
||||
* Use sparingly - prefer the typed methods when possible.
|
||||
* @returns The wrapped EventEmitter
|
||||
*/
|
||||
getUnderlyingEmitter(): EventEmitter {
|
||||
return this.events;
|
||||
}
|
||||
}
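
A short sketch of what subscribers actually receive: regardless of the auto-mode event type, the wire-level event type is always `auto-mode:event`, with the specific type folded into the payload (the feature ID below is illustrative):

```typescript
// Sketch only - 'feat-001' is an illustrative ID.
const bus = new TypedEventBus(events); // events: the existing EventEmitter from ../lib/events.js

const unsubscribe = bus.subscribe((type, payload) => {
  // type    === 'auto-mode:event'
  // payload === { type: 'auto_mode_progress', featureId: 'feat-001', content: 'Starting pipeline step 1/3' }
});

bus.emitAutoModeEvent('auto_mode_progress', {
  featureId: 'feat-001',
  content: 'Starting pipeline step 1/3',
});

unsubscribe();
```
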
@@ -1,170 +0,0 @@
|
||||
/**
|
||||
* WorktreeResolver - Git worktree discovery and resolution
|
||||
*
|
||||
* Extracted from AutoModeService to provide a standalone service for:
|
||||
* - Finding existing worktrees for a given branch
|
||||
* - Getting the current branch of a repository
|
||||
* - Listing all worktrees with their metadata
|
||||
*
|
||||
* Key behaviors:
|
||||
* - Parses `git worktree list --porcelain` output
|
||||
* - Always resolves paths to absolute (cross-platform compatibility)
|
||||
* - Handles detached HEAD and bare worktrees gracefully
|
||||
*/
|
||||
|
||||
import { exec } from 'child_process';
|
||||
import { promisify } from 'util';
|
||||
import path from 'path';
|
||||
|
||||
const execAsync = promisify(exec);
|
||||
|
||||
/**
|
||||
* Information about a git worktree
|
||||
*/
|
||||
export interface WorktreeInfo {
|
||||
/** Absolute path to the worktree directory */
|
||||
path: string;
|
||||
/** Branch name (without refs/heads/ prefix), or null if detached HEAD */
|
||||
branch: string | null;
|
||||
/** Whether this is the main worktree (first in git worktree list) */
|
||||
isMain: boolean;
|
||||
}
|
||||
|
||||
/**
|
||||
* WorktreeResolver handles git worktree discovery and path resolution.
|
||||
*
|
||||
* This service is responsible for:
|
||||
* 1. Finding existing worktrees by branch name
|
||||
* 2. Getting the current branch of a repository
|
||||
* 3. Listing all worktrees with normalized paths
|
||||
*/
|
||||
export class WorktreeResolver {
|
||||
/**
|
||||
* Get the current branch name for a git repository
|
||||
*
|
||||
* @param projectPath - Path to the git repository
|
||||
* @returns The current branch name, or null if not in a git repo or on detached HEAD
|
||||
*/
|
||||
async getCurrentBranch(projectPath: string): Promise<string | null> {
|
||||
try {
|
||||
const { stdout } = await execAsync('git branch --show-current', { cwd: projectPath });
|
||||
const branch = stdout.trim();
|
||||
return branch || null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Find an existing worktree for a given branch name
|
||||
*
|
||||
* @param projectPath - Path to the git repository (main worktree)
|
||||
* @param branchName - Branch name to find worktree for
|
||||
* @returns Absolute path to the worktree, or null if not found
|
||||
*/
|
||||
async findWorktreeForBranch(projectPath: string, branchName: string): Promise<string | null> {
|
||||
try {
|
||||
const { stdout } = await execAsync('git worktree list --porcelain', {
|
||||
cwd: projectPath,
|
||||
});
|
||||
|
||||
const lines = stdout.split('\n');
|
||||
let currentPath: string | null = null;
|
||||
let currentBranch: string | null = null;
|
||||
|
||||
for (const line of lines) {
|
||||
if (line.startsWith('worktree ')) {
|
||||
currentPath = line.slice(9);
|
||||
} else if (line.startsWith('branch ')) {
|
||||
currentBranch = line.slice(7).replace('refs/heads/', '');
|
||||
} else if (line === '' && currentPath && currentBranch) {
|
||||
// End of a worktree entry
|
||||
if (currentBranch === branchName) {
|
||||
// Resolve to absolute path - git may return relative paths
|
||||
// On Windows, this is critical for cwd to work correctly
|
||||
// On all platforms, absolute paths ensure consistent behavior
|
||||
return this.resolvePath(projectPath, currentPath);
|
||||
}
|
||||
currentPath = null;
|
||||
currentBranch = null;
|
||||
}
|
||||
}
|
||||
|
||||
// Check the last entry (if file doesn't end with newline)
|
||||
if (currentPath && currentBranch && currentBranch === branchName) {
|
||||
return this.resolvePath(projectPath, currentPath);
|
||||
}
|
||||
|
||||
return null;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* List all worktrees for a repository
|
||||
*
|
||||
* @param projectPath - Path to the git repository
|
||||
* @returns Array of WorktreeInfo objects with normalized paths
|
||||
*/
|
||||
async listWorktrees(projectPath: string): Promise<WorktreeInfo[]> {
|
||||
try {
|
||||
const { stdout } = await execAsync('git worktree list --porcelain', {
|
||||
cwd: projectPath,
|
||||
});
|
||||
|
||||
const worktrees: WorktreeInfo[] = [];
|
||||
const lines = stdout.split('\n');
|
||||
let currentPath: string | null = null;
|
||||
let currentBranch: string | null = null;
|
||||
let isFirstWorktree = true;
|
||||
|
||||
for (const line of lines) {
|
||||
if (line.startsWith('worktree ')) {
|
||||
currentPath = line.slice(9);
|
||||
} else if (line.startsWith('branch ')) {
|
||||
currentBranch = line.slice(7).replace('refs/heads/', '');
|
||||
} else if (line.startsWith('detached')) {
|
||||
// Detached HEAD - branch is null
|
||||
currentBranch = null;
|
||||
} else if (line === '' && currentPath) {
|
||||
// End of a worktree entry
|
||||
worktrees.push({
|
||||
path: this.resolvePath(projectPath, currentPath),
|
||||
branch: currentBranch,
|
||||
isMain: isFirstWorktree,
|
||||
});
|
||||
currentPath = null;
|
||||
currentBranch = null;
|
||||
isFirstWorktree = false;
|
||||
}
|
||||
}
|
||||
|
||||
// Handle last entry if file doesn't end with newline
|
||||
if (currentPath) {
|
||||
worktrees.push({
|
||||
path: this.resolvePath(projectPath, currentPath),
|
||||
branch: currentBranch,
|
||||
isMain: isFirstWorktree,
|
||||
});
|
||||
}
|
||||
|
||||
return worktrees;
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve a path to absolute, handling both relative and absolute inputs
|
||||
*
|
||||
* @param projectPath - Base path for relative resolution
|
||||
* @param worktreePath - Path from git worktree list output
|
||||
* @returns Absolute path
|
||||
*/
|
||||
private resolvePath(projectPath: string, worktreePath: string): string {
|
||||
return path.isAbsolute(worktreePath)
|
||||
? path.resolve(worktreePath)
|
||||
: path.resolve(projectPath, worktreePath);
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,694 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import { AutoModeService } from '@/services/auto-mode-service.js';
|
||||
import { ProviderFactory } from '@/providers/provider-factory.js';
|
||||
import { FeatureLoader } from '@/services/feature-loader.js';
|
||||
import {
|
||||
createTestGitRepo,
|
||||
createTestFeature,
|
||||
listBranches,
|
||||
listWorktrees,
|
||||
branchExists,
|
||||
worktreeExists,
|
||||
type TestRepo,
|
||||
} from '../helpers/git-test-repo.js';
|
||||
import * as fs from 'fs/promises';
|
||||
import * as path from 'path';
|
||||
import { exec } from 'child_process';
|
||||
import { promisify } from 'util';
|
||||
|
||||
const execAsync = promisify(exec);
|
||||
|
||||
vi.mock('@/providers/provider-factory.js');
|
||||
|
||||
describe('auto-mode-service.ts (integration)', () => {
|
||||
let service: AutoModeService;
|
||||
let testRepo: TestRepo;
|
||||
let featureLoader: FeatureLoader;
|
||||
const mockEvents = {
|
||||
subscribe: vi.fn(),
|
||||
emit: vi.fn(),
|
||||
};
|
||||
|
||||
beforeEach(async () => {
|
||||
vi.clearAllMocks();
|
||||
service = new AutoModeService(mockEvents as any);
|
||||
featureLoader = new FeatureLoader();
|
||||
testRepo = await createTestGitRepo();
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
// Stop any running auto loops
|
||||
await service.stopAutoLoop();
|
||||
|
||||
// Cleanup test repo
|
||||
if (testRepo) {
|
||||
await testRepo.cleanup();
|
||||
}
|
||||
});
|
||||
|
||||
describe('worktree operations', () => {
|
||||
it('should use existing git worktree for feature', async () => {
|
||||
const branchName = 'feature/test-feature-1';
|
||||
|
||||
// Create a test feature with branchName set
|
||||
await createTestFeature(testRepo.path, 'test-feature-1', {
|
||||
id: 'test-feature-1',
|
||||
category: 'test',
|
||||
description: 'Test feature',
|
||||
status: 'pending',
|
||||
branchName: branchName,
|
||||
});
|
||||
|
||||
// Create worktree before executing (worktrees are now created when features are added/edited)
|
||||
const worktreesDir = path.join(testRepo.path, '.worktrees');
|
||||
const worktreePath = path.join(worktreesDir, 'test-feature-1');
|
||||
await fs.mkdir(worktreesDir, { recursive: true });
|
||||
await execAsync(`git worktree add -b ${branchName} "${worktreePath}" HEAD`, {
|
||||
cwd: testRepo.path,
|
||||
});
|
||||
|
||||
// Mock provider to complete quickly
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: [{ type: 'text', text: 'Feature implemented' }],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Execute feature with worktrees enabled
|
||||
await service.executeFeature(
|
||||
testRepo.path,
|
||||
'test-feature-1',
|
||||
true, // useWorktrees
|
||||
false // isAutoMode
|
||||
);
|
||||
|
||||
// Verify branch exists (was created when worktree was created)
|
||||
const branches = await listBranches(testRepo.path);
|
||||
expect(branches).toContain(branchName);
|
||||
|
||||
// Verify worktree exists and is being used
|
||||
// The service should have found and used the worktree (check via logs)
|
||||
// We can verify the worktree exists by checking git worktree list
|
||||
const worktrees = await listWorktrees(testRepo.path);
|
||||
expect(worktrees.length).toBeGreaterThan(0);
|
||||
// Verify that at least one worktree path contains our feature ID
|
||||
const worktreePathsMatch = worktrees.some(
|
||||
(wt) => wt.includes('test-feature-1') || wt.includes('.worktrees')
|
||||
);
|
||||
expect(worktreePathsMatch).toBe(true);
|
||||
|
||||
// Note: Worktrees are not automatically cleaned up by the service
|
||||
// This is expected behavior - manual cleanup is required
|
||||
}, 30000);
|
||||
|
||||
it('should handle error gracefully', async () => {
|
||||
await createTestFeature(testRepo.path, 'test-feature-error', {
|
||||
id: 'test-feature-error',
|
||||
category: 'test',
|
||||
description: 'Test feature that errors',
|
||||
status: 'pending',
|
||||
});
|
||||
|
||||
// Mock provider that throws error
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
throw new Error('Provider error');
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Execute feature (should handle error)
|
||||
await service.executeFeature(testRepo.path, 'test-feature-error', true, false);
|
||||
|
||||
// Verify feature status was updated to backlog (error status)
|
||||
const feature = await featureLoader.get(testRepo.path, 'test-feature-error');
|
||||
expect(feature?.status).toBe('backlog');
|
||||
}, 30000);
|
||||
|
||||
it('should work without worktrees', async () => {
|
||||
await createTestFeature(testRepo.path, 'test-no-worktree', {
|
||||
id: 'test-no-worktree',
|
||||
category: 'test',
|
||||
description: 'Test without worktree',
|
||||
status: 'pending',
|
||||
skipTests: true,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Execute without worktrees
|
||||
await service.executeFeature(
|
||||
testRepo.path,
|
||||
'test-no-worktree',
|
||||
false, // useWorktrees = false
|
||||
false
|
||||
);
|
||||
|
||||
// Feature should be updated successfully
|
||||
const feature = await featureLoader.get(testRepo.path, 'test-no-worktree');
|
||||
expect(feature?.status).toBe('waiting_approval');
|
||||
}, 30000);
|
||||
});
|
||||
|
||||
describe('feature execution', () => {
|
||||
it('should execute feature and update status', async () => {
|
||||
await createTestFeature(testRepo.path, 'feature-exec-1', {
|
||||
id: 'feature-exec-1',
|
||||
category: 'ui',
|
||||
description: 'Execute this feature',
|
||||
status: 'pending',
|
||||
skipTests: true,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: [{ type: 'text', text: 'Implemented the feature' }],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
await service.executeFeature(
|
||||
testRepo.path,
|
||||
'feature-exec-1',
|
||||
false, // Don't use worktrees so agent output is saved to main project
|
||||
false
|
||||
);
|
||||
|
||||
// Check feature status was updated
|
||||
const feature = await featureLoader.get(testRepo.path, 'feature-exec-1');
|
||||
expect(feature?.status).toBe('waiting_approval');
|
||||
|
||||
// Check agent output was saved
|
||||
const agentOutput = await featureLoader.getAgentOutput(testRepo.path, 'feature-exec-1');
|
||||
expect(agentOutput).toBeTruthy();
|
||||
expect(agentOutput).toContain('Implemented the feature');
|
||||
}, 30000);
|
||||
|
||||
it('should handle feature not found', async () => {
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Try to execute non-existent feature
|
||||
await service.executeFeature(testRepo.path, 'nonexistent-feature', true, false);
|
||||
|
||||
// Should emit error event
|
||||
expect(mockEvents.emit).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.objectContaining({
|
||||
featureId: 'nonexistent-feature',
|
||||
error: expect.stringContaining('not found'),
|
||||
})
|
||||
);
|
||||
}, 30000);
|
||||
|
||||
it('should prevent duplicate feature execution', async () => {
|
||||
await createTestFeature(testRepo.path, 'feature-dup', {
|
||||
id: 'feature-dup',
|
||||
category: 'test',
|
||||
description: 'Duplicate test',
|
||||
status: 'pending',
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
// Simulate slow execution
|
||||
await new Promise((resolve) => setTimeout(resolve, 500));
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Start first execution
|
||||
const promise1 = service.executeFeature(testRepo.path, 'feature-dup', false, false);
|
||||
|
||||
// Try to start second execution (should throw)
|
||||
await expect(
|
||||
service.executeFeature(testRepo.path, 'feature-dup', false, false)
|
||||
).rejects.toThrow('already running');
|
||||
|
||||
await promise1;
|
||||
}, 30000);
|
||||
|
||||
it('should use feature-specific model', async () => {
|
||||
await createTestFeature(testRepo.path, 'feature-model', {
|
||||
id: 'feature-model',
|
||||
category: 'test',
|
||||
description: 'Model test',
|
||||
status: 'pending',
|
||||
model: 'claude-sonnet-4-20250514',
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
await service.executeFeature(testRepo.path, 'feature-model', false, false);
|
||||
|
||||
// Should have used claude-sonnet-4-20250514
|
||||
expect(ProviderFactory.getProviderForModel).toHaveBeenCalledWith('claude-sonnet-4-20250514');
|
||||
}, 30000);
|
||||
});
|
||||
|
||||
describe('auto loop', () => {
|
||||
it('should start and stop auto loop', async () => {
|
||||
const startPromise = service.startAutoLoop(testRepo.path, 2);
|
||||
|
||||
// Give it time to start
|
||||
await new Promise((resolve) => setTimeout(resolve, 100));
|
||||
|
||||
// Stop the loop
|
||||
const runningCount = await service.stopAutoLoop();
|
||||
|
||||
expect(runningCount).toBe(0);
|
||||
await startPromise.catch(() => {}); // Cleanup
|
||||
}, 10000);
|
||||
|
||||
it('should process pending features in auto loop', async () => {
|
||||
// Create multiple pending features
|
||||
await createTestFeature(testRepo.path, 'auto-1', {
|
||||
id: 'auto-1',
|
||||
category: 'test',
|
||||
description: 'Auto feature 1',
|
||||
status: 'pending',
|
||||
skipTests: true,
|
||||
});
|
||||
|
||||
await createTestFeature(testRepo.path, 'auto-2', {
|
||||
id: 'auto-2',
|
||||
category: 'test',
|
||||
description: 'Auto feature 2',
|
||||
status: 'pending',
|
||||
skipTests: true,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Start auto loop
|
||||
const startPromise = service.startAutoLoop(testRepo.path, 2);
|
||||
|
||||
// Wait for features to be processed
|
||||
await new Promise((resolve) => setTimeout(resolve, 3000));
|
||||
|
||||
// Stop the loop
|
||||
await service.stopAutoLoop();
|
||||
await startPromise.catch(() => {});
|
||||
|
||||
// Check that features were updated
|
||||
const feature1 = await featureLoader.get(testRepo.path, 'auto-1');
|
||||
const feature2 = await featureLoader.get(testRepo.path, 'auto-2');
|
||||
|
||||
// At least one should have been processed
|
||||
const processedCount = [feature1, feature2].filter(
|
||||
(f) => f?.status === 'waiting_approval' || f?.status === 'in_progress'
|
||||
).length;
|
||||
|
||||
expect(processedCount).toBeGreaterThan(0);
|
||||
}, 15000);
|
||||
|
||||
it('should respect max concurrency', async () => {
|
||||
// Create 5 features
|
||||
for (let i = 1; i <= 5; i++) {
|
||||
await createTestFeature(testRepo.path, `concurrent-${i}`, {
|
||||
id: `concurrent-${i}`,
|
||||
category: 'test',
|
||||
description: `Concurrent feature ${i}`,
|
||||
status: 'pending',
|
||||
});
|
||||
}
|
||||
|
||||
let concurrentCount = 0;
|
||||
let maxConcurrent = 0;
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
concurrentCount++;
|
||||
maxConcurrent = Math.max(maxConcurrent, concurrentCount);
|
||||
|
||||
// Simulate work
|
||||
await new Promise((resolve) => setTimeout(resolve, 500));
|
||||
|
||||
concurrentCount--;
|
||||
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Start with max concurrency of 2
|
||||
const startPromise = service.startAutoLoop(testRepo.path, 2);
|
||||
|
||||
// Wait for some features to be processed
|
||||
await new Promise((resolve) => setTimeout(resolve, 3000));
|
||||
|
||||
await service.stopAutoLoop();
|
||||
await startPromise.catch(() => {});
|
||||
|
||||
// Max concurrent should not exceed 2
|
||||
expect(maxConcurrent).toBeLessThanOrEqual(2);
|
||||
}, 15000);
|
||||
|
||||
it('should emit auto mode events', async () => {
|
||||
const startPromise = service.startAutoLoop(testRepo.path, 1);
|
||||
|
||||
// Wait for start event
|
||||
await new Promise((resolve) => setTimeout(resolve, 100));
|
||||
|
||||
// Check start event was emitted
|
||||
const startEvent = mockEvents.emit.mock.calls.find((call) =>
|
||||
call[1]?.message?.includes('Auto mode started')
|
||||
);
|
||||
expect(startEvent).toBeTruthy();
|
||||
|
||||
await service.stopAutoLoop();
|
||||
await startPromise.catch(() => {});
|
||||
|
||||
// Check stop event was emitted (emitted immediately by stopAutoLoop)
|
||||
const stopEvent = mockEvents.emit.mock.calls.find(
|
||||
(call) =>
|
||||
call[1]?.type === 'auto_mode_stopped' || call[1]?.message?.includes('Auto mode stopped')
|
||||
);
|
||||
expect(stopEvent).toBeTruthy();
|
||||
}, 10000);
|
||||
});
|
||||
|
||||
describe('error handling', () => {
|
||||
it('should handle provider errors gracefully', async () => {
|
||||
await createTestFeature(testRepo.path, 'error-feature', {
|
||||
id: 'error-feature',
|
||||
category: 'test',
|
||||
description: 'Error test',
|
||||
status: 'pending',
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
throw new Error('Provider execution failed');
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
// Should not throw
|
||||
await service.executeFeature(testRepo.path, 'error-feature', true, false);
|
||||
|
||||
// Feature should be marked as backlog (error status)
|
||||
const feature = await featureLoader.get(testRepo.path, 'error-feature');
|
||||
expect(feature?.status).toBe('backlog');
|
||||
}, 30000);
|
||||
|
||||
it('should continue auto loop after feature error', async () => {
|
||||
await createTestFeature(testRepo.path, 'fail-1', {
|
||||
id: 'fail-1',
|
||||
category: 'test',
|
||||
description: 'Will fail',
|
||||
status: 'pending',
|
||||
});
|
||||
|
||||
await createTestFeature(testRepo.path, 'success-1', {
|
||||
id: 'success-1',
|
||||
category: 'test',
|
||||
description: 'Will succeed',
|
||||
status: 'pending',
|
||||
});
|
||||
|
||||
let callCount = 0;
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
callCount++;
|
||||
if (callCount === 1) {
|
||||
throw new Error('First feature fails');
|
||||
}
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
const startPromise = service.startAutoLoop(testRepo.path, 1);
|
||||
|
||||
// Wait for both features to be attempted
|
||||
await new Promise((resolve) => setTimeout(resolve, 5000));
|
||||
|
||||
await service.stopAutoLoop();
|
||||
await startPromise.catch(() => {});
|
||||
|
||||
// Both features should have been attempted
|
||||
expect(callCount).toBeGreaterThanOrEqual(1);
|
||||
}, 15000);
|
||||
});
|
||||
|
||||
describe('planning mode', () => {
|
||||
it('should execute feature with skip planning mode', async () => {
|
||||
await createTestFeature(testRepo.path, 'skip-plan-feature', {
|
||||
id: 'skip-plan-feature',
|
||||
category: 'test',
|
||||
description: 'Feature with skip planning',
|
||||
status: 'pending',
|
||||
planningMode: 'skip',
|
||||
skipTests: true,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: [{ type: 'text', text: 'Feature implemented' }],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
await service.executeFeature(testRepo.path, 'skip-plan-feature', false, false);
|
||||
|
||||
const feature = await featureLoader.get(testRepo.path, 'skip-plan-feature');
|
||||
expect(feature?.status).toBe('waiting_approval');
|
||||
}, 30000);
|
||||
|
||||
it('should execute feature with lite planning mode without approval', async () => {
|
||||
await createTestFeature(testRepo.path, 'lite-plan-feature', {
|
||||
id: 'lite-plan-feature',
|
||||
category: 'test',
|
||||
description: 'Feature with lite planning',
|
||||
status: 'pending',
|
||||
planningMode: 'lite',
|
||||
requirePlanApproval: false,
|
||||
skipTests: true,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{
|
||||
type: 'text',
|
||||
text: '[PLAN_GENERATED] Planning outline complete.\n\nFeature implemented',
|
||||
},
|
||||
],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
await service.executeFeature(testRepo.path, 'lite-plan-feature', false, false);
|
||||
|
||||
const feature = await featureLoader.get(testRepo.path, 'lite-plan-feature');
|
||||
expect(feature?.status).toBe('waiting_approval');
|
||||
}, 30000);
|
||||
|
||||
it('should emit planning_started event for spec mode', async () => {
|
||||
await createTestFeature(testRepo.path, 'spec-plan-feature', {
|
||||
id: 'spec-plan-feature',
|
||||
category: 'test',
|
||||
description: 'Feature with spec planning',
|
||||
status: 'pending',
|
||||
planningMode: 'spec',
|
||||
requirePlanApproval: false,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{ type: 'text', text: 'Spec generated\n\n[SPEC_GENERATED] Review the spec.' },
|
||||
],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
await service.executeFeature(testRepo.path, 'spec-plan-feature', false, false);
|
||||
|
||||
// Check planning_started event was emitted
|
||||
const planningEvent = mockEvents.emit.mock.calls.find((call) => call[1]?.mode === 'spec');
|
||||
expect(planningEvent).toBeTruthy();
|
||||
}, 30000);
|
||||
|
||||
it('should handle feature with full planning mode', async () => {
|
||||
await createTestFeature(testRepo.path, 'full-plan-feature', {
|
||||
id: 'full-plan-feature',
|
||||
category: 'test',
|
||||
description: 'Feature with full planning',
|
||||
status: 'pending',
|
||||
planningMode: 'full',
|
||||
requirePlanApproval: false,
|
||||
});
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'claude',
|
||||
executeQuery: async function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
role: 'assistant',
|
||||
content: [
|
||||
{ type: 'text', text: 'Full spec with phases\n\n[SPEC_GENERATED] Review.' },
|
||||
],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'result',
|
||||
subtype: 'success',
|
||||
};
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);
|
||||
|
||||
await service.executeFeature(testRepo.path, 'full-plan-feature', false, false);
|
||||
|
||||
// Check planning_started event was emitted with full mode
|
||||
const planningEvent = mockEvents.emit.mock.calls.find((call) => call[1]?.mode === 'full');
|
||||
expect(planningEvent).toBeTruthy();
|
||||
}, 30000);
|
||||
|
||||
it('should track pending approval correctly', async () => {
|
||||
// Initially no pending approvals
|
||||
expect(service.hasPendingApproval('non-existent')).toBe(false);
|
||||
});
|
||||
|
||||
it('should cancel pending approval gracefully', () => {
|
||||
// Should not throw when cancelling non-existent approval
|
||||
expect(() => service.cancelPlanApproval('non-existent')).not.toThrow();
|
||||
});
|
||||
|
||||
it('should resolve approval with error for non-existent feature', async () => {
|
||||
const result = await service.resolvePlanApproval(
|
||||
'non-existent',
|
||||
true,
|
||||
undefined,
|
||||
undefined,
|
||||
undefined
|
||||
);
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.error).toContain('No pending approval');
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -325,12 +325,8 @@ describe('codex-provider.ts', () => {
|
||||
);
|
||||
|
||||
const call = vi.mocked(spawnJSONLProcess).mock.calls[0][0];
|
||||
// xhigh reasoning effort uses 5-minute base timeout (300000ms) for feature generation
|
||||
// then applies 4x multiplier: 300000 * 4.0 = 1200000ms (20 minutes)
|
||||
const CODEX_FEATURE_GENERATION_BASE_TIMEOUT_MS = 300000;
|
||||
expect(call.timeout).toBe(
|
||||
CODEX_FEATURE_GENERATION_BASE_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.xhigh
|
||||
);
|
||||
// xhigh reasoning effort should have 4x the default timeout (120000ms)
|
||||
expect(call.timeout).toBe(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.xhigh);
|
||||
});
|
||||
|
||||
it('uses default timeout when no reasoning effort is specified', async () => {
|
||||
|
||||
@@ -1,935 +0,0 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import {
|
||||
AgentExecutor,
|
||||
type AgentExecutionOptions,
|
||||
type AgentExecutionResult,
|
||||
type WaitForApprovalFn,
|
||||
type SaveFeatureSummaryFn,
|
||||
type UpdateFeatureSummaryFn,
|
||||
type BuildTaskPromptFn,
|
||||
} from '../../../src/services/agent-executor.js';
|
||||
import type { TypedEventBus } from '../../../src/services/typed-event-bus.js';
|
||||
import type { FeatureStateManager } from '../../../src/services/feature-state-manager.js';
|
||||
import type { PlanApprovalService } from '../../../src/services/plan-approval-service.js';
|
||||
import type { SettingsService } from '../../../src/services/settings-service.js';
|
||||
import type { BaseProvider } from '../../../src/providers/base-provider.js';
|
||||
|
||||
/**
|
||||
* Unit tests for AgentExecutor
|
||||
*
|
||||
* Note: Full integration tests for execute() require complex mocking of
|
||||
* @automaker/utils and @automaker/platform which have module hoisting issues.
|
||||
* These tests focus on:
|
||||
* - Constructor injection
|
||||
* - Interface exports
|
||||
* - Type correctness
|
||||
*
|
||||
* Integration tests for streaming/marker detection are covered in E2E tests
|
||||
* and auto-mode-service tests.
|
||||
*/
|
||||
describe('AgentExecutor', () => {
|
||||
// Mock dependencies
|
||||
let mockEventBus: TypedEventBus;
|
||||
let mockFeatureStateManager: FeatureStateManager;
|
||||
let mockPlanApprovalService: PlanApprovalService;
|
||||
let mockSettingsService: SettingsService | null;
|
||||
|
||||
beforeEach(() => {
|
||||
// Reset mocks
|
||||
mockEventBus = {
|
||||
emitAutoModeEvent: vi.fn(),
|
||||
} as unknown as TypedEventBus;
|
||||
|
||||
mockFeatureStateManager = {
|
||||
updateTaskStatus: vi.fn().mockResolvedValue(undefined),
|
||||
updateFeaturePlanSpec: vi.fn().mockResolvedValue(undefined),
|
||||
saveFeatureSummary: vi.fn().mockResolvedValue(undefined),
|
||||
} as unknown as FeatureStateManager;
|
||||
|
||||
mockPlanApprovalService = {
|
||||
waitForApproval: vi.fn(),
|
||||
} as unknown as PlanApprovalService;
|
||||
|
||||
mockSettingsService = null;
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
describe('constructor', () => {
|
||||
it('should create instance with all dependencies', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
expect(executor).toBeInstanceOf(AgentExecutor);
|
||||
});
|
||||
|
||||
it('should accept null settingsService', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
null
|
||||
);
|
||||
expect(executor).toBeInstanceOf(AgentExecutor);
|
||||
});
|
||||
|
||||
it('should accept undefined settingsService', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService
|
||||
);
|
||||
expect(executor).toBeInstanceOf(AgentExecutor);
|
||||
});
|
||||
|
||||
it('should store eventBus dependency', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
// Verify executor was created - actual use tested via execute()
|
||||
expect(executor).toBeDefined();
|
||||
});
|
||||
|
||||
it('should store featureStateManager dependency', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
expect(executor).toBeDefined();
|
||||
});
|
||||
|
||||
it('should store planApprovalService dependency', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
expect(executor).toBeDefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('interface exports', () => {
|
||||
it('should export AgentExecutionOptions type', () => {
|
||||
// Type assertion test - if this compiles, the type is exported correctly
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController: new AbortController(),
|
||||
provider: {} as BaseProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
};
|
||||
expect(options.featureId).toBe('test-feature');
|
||||
});
|
||||
|
||||
it('should export AgentExecutionResult type', () => {
|
||||
const result: AgentExecutionResult = {
|
||||
responseText: 'test response',
|
||||
specDetected: false,
|
||||
tasksCompleted: 0,
|
||||
aborted: false,
|
||||
};
|
||||
expect(result.aborted).toBe(false);
|
||||
});
|
||||
|
||||
it('should export callback types', () => {
|
||||
const waitForApproval: WaitForApprovalFn = async () => ({ approved: true });
|
||||
const saveFeatureSummary: SaveFeatureSummaryFn = async () => {};
|
||||
const updateFeatureSummary: UpdateFeatureSummaryFn = async () => {};
|
||||
const buildTaskPrompt: BuildTaskPromptFn = () => 'prompt';
|
||||
|
||||
expect(typeof waitForApproval).toBe('function');
|
||||
expect(typeof saveFeatureSummary).toBe('function');
|
||||
expect(typeof updateFeatureSummary).toBe('function');
|
||||
expect(typeof buildTaskPrompt).toBe('function');
|
||||
});
|
||||
});
|
||||
|
||||
describe('AgentExecutionOptions', () => {
|
||||
it('should accept required options', () => {
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test/workdir',
|
||||
featureId: 'feature-123',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/test/project',
|
||||
abortController: new AbortController(),
|
||||
provider: {} as BaseProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
};
|
||||
|
||||
expect(options.workDir).toBe('/test/workdir');
|
||||
expect(options.featureId).toBe('feature-123');
|
||||
expect(options.prompt).toBe('Test prompt');
|
||||
expect(options.projectPath).toBe('/test/project');
|
||||
expect(options.abortController).toBeInstanceOf(AbortController);
|
||||
expect(options.effectiveBareModel).toBe('claude-sonnet-4-20250514');
|
||||
});
|
||||
|
||||
it('should accept optional options', () => {
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test/workdir',
|
||||
featureId: 'feature-123',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/test/project',
|
||||
abortController: new AbortController(),
|
||||
provider: {} as BaseProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
// Optional fields
|
||||
imagePaths: ['/image1.png', '/image2.png'],
|
||||
model: 'claude-sonnet-4-20250514',
|
||||
planningMode: 'spec',
|
||||
requirePlanApproval: true,
|
||||
previousContent: 'Previous content',
|
||||
systemPrompt: 'System prompt',
|
||||
autoLoadClaudeMd: true,
|
||||
thinkingLevel: 'medium',
|
||||
branchName: 'feature-branch',
|
||||
specAlreadyDetected: false,
|
||||
existingApprovedPlanContent: 'Approved plan',
|
||||
persistedTasks: [{ id: 'T001', description: 'Task 1', status: 'pending' }],
|
||||
sdkOptions: {
|
||||
maxTurns: 100,
|
||||
allowedTools: ['read', 'write'],
|
||||
},
|
||||
};
|
||||
|
||||
expect(options.imagePaths).toHaveLength(2);
|
||||
expect(options.planningMode).toBe('spec');
|
||||
expect(options.requirePlanApproval).toBe(true);
|
||||
expect(options.branchName).toBe('feature-branch');
|
||||
});
|
||||
});
|
||||
|
||||
describe('AgentExecutionResult', () => {
|
||||
it('should contain responseText', () => {
|
||||
const result: AgentExecutionResult = {
|
||||
responseText: 'Full response text from agent',
|
||||
specDetected: true,
|
||||
tasksCompleted: 5,
|
||||
aborted: false,
|
||||
};
|
||||
expect(result.responseText).toBe('Full response text from agent');
|
||||
});
|
||||
|
||||
it('should contain specDetected flag', () => {
|
||||
const result: AgentExecutionResult = {
|
||||
responseText: '',
|
||||
specDetected: true,
|
||||
tasksCompleted: 0,
|
||||
aborted: false,
|
||||
};
|
||||
expect(result.specDetected).toBe(true);
|
||||
});
|
||||
|
||||
it('should contain tasksCompleted count', () => {
|
||||
const result: AgentExecutionResult = {
|
||||
responseText: '',
|
||||
specDetected: true,
|
||||
tasksCompleted: 10,
|
||||
aborted: false,
|
||||
};
|
||||
expect(result.tasksCompleted).toBe(10);
|
||||
});
|
||||
|
||||
it('should contain aborted flag', () => {
|
||||
const result: AgentExecutionResult = {
|
||||
responseText: '',
|
||||
specDetected: false,
|
||||
tasksCompleted: 3,
|
||||
aborted: true,
|
||||
};
|
||||
expect(result.aborted).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('execute method signature', () => {
|
||||
it('should have execute method', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
expect(typeof executor.execute).toBe('function');
|
||||
});
|
||||
|
||||
it('should accept options and callbacks', () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
// Type check - verifying the signature accepts the expected parameters
|
||||
// Actual execution would require mocking external modules
|
||||
const executeSignature = executor.execute.length;
|
||||
// execute(options, callbacks) = 2 parameters
|
||||
expect(executeSignature).toBe(2);
|
||||
});
|
||||
});
|
||||
|
||||
describe('callback types', () => {
|
||||
it('WaitForApprovalFn should return approval result', async () => {
|
||||
const waitForApproval: WaitForApprovalFn = vi.fn().mockResolvedValue({
|
||||
approved: true,
|
||||
feedback: 'Looks good',
|
||||
editedPlan: undefined,
|
||||
});
|
||||
|
||||
const result = await waitForApproval('feature-123', '/project');
|
||||
expect(result.approved).toBe(true);
|
||||
expect(result.feedback).toBe('Looks good');
|
||||
});
|
||||
|
||||
it('WaitForApprovalFn should handle rejection with feedback', async () => {
|
||||
const waitForApproval: WaitForApprovalFn = vi.fn().mockResolvedValue({
|
||||
approved: false,
|
||||
feedback: 'Please add more tests',
|
||||
editedPlan: '## Revised Plan\n...',
|
||||
});
|
||||
|
||||
const result = await waitForApproval('feature-123', '/project');
|
||||
expect(result.approved).toBe(false);
|
||||
expect(result.feedback).toBe('Please add more tests');
|
||||
expect(result.editedPlan).toBeDefined();
|
||||
});
|
||||
|
||||
it('SaveFeatureSummaryFn should accept parameters', async () => {
|
||||
const saveSummary: SaveFeatureSummaryFn = vi.fn().mockResolvedValue(undefined);
|
||||
|
||||
await saveSummary('/project', 'feature-123', 'Feature summary text');
|
||||
expect(saveSummary).toHaveBeenCalledWith('/project', 'feature-123', 'Feature summary text');
|
||||
});
|
||||
|
||||
it('UpdateFeatureSummaryFn should accept parameters', async () => {
|
||||
const updateSummary: UpdateFeatureSummaryFn = vi.fn().mockResolvedValue(undefined);
|
||||
|
||||
await updateSummary('/project', 'feature-123', 'Updated summary');
|
||||
expect(updateSummary).toHaveBeenCalledWith('/project', 'feature-123', 'Updated summary');
|
||||
});
|
||||
|
||||
it('BuildTaskPromptFn should return prompt string', () => {
|
||||
const buildPrompt: BuildTaskPromptFn = vi.fn().mockReturnValue('Execute T001: Create file');
|
||||
|
||||
const task = { id: 'T001', description: 'Create file', status: 'pending' as const };
|
||||
const allTasks = [task];
|
||||
const prompt = buildPrompt(task, allTasks, 0, 'Plan content', 'Template', undefined);
|
||||
|
||||
expect(typeof prompt).toBe('string');
|
||||
expect(prompt).toBe('Execute T001: Create file');
|
||||
});
|
||||
});
|
||||
|
||||
describe('dependency injection patterns', () => {
|
||||
it('should allow different eventBus implementations', () => {
|
||||
const customEventBus = {
|
||||
emitAutoModeEvent: vi.fn(),
|
||||
emit: vi.fn(),
|
||||
on: vi.fn(),
|
||||
} as unknown as TypedEventBus;
|
||||
|
||||
const executor = new AgentExecutor(
|
||||
customEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
expect(executor).toBeInstanceOf(AgentExecutor);
|
||||
});
|
||||
|
||||
it('should allow different featureStateManager implementations', () => {
|
||||
const customStateManager = {
|
||||
updateTaskStatus: vi.fn().mockResolvedValue(undefined),
|
||||
updateFeaturePlanSpec: vi.fn().mockResolvedValue(undefined),
|
||||
saveFeatureSummary: vi.fn().mockResolvedValue(undefined),
|
||||
loadFeature: vi.fn().mockResolvedValue(null),
|
||||
} as unknown as FeatureStateManager;
|
||||
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
customStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
expect(executor).toBeInstanceOf(AgentExecutor);
|
||||
});
|
||||
|
||||
it('should work with mock settingsService', () => {
|
||||
const customSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
customSettingsService
|
||||
);
|
||||
|
||||
expect(executor).toBeInstanceOf(AgentExecutor);
|
||||
});
|
||||
});
|
||||
|
||||
describe('execute() behavior', () => {
|
||||
/**
|
||||
* Execution tests focus on verifiable behaviors without requiring
|
||||
* full stream mocking. Complex integration scenarios are tested in E2E.
|
||||
*/
|
||||
|
||||
it('should return aborted=true when abort signal is already aborted', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
// Create an already-aborted controller
|
||||
const abortController = new AbortController();
|
||||
abortController.abort();
|
||||
|
||||
// Mock provider that yields nothing (would check signal first)
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
// Generator yields nothing, simulating immediate abort check
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController,
|
||||
provider: mockProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
planningMode: 'skip',
|
||||
};
|
||||
|
||||
const callbacks = {
|
||||
waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
|
||||
saveFeatureSummary: vi.fn(),
|
||||
updateFeatureSummary: vi.fn(),
|
||||
buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
|
||||
};
|
||||
|
||||
// Execute - should complete without error even with aborted signal
|
||||
const result = await executor.execute(options, callbacks);
|
||||
|
||||
// When stream is empty and signal is aborted before stream starts,
|
||||
// the result depends on whether abort was checked
|
||||
expect(result).toBeDefined();
|
||||
expect(result.responseText).toBeDefined();
|
||||
});
|
||||
|
||||
it('should initialize with previousContent when provided', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
// Empty stream
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController: new AbortController(),
|
||||
provider: mockProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
previousContent: 'Previous context from earlier session',
|
||||
};
|
||||
|
||||
const callbacks = {
|
||||
waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
|
||||
saveFeatureSummary: vi.fn(),
|
||||
updateFeatureSummary: vi.fn(),
|
||||
buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
|
||||
};
|
||||
|
||||
const result = await executor.execute(options, callbacks);
|
||||
|
||||
// Response should start with previous content
|
||||
expect(result.responseText).toContain('Previous context from earlier session');
|
||||
expect(result.responseText).toContain('Follow-up Session');
|
||||
});
|
||||
|
||||
it('should return specDetected=false when no spec markers in content', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
content: [{ type: 'text', text: 'Simple response without spec markers' }],
|
||||
},
|
||||
};
|
||||
yield { type: 'result', subtype: 'success' };
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController: new AbortController(),
|
||||
provider: mockProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
planningMode: 'skip', // No spec detection in skip mode
|
||||
};
|
||||
|
||||
const callbacks = {
|
||||
waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
|
||||
saveFeatureSummary: vi.fn(),
|
||||
updateFeatureSummary: vi.fn(),
|
||||
buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
|
||||
};
|
||||
|
||||
const result = await executor.execute(options, callbacks);
|
||||
|
||||
expect(result.specDetected).toBe(false);
|
||||
expect(result.responseText).toContain('Simple response without spec markers');
|
||||
});
|
||||
|
||||
it('should emit auto_mode_progress events for text content', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
content: [{ type: 'text', text: 'First chunk of text' }],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
content: [{ type: 'text', text: 'Second chunk of text' }],
|
||||
},
|
||||
};
|
||||
yield { type: 'result', subtype: 'success' };
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController: new AbortController(),
|
||||
provider: mockProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
planningMode: 'skip',
|
||||
};
|
||||
|
||||
const callbacks = {
|
||||
waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
|
||||
saveFeatureSummary: vi.fn(),
|
||||
updateFeatureSummary: vi.fn(),
|
||||
buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
|
||||
};
|
||||
|
||||
await executor.execute(options, callbacks);
|
||||
|
||||
// Should emit progress events for each text chunk
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('auto_mode_progress', {
|
||||
featureId: 'test-feature',
|
||||
branchName: null,
|
||||
content: 'First chunk of text',
|
||||
});
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('auto_mode_progress', {
|
||||
featureId: 'test-feature',
|
||||
branchName: null,
|
||||
content: 'Second chunk of text',
|
||||
});
|
||||
});
|
||||
|
||||
it('should emit auto_mode_tool events for tool use', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
content: [
|
||||
{
|
||||
type: 'tool_use',
|
||||
name: 'write_file',
|
||||
input: { path: '/test/file.ts', content: 'test content' },
|
||||
},
|
||||
],
|
||||
},
|
||||
};
|
||||
yield { type: 'result', subtype: 'success' };
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController: new AbortController(),
|
||||
provider: mockProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
planningMode: 'skip',
|
||||
};
|
||||
|
||||
const callbacks = {
|
||||
waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
|
||||
saveFeatureSummary: vi.fn(),
|
||||
updateFeatureSummary: vi.fn(),
|
||||
buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
|
||||
};
|
||||
|
||||
await executor.execute(options, callbacks);
|
||||
|
||||
// Should emit tool event
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('auto_mode_tool', {
|
||||
featureId: 'test-feature',
|
||||
branchName: null,
|
||||
tool: 'write_file',
|
||||
input: { path: '/test/file.ts', content: 'test content' },
|
||||
});
|
||||
});
|
||||
|
||||
it('should throw error when provider stream yields error message', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
content: [{ type: 'text', text: 'Starting...' }],
|
||||
},
|
||||
};
|
||||
yield {
|
||||
type: 'error',
|
||||
error: 'API rate limit exceeded',
|
||||
};
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
||||
abortController: new AbortController(),
|
||||
provider: mockProvider,
|
||||
effectiveBareModel: 'claude-sonnet-4-20250514',
|
||||
planningMode: 'skip',
|
||||
};
|
||||
|
||||
const callbacks = {
|
||||
waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
|
||||
saveFeatureSummary: vi.fn(),
|
||||
updateFeatureSummary: vi.fn(),
|
||||
buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
|
||||
};
|
||||
|
||||
await expect(executor.execute(options, callbacks)).rejects.toThrow('API rate limit exceeded');
|
||||
});
|
||||
|
||||
it('should throw error when authentication fails in response', async () => {
|
||||
const executor = new AgentExecutor(
|
||||
mockEventBus,
|
||||
mockFeatureStateManager,
|
||||
mockPlanApprovalService,
|
||||
mockSettingsService
|
||||
);
|
||||
|
||||
const mockProvider = {
|
||||
getName: () => 'mock',
|
||||
executeQuery: vi.fn().mockImplementation(function* () {
|
||||
yield {
|
||||
type: 'assistant',
|
||||
message: {
|
||||
content: [{ type: 'text', text: 'Error: Invalid API key' }],
|
||||
},
|
||||
};
|
||||
}),
|
||||
} as unknown as BaseProvider;
|
||||
|
||||
const options: AgentExecutionOptions = {
|
||||
workDir: '/test',
|
||||
featureId: 'test-feature',
|
||||
prompt: 'Test prompt',
|
||||
projectPath: '/project',
|
        abortController: new AbortController(),
        provider: mockProvider,
        effectiveBareModel: 'claude-sonnet-4-20250514',
        planningMode: 'skip',
      };

      const callbacks = {
        waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
        saveFeatureSummary: vi.fn(),
        updateFeatureSummary: vi.fn(),
        buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
      };

      await expect(executor.execute(options, callbacks)).rejects.toThrow('Authentication failed');
    });

    it('should accumulate responseText from multiple text blocks', async () => {
      const executor = new AgentExecutor(
        mockEventBus,
        mockFeatureStateManager,
        mockPlanApprovalService,
        mockSettingsService
      );

      const mockProvider = {
        getName: () => 'mock',
        executeQuery: vi.fn().mockImplementation(function* () {
          yield {
            type: 'assistant',
            message: {
              content: [
                { type: 'text', text: 'Part 1.' },
                { type: 'text', text: ' Part 2.' },
              ],
            },
          };
          yield {
            type: 'assistant',
            message: {
              content: [{ type: 'text', text: ' Part 3.' }],
            },
          };
          yield { type: 'result', subtype: 'success' };
        }),
      } as unknown as BaseProvider;

      const options: AgentExecutionOptions = {
        workDir: '/test',
        featureId: 'test-feature',
        prompt: 'Test prompt',
        projectPath: '/project',
        abortController: new AbortController(),
        provider: mockProvider,
        effectiveBareModel: 'claude-sonnet-4-20250514',
        planningMode: 'skip',
      };

      const callbacks = {
        waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
        saveFeatureSummary: vi.fn(),
        updateFeatureSummary: vi.fn(),
        buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
      };

      const result = await executor.execute(options, callbacks);

      // All parts should be in response text
      expect(result.responseText).toContain('Part 1');
      expect(result.responseText).toContain('Part 2');
      expect(result.responseText).toContain('Part 3');
    });

    it('should return tasksCompleted=0 when no tasks executed', async () => {
      const executor = new AgentExecutor(
        mockEventBus,
        mockFeatureStateManager,
        mockPlanApprovalService,
        mockSettingsService
      );

      const mockProvider = {
        getName: () => 'mock',
        executeQuery: vi.fn().mockImplementation(function* () {
          yield {
            type: 'assistant',
            message: {
              content: [{ type: 'text', text: 'Simple response' }],
            },
          };
          yield { type: 'result', subtype: 'success' };
        }),
      } as unknown as BaseProvider;

      const options: AgentExecutionOptions = {
        workDir: '/test',
        featureId: 'test-feature',
        prompt: 'Test prompt',
        projectPath: '/project',
        abortController: new AbortController(),
        provider: mockProvider,
        effectiveBareModel: 'claude-sonnet-4-20250514',
        planningMode: 'skip',
      };

      const callbacks = {
        waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
        saveFeatureSummary: vi.fn(),
        updateFeatureSummary: vi.fn(),
        buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
      };

      const result = await executor.execute(options, callbacks);

      expect(result.tasksCompleted).toBe(0);
      expect(result.aborted).toBe(false);
    });

    it('should pass branchName to event payloads', async () => {
      const executor = new AgentExecutor(
        mockEventBus,
        mockFeatureStateManager,
        mockPlanApprovalService,
        mockSettingsService
      );

      const mockProvider = {
        getName: () => 'mock',
        executeQuery: vi.fn().mockImplementation(function* () {
          yield {
            type: 'assistant',
            message: {
              content: [{ type: 'text', text: 'Response' }],
            },
          };
          yield { type: 'result', subtype: 'success' };
        }),
      } as unknown as BaseProvider;

      const options: AgentExecutionOptions = {
        workDir: '/test',
        featureId: 'test-feature',
        prompt: 'Test prompt',
        projectPath: '/project',
        abortController: new AbortController(),
        provider: mockProvider,
        effectiveBareModel: 'claude-sonnet-4-20250514',
        planningMode: 'skip',
        branchName: 'feature/my-feature',
      };

      const callbacks = {
        waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
        saveFeatureSummary: vi.fn(),
        updateFeatureSummary: vi.fn(),
        buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
      };

      await executor.execute(options, callbacks);

      // Branch name should be passed to progress event
      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_progress',
        expect.objectContaining({
          branchName: 'feature/my-feature',
        })
      );
    });

    it('should return correct result structure', async () => {
      const executor = new AgentExecutor(
        mockEventBus,
        mockFeatureStateManager,
        mockPlanApprovalService,
        mockSettingsService
      );

      const mockProvider = {
        getName: () => 'mock',
        executeQuery: vi.fn().mockImplementation(function* () {
          yield {
            type: 'assistant',
            message: {
              content: [{ type: 'text', text: 'Test response' }],
            },
          };
          yield { type: 'result', subtype: 'success' };
        }),
      } as unknown as BaseProvider;

      const options: AgentExecutionOptions = {
        workDir: '/test',
        featureId: 'test-feature',
        prompt: 'Test prompt',
        projectPath: '/project',
        abortController: new AbortController(),
        provider: mockProvider,
        effectiveBareModel: 'claude-sonnet-4-20250514',
        planningMode: 'skip',
      };

      const callbacks = {
        waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
        saveFeatureSummary: vi.fn(),
        updateFeatureSummary: vi.fn(),
        buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
      };

      const result = await executor.execute(options, callbacks);

      // Verify result has all expected properties
      expect(result).toHaveProperty('responseText');
      expect(result).toHaveProperty('specDetected');
      expect(result).toHaveProperty('tasksCompleted');
      expect(result).toHaveProperty('aborted');

      // Verify types
      expect(typeof result.responseText).toBe('string');
      expect(typeof result.specDetected).toBe('boolean');
      expect(typeof result.tasksCompleted).toBe('number');
      expect(typeof result.aborted).toBe('boolean');
    });
  });
});
@@ -1,610 +0,0 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import {
|
||||
AutoLoopCoordinator,
|
||||
getWorktreeAutoLoopKey,
|
||||
type AutoModeConfig,
|
||||
type ProjectAutoLoopState,
|
||||
type ExecuteFeatureFn,
|
||||
type LoadPendingFeaturesFn,
|
||||
type SaveExecutionStateFn,
|
||||
type ClearExecutionStateFn,
|
||||
type ResetStuckFeaturesFn,
|
||||
type IsFeatureFinishedFn,
|
||||
} from '../../../src/services/auto-loop-coordinator.js';
|
||||
import type { TypedEventBus } from '../../../src/services/typed-event-bus.js';
|
||||
import type { ConcurrencyManager } from '../../../src/services/concurrency-manager.js';
|
||||
import type { SettingsService } from '../../../src/services/settings-service.js';
|
||||
import type { Feature } from '@automaker/types';
|
||||
|
||||
describe('auto-loop-coordinator.ts', () => {
|
||||
// Mock dependencies
|
||||
let mockEventBus: TypedEventBus;
|
||||
let mockConcurrencyManager: ConcurrencyManager;
|
||||
let mockSettingsService: SettingsService | null;
|
||||
|
||||
// Callback mocks
|
||||
let mockExecuteFeature: ExecuteFeatureFn;
|
||||
let mockLoadPendingFeatures: LoadPendingFeaturesFn;
|
||||
let mockSaveExecutionState: SaveExecutionStateFn;
|
||||
let mockClearExecutionState: ClearExecutionStateFn;
|
||||
let mockResetStuckFeatures: ResetStuckFeaturesFn;
|
||||
let mockIsFeatureFinished: IsFeatureFinishedFn;
|
||||
let mockIsFeatureRunning: (featureId: string) => boolean;
|
||||
|
||||
let coordinator: AutoLoopCoordinator;
|
||||
|
||||
const testFeature: Feature = {
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
category: 'test',
|
||||
description: 'Test description',
|
||||
status: 'ready',
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
vi.useFakeTimers();
|
||||
|
||||
mockEventBus = {
|
||||
emitAutoModeEvent: vi.fn(),
|
||||
} as unknown as TypedEventBus;
|
||||
|
||||
mockConcurrencyManager = {
|
||||
getRunningCountForWorktree: vi.fn().mockResolvedValue(0),
|
||||
isRunning: vi.fn().mockReturnValue(false),
|
||||
} as unknown as ConcurrencyManager;
|
||||
|
||||
mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
maxConcurrency: 3,
|
||||
projects: [{ id: 'proj-1', path: '/test/project' }],
|
||||
autoModeByWorktree: {},
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
// Callback mocks
|
||||
mockExecuteFeature = vi.fn().mockResolvedValue(undefined);
|
||||
mockLoadPendingFeatures = vi.fn().mockResolvedValue([]);
|
||||
mockSaveExecutionState = vi.fn().mockResolvedValue(undefined);
|
||||
mockClearExecutionState = vi.fn().mockResolvedValue(undefined);
|
||||
mockResetStuckFeatures = vi.fn().mockResolvedValue(undefined);
|
||||
mockIsFeatureFinished = vi.fn().mockReturnValue(false);
|
||||
mockIsFeatureRunning = vi.fn().mockReturnValue(false);
|
||||
|
||||
coordinator = new AutoLoopCoordinator(
|
||||
mockEventBus,
|
||||
mockConcurrencyManager,
|
||||
mockSettingsService,
|
||||
mockExecuteFeature,
|
||||
mockLoadPendingFeatures,
|
||||
mockSaveExecutionState,
|
||||
mockClearExecutionState,
|
||||
mockResetStuckFeatures,
|
||||
mockIsFeatureFinished,
|
||||
mockIsFeatureRunning
|
||||
);
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
vi.useRealTimers();
|
||||
});
|
||||
|
||||
describe('getWorktreeAutoLoopKey', () => {
|
||||
it('returns correct key for main worktree (null branch)', () => {
|
||||
const key = getWorktreeAutoLoopKey('/test/project', null);
|
||||
expect(key).toBe('/test/project::__main__');
|
||||
});
|
||||
|
||||
it('returns correct key for named branch', () => {
|
||||
const key = getWorktreeAutoLoopKey('/test/project', 'feature/test-1');
|
||||
expect(key).toBe('/test/project::feature/test-1');
|
||||
});
|
||||
|
||||
it("normalizes 'main' branch to null", () => {
|
||||
const key = getWorktreeAutoLoopKey('/test/project', 'main');
|
||||
expect(key).toBe('/test/project::__main__');
|
||||
});
|
||||
});
|
||||
|
||||
describe('startAutoLoopForProject', () => {
|
||||
it('throws if loop already running for project/worktree', async () => {
|
||||
// Start the first loop
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Try to start another - should throw
|
||||
await expect(coordinator.startAutoLoopForProject('/test/project', null, 1)).rejects.toThrow(
|
||||
'Auto mode is already running for main worktree in project'
|
||||
);
|
||||
});
|
||||
|
||||
it('creates ProjectAutoLoopState with correct config', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', 'feature-branch', 2);
|
||||
|
||||
const config = coordinator.getAutoLoopConfigForProject('/test/project', 'feature-branch');
|
||||
expect(config).toEqual({
|
||||
maxConcurrency: 2,
|
||||
useWorktrees: true,
|
||||
projectPath: '/test/project',
|
||||
branchName: 'feature-branch',
|
||||
});
|
||||
});
|
||||
|
||||
it('emits auto_mode_started event', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 3);
|
||||
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('auto_mode_started', {
|
||||
message: 'Auto mode started with max 3 concurrent features',
|
||||
projectPath: '/test/project',
|
||||
branchName: null,
|
||||
maxConcurrency: 3,
|
||||
});
|
||||
});
|
||||
|
||||
it('calls saveExecutionState', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 3);
|
||||
|
||||
expect(mockSaveExecutionState).toHaveBeenCalledWith('/test/project', null, 3);
|
||||
});
|
||||
|
||||
it('resets stuck features on start', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
expect(mockResetStuckFeatures).toHaveBeenCalledWith('/test/project');
|
||||
});
|
||||
|
||||
it('uses settings maxConcurrency when not provided', async () => {
|
||||
const result = await coordinator.startAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(result).toBe(3); // from mockSettingsService
|
||||
});
|
||||
|
||||
it('uses worktree-specific maxConcurrency from settings', async () => {
|
||||
vi.mocked(mockSettingsService!.getGlobalSettings).mockResolvedValue({
|
||||
maxConcurrency: 5,
|
||||
projects: [{ id: 'proj-1', path: '/test/project' }],
|
||||
autoModeByWorktree: {
|
||||
'proj-1::__main__': { maxConcurrency: 7 },
|
||||
},
|
||||
});
|
||||
|
||||
const result = await coordinator.startAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(result).toBe(7);
|
||||
});
|
||||
});
|
||||
|
||||
describe('stopAutoLoopForProject', () => {
|
||||
it('aborts running loop', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
const result = await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(result).toBe(0);
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', null)).toBe(false);
|
||||
});
|
||||
|
||||
it('emits auto_mode_stopped event', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
vi.mocked(mockEventBus.emitAutoModeEvent).mockClear();
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('auto_mode_stopped', {
|
||||
message: 'Auto mode stopped',
|
||||
projectPath: '/test/project',
|
||||
branchName: null,
|
||||
});
|
||||
});
|
||||
|
||||
it('calls clearExecutionState', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(mockClearExecutionState).toHaveBeenCalledWith('/test/project', null);
|
||||
});
|
||||
|
||||
it('returns 0 when no loop running', async () => {
|
||||
const result = await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(result).toBe(0);
|
||||
expect(mockClearExecutionState).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('isAutoLoopRunningForProject', () => {
|
||||
it('returns true when running', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', null)).toBe(true);
|
||||
});
|
||||
|
||||
it('returns false when not running', () => {
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', null)).toBe(false);
|
||||
});
|
||||
|
||||
it('returns false for different worktree', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', 'branch-a', 1);
|
||||
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', 'branch-b')).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('runAutoLoopForProject', () => {
|
||||
it('loads pending features each iteration', async () => {
|
||||
vi.mocked(mockLoadPendingFeatures).mockResolvedValue([]);
|
||||
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Advance time to trigger loop iterations
|
||||
await vi.advanceTimersByTimeAsync(11000);
|
||||
|
||||
// Stop the loop to avoid hanging
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(mockLoadPendingFeatures).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('executes features within concurrency limit', async () => {
|
||||
vi.mocked(mockLoadPendingFeatures).mockResolvedValue([testFeature]);
|
||||
vi.mocked(mockConcurrencyManager.getRunningCountForWorktree).mockResolvedValue(0);
|
||||
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 2);
|
||||
|
||||
// Advance time to trigger loop iteration
|
||||
await vi.advanceTimersByTimeAsync(3000);
|
||||
|
||||
// Stop the loop
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(mockExecuteFeature).toHaveBeenCalledWith('/test/project', 'feature-1', true, true);
|
||||
});
|
||||
|
||||
it('emits idle event when no work remains (running=0, pending=0)', async () => {
|
||||
vi.mocked(mockLoadPendingFeatures).mockResolvedValue([]);
|
||||
vi.mocked(mockConcurrencyManager.getRunningCountForWorktree).mockResolvedValue(0);
|
||||
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Clear the initial event mock calls
|
||||
vi.mocked(mockEventBus.emitAutoModeEvent).mockClear();
|
||||
|
||||
// Advance time to trigger loop iteration and idle event
|
||||
await vi.advanceTimersByTimeAsync(11000);
|
||||
|
||||
// Stop the loop
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('auto_mode_idle', {
|
||||
message: 'No pending features - auto mode idle',
|
||||
projectPath: '/test/project',
|
||||
branchName: null,
|
||||
});
|
||||
});
|
||||
|
||||
it('skips already-running features', async () => {
|
||||
const feature2: Feature = { ...testFeature, id: 'feature-2' };
|
||||
vi.mocked(mockLoadPendingFeatures).mockResolvedValue([testFeature, feature2]);
|
||||
vi.mocked(mockIsFeatureRunning)
|
||||
.mockReturnValueOnce(true) // feature-1 is running
|
||||
.mockReturnValueOnce(false); // feature-2 is not running
|
||||
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 2);
|
||||
|
||||
await vi.advanceTimersByTimeAsync(3000);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
// Should execute feature-2, not feature-1
|
||||
expect(mockExecuteFeature).toHaveBeenCalledWith('/test/project', 'feature-2', true, true);
|
||||
});
|
||||
|
||||
it('stops when aborted', async () => {
|
||||
vi.mocked(mockLoadPendingFeatures).mockResolvedValue([testFeature]);
|
||||
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Stop immediately
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
// At most one feature should have started before the stop took effect
|
||||
expect(mockExecuteFeature.mock.calls.length).toBeLessThanOrEqual(1);
|
||||
});
|
||||
|
||||
it('waits when at capacity', async () => {
|
||||
vi.mocked(mockLoadPendingFeatures).mockResolvedValue([testFeature]);
|
||||
vi.mocked(mockConcurrencyManager.getRunningCountForWorktree).mockResolvedValue(2); // At capacity for maxConcurrency=2
|
||||
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 2);
|
||||
|
||||
await vi.advanceTimersByTimeAsync(6000);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
// Should not have executed any features because the worktree was already at capacity
|
||||
expect(mockExecuteFeature).not.toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('failure tracking', () => {
|
||||
it('trackFailureAndCheckPauseForProject returns true after threshold', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Track 3 failures (threshold)
|
||||
const result1 = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error 1',
|
||||
});
|
||||
expect(result1).toBe(false);
|
||||
|
||||
const result2 = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error 2',
|
||||
});
|
||||
expect(result2).toBe(false);
|
||||
|
||||
const result3 = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error 3',
|
||||
});
|
||||
expect(result3).toBe(true); // Should pause after 3
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('agent errors count as failures', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
const result = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Agent failed',
|
||||
});
|
||||
|
||||
// First error should not pause
|
||||
expect(result).toBe(false);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('clears failures on success (recordSuccessForProject)', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Add 2 failures
|
||||
coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error 1',
|
||||
});
|
||||
coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error 2',
|
||||
});
|
||||
|
||||
// Record success - should clear failures
|
||||
coordinator.recordSuccessForProject('/test/project');
|
||||
|
||||
// Next failure should return false (not hitting threshold)
|
||||
const result = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error 3',
|
||||
});
|
||||
expect(result).toBe(false);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('signalShouldPauseForProject emits event and stops loop', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
vi.mocked(mockEventBus.emitAutoModeEvent).mockClear();
|
||||
|
||||
coordinator.signalShouldPauseForProject('/test/project', {
|
||||
type: 'quota_exhausted',
|
||||
message: 'Rate limited',
|
||||
});
|
||||
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
|
||||
'auto_mode_paused_failures',
|
||||
expect.objectContaining({
|
||||
errorType: 'quota_exhausted',
|
||||
projectPath: '/test/project',
|
||||
})
|
||||
);
|
||||
|
||||
// Loop should be stopped
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', null)).toBe(false);
|
||||
});
|
||||
|
||||
it('quota/rate limit errors pause immediately', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
const result = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'quota_exhausted',
|
||||
message: 'API quota exceeded',
|
||||
});
|
||||
|
||||
expect(result).toBe(true); // Should pause immediately
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('rate_limit type also pauses immediately', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
const result = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'rate_limit',
|
||||
message: 'Rate limited',
|
||||
});
|
||||
|
||||
expect(result).toBe(true);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
});
|
||||
|
||||
describe('multiple projects', () => {
|
||||
it('runs concurrent loops for different projects', async () => {
|
||||
await coordinator.startAutoLoopForProject('/project-a', null, 1);
|
||||
await coordinator.startAutoLoopForProject('/project-b', null, 1);
|
||||
|
||||
expect(coordinator.isAutoLoopRunningForProject('/project-a', null)).toBe(true);
|
||||
expect(coordinator.isAutoLoopRunningForProject('/project-b', null)).toBe(true);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/project-a', null);
|
||||
await coordinator.stopAutoLoopForProject('/project-b', null);
|
||||
});
|
||||
|
||||
it('runs concurrent loops for different worktrees of same project', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
await coordinator.startAutoLoopForProject('/test/project', 'feature-branch', 1);
|
||||
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', null)).toBe(true);
|
||||
expect(coordinator.isAutoLoopRunningForProject('/test/project', 'feature-branch')).toBe(true);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
await coordinator.stopAutoLoopForProject('/test/project', 'feature-branch');
|
||||
});
|
||||
|
||||
it('stopping one loop does not affect others', async () => {
|
||||
await coordinator.startAutoLoopForProject('/project-a', null, 1);
|
||||
await coordinator.startAutoLoopForProject('/project-b', null, 1);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/project-a', null);
|
||||
|
||||
expect(coordinator.isAutoLoopRunningForProject('/project-a', null)).toBe(false);
|
||||
expect(coordinator.isAutoLoopRunningForProject('/project-b', null)).toBe(true);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/project-b', null);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getAutoLoopConfigForProject', () => {
|
||||
it('returns config when loop is running', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 5);
|
||||
|
||||
const config = coordinator.getAutoLoopConfigForProject('/test/project', null);
|
||||
|
||||
expect(config).toEqual({
|
||||
maxConcurrency: 5,
|
||||
useWorktrees: true,
|
||||
projectPath: '/test/project',
|
||||
branchName: null,
|
||||
});
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('returns null when no loop running', () => {
|
||||
const config = coordinator.getAutoLoopConfigForProject('/test/project', null);
|
||||
|
||||
expect(config).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('getRunningCountForWorktree', () => {
|
||||
it('delegates to ConcurrencyManager', async () => {
|
||||
vi.mocked(mockConcurrencyManager.getRunningCountForWorktree).mockResolvedValue(3);
|
||||
|
||||
const count = await coordinator.getRunningCountForWorktree('/test/project', null);
|
||||
|
||||
expect(count).toBe(3);
|
||||
expect(mockConcurrencyManager.getRunningCountForWorktree).toHaveBeenCalledWith(
|
||||
'/test/project',
|
||||
null
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe('resetFailureTrackingForProject', () => {
|
||||
it('clears consecutive failures and paused flag', async () => {
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
// Add failures
|
||||
coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error',
|
||||
});
|
||||
coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error',
|
||||
});
|
||||
|
||||
// Reset failure tracking
|
||||
coordinator.resetFailureTrackingForProject('/test/project');
|
||||
|
||||
// After the reset, three more failures should be required before pausing again
|
||||
const result1 = coordinator.trackFailureAndCheckPauseForProject('/test/project', {
|
||||
type: 'agent_error',
|
||||
message: 'Error',
|
||||
});
|
||||
expect(result1).toBe(false);
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
});
|
||||
|
||||
describe('edge cases', () => {
|
||||
it('handles null settingsService gracefully', async () => {
|
||||
const coordWithoutSettings = new AutoLoopCoordinator(
|
||||
mockEventBus,
|
||||
mockConcurrencyManager,
|
||||
null, // No settings service
|
||||
mockExecuteFeature,
|
||||
mockLoadPendingFeatures,
|
||||
mockSaveExecutionState,
|
||||
mockClearExecutionState,
|
||||
mockResetStuckFeatures,
|
||||
mockIsFeatureFinished,
|
||||
mockIsFeatureRunning
|
||||
);
|
||||
|
||||
// Should use default concurrency
|
||||
const result = await coordWithoutSettings.startAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(result).toBe(1); // DEFAULT_MAX_CONCURRENCY
|
||||
|
||||
await coordWithoutSettings.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('handles resetStuckFeatures error gracefully', async () => {
|
||||
vi.mocked(mockResetStuckFeatures).mockRejectedValue(new Error('Reset failed'));
|
||||
|
||||
// Should not throw
|
||||
await coordinator.startAutoLoopForProject('/test/project', null, 1);
|
||||
|
||||
expect(mockResetStuckFeatures).toHaveBeenCalled();
|
||||
|
||||
await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
});
|
||||
|
||||
it('trackFailureAndCheckPauseForProject returns false when no loop', () => {
|
||||
const result = coordinator.trackFailureAndCheckPauseForProject('/nonexistent', {
|
||||
type: 'agent_error',
|
||||
message: 'Error',
|
||||
});
|
||||
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('signalShouldPauseForProject does nothing when no loop', () => {
|
||||
// Should not throw
|
||||
coordinator.signalShouldPauseForProject('/nonexistent', {
|
||||
type: 'quota_exhausted',
|
||||
message: 'Error',
|
||||
});
|
||||
|
||||
expect(mockEventBus.emitAutoModeEvent).not.toHaveBeenCalledWith(
|
||||
'auto_mode_paused_failures',
|
||||
expect.anything()
|
||||
);
|
||||
});
|
||||
|
||||
it('does not emit stopped event when loop was not running', async () => {
|
||||
const result = await coordinator.stopAutoLoopForProject('/test/project', null);
|
||||
|
||||
expect(result).toBe(0);
|
||||
expect(mockEventBus.emitAutoModeEvent).not.toHaveBeenCalledWith(
|
||||
'auto_mode_stopped',
|
||||
expect.anything()
|
||||
);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -0,0 +1,346 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import { AutoModeService } from '@/services/auto-mode-service.js';
|
||||
|
||||
describe('auto-mode-service.ts - Planning Mode', () => {
|
||||
let service: AutoModeService;
|
||||
const mockEvents = {
|
||||
subscribe: vi.fn(),
|
||||
emit: vi.fn(),
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
service = new AutoModeService(mockEvents as any);
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
// Clean up any running processes
|
||||
await service.stopAutoLoop().catch(() => {});
|
||||
});
|
||||
|
||||
describe('getPlanningPromptPrefix', () => {
|
||||
// Access private method through any cast for testing
|
||||
const getPlanningPromptPrefix = (svc: any, feature: any) => {
|
||||
return svc.getPlanningPromptPrefix(feature);
|
||||
};
|
||||
|
||||
it('should return empty string for skip mode', async () => {
|
||||
const feature = { id: 'test', planningMode: 'skip' as const };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toBe('');
|
||||
});
|
||||
|
||||
it('should return empty string when planningMode is undefined', async () => {
|
||||
const feature = { id: 'test' };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toBe('');
|
||||
});
|
||||
|
||||
it('should return lite prompt for lite mode without approval', async () => {
|
||||
const feature = {
|
||||
id: 'test',
|
||||
planningMode: 'lite' as const,
|
||||
requirePlanApproval: false,
|
||||
};
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('Planning Phase (Lite Mode)');
|
||||
expect(result).toContain('[PLAN_GENERATED]');
|
||||
expect(result).toContain('Feature Request');
|
||||
});
|
||||
|
||||
it('should return lite_with_approval prompt for lite mode with approval', async () => {
|
||||
const feature = {
|
||||
id: 'test',
|
||||
planningMode: 'lite' as const,
|
||||
requirePlanApproval: true,
|
||||
};
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('## Planning Phase (Lite Mode)');
|
||||
expect(result).toContain('[SPEC_GENERATED]');
|
||||
expect(result).toContain(
|
||||
'DO NOT proceed with implementation until you receive explicit approval'
|
||||
);
|
||||
});
|
||||
|
||||
it('should return spec prompt for spec mode', async () => {
|
||||
const feature = {
|
||||
id: 'test',
|
||||
planningMode: 'spec' as const,
|
||||
};
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('## Specification Phase (Spec Mode)');
|
||||
expect(result).toContain('```tasks');
|
||||
expect(result).toContain('T001');
|
||||
expect(result).toContain('[TASK_START]');
|
||||
expect(result).toContain('[TASK_COMPLETE]');
|
||||
});
|
||||
|
||||
it('should return full prompt for full mode', async () => {
|
||||
const feature = {
|
||||
id: 'test',
|
||||
planningMode: 'full' as const,
|
||||
};
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('## Full Specification Phase (Full SDD Mode)');
|
||||
expect(result).toContain('Phase 1: Foundation');
|
||||
expect(result).toContain('Phase 2: Core Implementation');
|
||||
expect(result).toContain('Phase 3: Integration & Testing');
|
||||
});
|
||||
|
||||
it('should include the separator and Feature Request header', async () => {
|
||||
const feature = {
|
||||
id: 'test',
|
||||
planningMode: 'spec' as const,
|
||||
};
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('---');
|
||||
expect(result).toContain('## Feature Request');
|
||||
});
|
||||
|
||||
it('should instruct agent to NOT output exploration text', async () => {
|
||||
const modes = ['lite', 'spec', 'full'] as const;
|
||||
for (const mode of modes) {
|
||||
const feature = { id: 'test', planningMode: mode };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
// All modes should have the IMPORTANT instruction about not outputting exploration text
|
||||
expect(result).toContain('IMPORTANT: Do NOT output exploration text');
|
||||
expect(result).toContain('Silently analyze the codebase first');
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
describe('parseTasksFromSpec (via module)', () => {
|
||||
// parseTasksFromSpec is a module-level helper in auto-mode-service.ts
// and is not imported here; these tests only document its expected input format
|
||||
it('should parse tasks from a valid tasks block', async () => {
|
||||
// parseTasksFromSpec cannot be called directly from this test,
// so this case only checks the shape of a representative tasks block
|
||||
const specContent = `
|
||||
## Specification
|
||||
|
||||
\`\`\`tasks
|
||||
- [ ] T001: Create user model | File: src/models/user.ts
|
||||
- [ ] T002: Add API endpoint | File: src/routes/users.ts
|
||||
- [ ] T003: Write unit tests | File: tests/user.test.ts
|
||||
\`\`\`
|
||||
`;
|
||||
// The assertions below sanity-check that the fixture contains the expected task IDs
|
||||
expect(specContent).toContain('T001');
|
||||
expect(specContent).toContain('T002');
|
||||
expect(specContent).toContain('T003');
|
||||
});
|
||||
|
||||
it('should handle tasks block with phases', () => {
|
||||
const specContent = `
|
||||
\`\`\`tasks
|
||||
## Phase 1: Setup
|
||||
- [ ] T001: Initialize project | File: package.json
|
||||
- [ ] T002: Configure TypeScript | File: tsconfig.json
|
||||
|
||||
## Phase 2: Implementation
|
||||
- [ ] T003: Create main module | File: src/index.ts
|
||||
\`\`\`
|
||||
`;
|
||||
expect(specContent).toContain('Phase 1');
|
||||
expect(specContent).toContain('Phase 2');
|
||||
expect(specContent).toContain('T001');
|
||||
expect(specContent).toContain('T003');
|
||||
});
|
||||
});
|
||||
|
||||
describe('plan approval flow', () => {
|
||||
it('should track pending approvals correctly', () => {
|
||||
expect(service.hasPendingApproval('test-feature')).toBe(false);
|
||||
});
|
||||
|
||||
it('should allow cancelling non-existent approval without error', () => {
|
||||
expect(() => service.cancelPlanApproval('non-existent')).not.toThrow();
|
||||
});
|
||||
|
||||
it('should return running features count after stop', async () => {
|
||||
const count = await service.stopAutoLoop();
|
||||
expect(count).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('resolvePlanApproval', () => {
|
||||
it('should return error when no pending approval exists', async () => {
|
||||
const result = await service.resolvePlanApproval(
|
||||
'non-existent-feature',
|
||||
true,
|
||||
undefined,
|
||||
undefined,
|
||||
undefined
|
||||
);
|
||||
expect(result.success).toBe(false);
|
||||
expect(result.error).toContain('No pending approval');
|
||||
});
|
||||
|
||||
it('should handle approval with edited plan', async () => {
|
||||
// Without a pending approval, this should fail gracefully
|
||||
const result = await service.resolvePlanApproval(
|
||||
'test-feature',
|
||||
true,
|
||||
'Edited plan content',
|
||||
undefined,
|
||||
undefined
|
||||
);
|
||||
expect(result.success).toBe(false);
|
||||
});
|
||||
|
||||
it('should handle rejection with feedback', async () => {
|
||||
const result = await service.resolvePlanApproval(
|
||||
'test-feature',
|
||||
false,
|
||||
undefined,
|
||||
'Please add more details',
|
||||
undefined
|
||||
);
|
||||
expect(result.success).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('buildFeaturePrompt', () => {
|
||||
const defaultTaskExecutionPrompts = {
|
||||
implementationInstructions: 'Test implementation instructions',
|
||||
playwrightVerificationInstructions: 'Test playwright instructions',
|
||||
};
|
||||
|
||||
const buildFeaturePrompt = (
|
||||
svc: any,
|
||||
feature: any,
|
||||
taskExecutionPrompts = defaultTaskExecutionPrompts
|
||||
) => {
|
||||
return svc.buildFeaturePrompt(feature, taskExecutionPrompts);
|
||||
};
|
||||
|
||||
it('should include feature ID and description', () => {
|
||||
const feature = {
|
||||
id: 'feat-123',
|
||||
description: 'Add user authentication',
|
||||
};
|
||||
const result = buildFeaturePrompt(service, feature);
|
||||
expect(result).toContain('feat-123');
|
||||
expect(result).toContain('Add user authentication');
|
||||
});
|
||||
|
||||
it('should include specification when present', () => {
|
||||
const feature = {
|
||||
id: 'feat-123',
|
||||
description: 'Test feature',
|
||||
spec: 'Detailed specification here',
|
||||
};
|
||||
const result = buildFeaturePrompt(service, feature);
|
||||
expect(result).toContain('Specification:');
|
||||
expect(result).toContain('Detailed specification here');
|
||||
});
|
||||
|
||||
it('should include image paths when present', () => {
|
||||
const feature = {
|
||||
id: 'feat-123',
|
||||
description: 'Test feature',
|
||||
imagePaths: [
|
||||
{ path: '/tmp/image1.png', filename: 'image1.png', mimeType: 'image/png' },
|
||||
'/tmp/image2.jpg',
|
||||
],
|
||||
};
|
||||
const result = buildFeaturePrompt(service, feature);
|
||||
expect(result).toContain('Context Images Attached');
|
||||
expect(result).toContain('image1.png');
|
||||
expect(result).toContain('/tmp/image2.jpg');
|
||||
});
|
||||
|
||||
it('should include implementation instructions', () => {
|
||||
const feature = {
|
||||
id: 'feat-123',
|
||||
description: 'Test feature',
|
||||
};
|
||||
const result = buildFeaturePrompt(service, feature);
|
||||
// The prompt should include the implementation instructions passed to it
|
||||
expect(result).toContain('Test implementation instructions');
|
||||
expect(result).toContain('Test playwright instructions');
|
||||
});
|
||||
});
|
||||
|
||||
describe('extractTitleFromDescription', () => {
|
||||
const extractTitle = (svc: any, description: string) => {
|
||||
return svc.extractTitleFromDescription(description);
|
||||
};
|
||||
|
||||
it("should return 'Untitled Feature' for empty description", () => {
|
||||
expect(extractTitle(service, '')).toBe('Untitled Feature');
|
||||
expect(extractTitle(service, ' ')).toBe('Untitled Feature');
|
||||
});
|
||||
|
||||
it('should return first line if under 60 characters', () => {
|
||||
const description = 'Add user login\nWith email validation';
|
||||
expect(extractTitle(service, description)).toBe('Add user login');
|
||||
});
|
||||
|
||||
it('should truncate long first lines to 60 characters', () => {
|
||||
const description =
|
||||
'This is a very long feature description that exceeds the sixty character limit significantly';
|
||||
const result = extractTitle(service, description);
|
||||
expect(result.length).toBe(60);
|
||||
expect(result).toContain('...');
|
||||
});
|
||||
});
|
||||
|
||||
describe('PLANNING_PROMPTS structure', () => {
|
||||
const getPlanningPromptPrefix = (svc: any, feature: any) => {
|
||||
return svc.getPlanningPromptPrefix(feature);
|
||||
};
|
||||
|
||||
it('should have all required planning modes', async () => {
|
||||
const modes = ['lite', 'spec', 'full'] as const;
|
||||
for (const mode of modes) {
|
||||
const feature = { id: 'test', planningMode: mode };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result.length).toBeGreaterThan(100);
|
||||
}
|
||||
});
|
||||
|
||||
it('lite prompt should include correct structure', async () => {
|
||||
const feature = { id: 'test', planningMode: 'lite' as const };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('Goal');
|
||||
expect(result).toContain('Approach');
|
||||
expect(result).toContain('Files to Touch');
|
||||
expect(result).toContain('Tasks');
|
||||
expect(result).toContain('Risks');
|
||||
});
|
||||
|
||||
it('spec prompt should include task format instructions', async () => {
|
||||
const feature = { id: 'test', planningMode: 'spec' as const };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('Problem');
|
||||
expect(result).toContain('Solution');
|
||||
expect(result).toContain('Acceptance Criteria');
|
||||
expect(result).toContain('GIVEN-WHEN-THEN');
|
||||
expect(result).toContain('Implementation Tasks');
|
||||
expect(result).toContain('Verification');
|
||||
});
|
||||
|
||||
it('full prompt should include phases', async () => {
|
||||
const feature = { id: 'test', planningMode: 'full' as const };
|
||||
const result = await getPlanningPromptPrefix(service, feature);
|
||||
expect(result).toContain('1. **Problem Statement**');
|
||||
expect(result).toContain('2. **User Story**');
|
||||
expect(result).toContain('4. **Technical Context**');
|
||||
expect(result).toContain('5. **Non-Goals**');
|
||||
expect(result).toContain('Phase 1');
|
||||
expect(result).toContain('Phase 2');
|
||||
expect(result).toContain('Phase 3');
|
||||
});
|
||||
});
|
||||
|
||||
describe('status management', () => {
|
||||
it('should report correct status', () => {
|
||||
const status = service.getStatus();
|
||||
expect(status.runningFeatures).toEqual([]);
|
||||
expect(status.isRunning).toBe(false);
|
||||
expect(status.runningCount).toBe(0);
|
||||
});
|
||||
});
|
||||
});
|
||||
318
apps/server/tests/unit/services/auto-mode-service.test.ts
Normal file
@@ -0,0 +1,318 @@
|
||||
import { describe, it, expect, vi, beforeEach } from 'vitest';
|
||||
import { AutoModeService } from '@/services/auto-mode-service.js';
|
||||
import type { Feature } from '@automaker/types';
|
||||
|
||||
describe('auto-mode-service.ts', () => {
|
||||
let service: AutoModeService;
|
||||
const mockEvents = {
|
||||
subscribe: vi.fn(),
|
||||
emit: vi.fn(),
|
||||
};
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
service = new AutoModeService(mockEvents as any);
|
||||
});
|
||||
|
||||
describe('constructor', () => {
|
||||
it('should initialize with event emitter', () => {
|
||||
expect(service).toBeDefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('startAutoLoop', () => {
|
||||
it('should throw if auto mode is already running', async () => {
|
||||
// Start first loop
|
||||
const promise1 = service.startAutoLoop('/test/project', 3);
|
||||
|
||||
// Try to start second loop
|
||||
await expect(service.startAutoLoop('/test/project', 3)).rejects.toThrow('already running');
|
||||
|
||||
// Cleanup
|
||||
await service.stopAutoLoop();
|
||||
await promise1.catch(() => {});
|
||||
});
|
||||
|
||||
it('should emit auto mode start event', async () => {
|
||||
const promise = service.startAutoLoop('/test/project', 3);
|
||||
|
||||
// Give it time to emit the event
|
||||
await new Promise((resolve) => setTimeout(resolve, 10));
|
||||
|
||||
expect(mockEvents.emit).toHaveBeenCalledWith(
|
||||
expect.any(String),
|
||||
expect.objectContaining({
|
||||
message: expect.stringContaining('Auto mode started'),
|
||||
})
|
||||
);
|
||||
|
||||
// Cleanup
|
||||
await service.stopAutoLoop();
|
||||
await promise.catch(() => {});
|
||||
});
|
||||
});
|
||||
|
||||
describe('stopAutoLoop', () => {
|
||||
it('should stop the auto loop', async () => {
|
||||
const promise = service.startAutoLoop('/test/project', 3);
|
||||
|
||||
const runningCount = await service.stopAutoLoop();
|
||||
|
||||
expect(runningCount).toBe(0);
|
||||
await promise.catch(() => {});
|
||||
});
|
||||
|
||||
it('should return 0 when not running', async () => {
|
||||
const runningCount = await service.stopAutoLoop();
|
||||
expect(runningCount).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getRunningAgents', () => {
|
||||
// Helper to access private runningFeatures Map
|
||||
const getRunningFeaturesMap = (svc: AutoModeService) =>
|
||||
(svc as any).runningFeatures as Map<
|
||||
string,
|
||||
{ featureId: string; projectPath: string; isAutoMode: boolean }
|
||||
>;
|
||||
|
||||
// Helper to get the featureLoader and mock its get method
|
||||
const mockFeatureLoaderGet = (svc: AutoModeService, mockFn: ReturnType<typeof vi.fn>) => {
|
||||
(svc as any).featureLoader = { get: mockFn };
|
||||
};
|
||||
|
||||
it('should return empty array when no agents are running', async () => {
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
expect(result).toEqual([]);
|
||||
});
|
||||
|
||||
it('should return running agents with basic info when feature data is not available', async () => {
|
||||
// Arrange: Add a running feature to the Map
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-123', {
|
||||
featureId: 'feature-123',
|
||||
projectPath: '/test/project/path',
|
||||
isAutoMode: true,
|
||||
});
|
||||
|
||||
// Mock featureLoader.get to return null (feature not found)
|
||||
const getMock = vi.fn().mockResolvedValue(null);
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result).toHaveLength(1);
|
||||
expect(result[0]).toEqual({
|
||||
featureId: 'feature-123',
|
||||
projectPath: '/test/project/path',
|
||||
projectName: 'path',
|
||||
isAutoMode: true,
|
||||
title: undefined,
|
||||
description: undefined,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return running agents with title and description when feature data is available', async () => {
|
||||
// Arrange
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-456', {
|
||||
featureId: 'feature-456',
|
||||
projectPath: '/home/user/my-project',
|
||||
isAutoMode: false,
|
||||
});
|
||||
|
||||
const mockFeature: Partial<Feature> = {
|
||||
id: 'feature-456',
|
||||
title: 'Implement user authentication',
|
||||
description: 'Add login and signup functionality',
|
||||
category: 'auth',
|
||||
};
|
||||
|
||||
const getMock = vi.fn().mockResolvedValue(mockFeature);
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result).toHaveLength(1);
|
||||
expect(result[0]).toEqual({
|
||||
featureId: 'feature-456',
|
||||
projectPath: '/home/user/my-project',
|
||||
projectName: 'my-project',
|
||||
isAutoMode: false,
|
||||
title: 'Implement user authentication',
|
||||
description: 'Add login and signup functionality',
|
||||
});
|
||||
expect(getMock).toHaveBeenCalledWith('/home/user/my-project', 'feature-456');
|
||||
});
|
||||
|
||||
it('should handle multiple running agents', async () => {
|
||||
// Arrange
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-1', {
|
||||
featureId: 'feature-1',
|
||||
projectPath: '/project-a',
|
||||
isAutoMode: true,
|
||||
});
|
||||
runningFeaturesMap.set('feature-2', {
|
||||
featureId: 'feature-2',
|
||||
projectPath: '/project-b',
|
||||
isAutoMode: false,
|
||||
});
|
||||
|
||||
const getMock = vi
|
||||
.fn()
|
||||
.mockResolvedValueOnce({
|
||||
id: 'feature-1',
|
||||
title: 'Feature One',
|
||||
description: 'Description one',
|
||||
})
|
||||
.mockResolvedValueOnce({
|
||||
id: 'feature-2',
|
||||
title: 'Feature Two',
|
||||
description: 'Description two',
|
||||
});
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result).toHaveLength(2);
|
||||
expect(getMock).toHaveBeenCalledTimes(2);
|
||||
});
|
||||
|
||||
it('should silently handle errors when fetching feature data', async () => {
|
||||
// Arrange
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-error', {
|
||||
featureId: 'feature-error',
|
||||
projectPath: '/project-error',
|
||||
isAutoMode: true,
|
||||
});
|
||||
|
||||
const getMock = vi.fn().mockRejectedValue(new Error('Database connection failed'));
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act - should not throw
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result).toHaveLength(1);
|
||||
expect(result[0]).toEqual({
|
||||
featureId: 'feature-error',
|
||||
projectPath: '/project-error',
|
||||
projectName: 'project-error',
|
||||
isAutoMode: true,
|
||||
title: undefined,
|
||||
description: undefined,
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle feature with title but no description', async () => {
|
||||
// Arrange
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-title-only', {
|
||||
featureId: 'feature-title-only',
|
||||
projectPath: '/project',
|
||||
isAutoMode: false,
|
||||
});
|
||||
|
||||
const getMock = vi.fn().mockResolvedValue({
|
||||
id: 'feature-title-only',
|
||||
title: 'Only Title',
|
||||
// description is undefined
|
||||
});
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result[0].title).toBe('Only Title');
|
||||
expect(result[0].description).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should handle feature with description but no title', async () => {
|
||||
// Arrange
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-desc-only', {
|
||||
featureId: 'feature-desc-only',
|
||||
projectPath: '/project',
|
||||
isAutoMode: false,
|
||||
});
|
||||
|
||||
const getMock = vi.fn().mockResolvedValue({
|
||||
id: 'feature-desc-only',
|
||||
description: 'Only description, no title',
|
||||
// title is undefined
|
||||
});
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result[0].title).toBeUndefined();
|
||||
expect(result[0].description).toBe('Only description, no title');
|
||||
});
|
||||
|
||||
it('should extract projectName from nested paths correctly', async () => {
|
||||
// Arrange
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
runningFeaturesMap.set('feature-nested', {
|
||||
featureId: 'feature-nested',
|
||||
projectPath: '/home/user/workspace/projects/my-awesome-project',
|
||||
isAutoMode: true,
|
||||
});
|
||||
|
||||
const getMock = vi.fn().mockResolvedValue(null);
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const result = await service.getRunningAgents();
|
||||
|
||||
// Assert
|
||||
expect(result[0].projectName).toBe('my-awesome-project');
|
||||
});
|
||||
|
||||
it('should fetch feature data in parallel for multiple agents', async () => {
|
||||
// Arrange: Add multiple running features
|
||||
const runningFeaturesMap = getRunningFeaturesMap(service);
|
||||
for (let i = 1; i <= 5; i++) {
|
||||
runningFeaturesMap.set(`feature-${i}`, {
|
||||
featureId: `feature-${i}`,
|
||||
projectPath: `/project-${i}`,
|
||||
isAutoMode: i % 2 === 0,
|
||||
});
|
||||
}
|
||||
|
||||
// Track call order
|
||||
const callOrder: string[] = [];
|
||||
const getMock = vi.fn().mockImplementation(async (projectPath: string, featureId: string) => {
|
||||
callOrder.push(featureId);
|
||||
// Simulate async delay to verify parallel execution
|
||||
await new Promise((resolve) => setTimeout(resolve, 10));
|
||||
return { id: featureId, title: `Title for ${featureId}` };
|
||||
});
|
||||
mockFeatureLoaderGet(service, getMock);
|
||||
|
||||
// Act
|
||||
const startTime = Date.now();
|
||||
const result = await service.getRunningAgents();
|
||||
const duration = Date.now() - startTime;
|
||||
|
||||
// Assert
|
||||
expect(result).toHaveLength(5);
|
||||
expect(getMock).toHaveBeenCalledTimes(5);
|
||||
// If executed in parallel, total time should be ~10ms (one batch)
|
||||
// If sequential, it would be ~50ms (5 * 10ms)
|
||||
// Allow some buffer for execution overhead
|
||||
expect(duration).toBeLessThan(40);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -1,11 +1,18 @@
import { describe, it, expect } from 'vitest';
import type { ParsedTask } from '@automaker/types';

/**
 * Test the task parsing logic by reimplementing the parsing functions
 * These mirror the logic in auto-mode-service.ts parseTasksFromSpec and parseTaskLine
 */

interface ParsedTask {
  id: string;
  description: string;
  filePath?: string;
  phase?: string;
  status: 'pending' | 'in_progress' | 'completed';
}

function parseTaskLine(line: string, currentPhase?: string): ParsedTask | null {
  // Match pattern: - [ ] T###: Description | File: path
  const taskMatch = line.match(/- \[ \] (T\d{3}):\s*([^|]+)(?:\|\s*File:\s*(.+))?$/);
@@ -335,236 +342,4 @@ Some other text
      expect(fullModeOutput).toContain('[SPEC_GENERATED]');
    });
  });

  describe('detectSpecFallback - non-Claude model support', () => {
    /**
     * Reimplementation of detectSpecFallback for testing
     * This mirrors the logic in auto-mode-service.ts for detecting specs
     * when the [SPEC_GENERATED] marker is missing (common with non-Claude models)
     */
    function detectSpecFallback(text: string): boolean {
      // Check for key structural elements of a spec
      const hasTasksBlock = /```tasks[\s\S]*```/.test(text);
      const hasTaskLines = /- \[ \] T\d{3}:/.test(text);

      // Check for common spec sections (case-insensitive)
      const hasAcceptanceCriteria = /acceptance criteria/i.test(text);
      const hasTechnicalContext = /technical context/i.test(text);
      const hasProblemStatement = /problem statement/i.test(text);
      const hasUserStory = /user story/i.test(text);
      // Additional patterns for different model outputs
      const hasGoal = /\*\*Goal\*\*:/i.test(text);
      const hasSolution = /\*\*Solution\*\*:/i.test(text);
      const hasImplementation = /implementation\s*(plan|steps|approach)/i.test(text);
      const hasOverview = /##\s*(overview|summary)/i.test(text);

      // Spec is detected if we have task structure AND at least some spec content
      const hasTaskStructure = hasTasksBlock || hasTaskLines;
      const hasSpecContent =
        hasAcceptanceCriteria ||
        hasTechnicalContext ||
        hasProblemStatement ||
        hasUserStory ||
        hasGoal ||
        hasSolution ||
        hasImplementation ||
        hasOverview;

      return hasTaskStructure && hasSpecContent;
    }

    it('should detect spec with tasks block and acceptance criteria', () => {
      const content = `
## Acceptance Criteria
- GIVEN a user, WHEN they login, THEN they see the dashboard

\`\`\`tasks
- [ ] T001: Create login form | File: src/Login.tsx
\`\`\`
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with task lines and problem statement', () => {
      const content = `
## Problem Statement
Users cannot currently log in to the application.

## Implementation Plan
- [ ] T001: Add authentication endpoint
- [ ] T002: Create login UI
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with Goal section (lite planning mode style)', () => {
      const content = `
**Goal**: Implement user authentication

**Solution**: Use JWT tokens for session management

- [ ] T001: Setup auth middleware
- [ ] T002: Create token service
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with User Story format', () => {
      const content = `
## User Story
As a user, I want to reset my password, so that I can regain access.

## Technical Context
This will modify the auth module.

\`\`\`tasks
- [ ] T001: Add reset endpoint
\`\`\`
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with Overview section', () => {
      const content = `
## Overview
This feature adds dark mode support.

\`\`\`tasks
- [ ] T001: Add theme toggle
- [ ] T002: Update CSS variables
\`\`\`
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with Summary section', () => {
      const content = `
## Summary
Adding a new dashboard component.

- [ ] T001: Create dashboard layout
- [ ] T002: Add widgets
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with implementation plan', () => {
      const content = `
## Implementation Plan
We will add the feature in two phases.

- [ ] T001: Phase 1 setup
- [ ] T002: Phase 2 implementation
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with implementation steps', () => {
      const content = `
## Implementation Steps
Follow these steps:

- [ ] T001: Step one
- [ ] T002: Step two
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with implementation approach', () => {
      const content = `
## Implementation Approach
We will use a modular approach.

- [ ] T001: Create modules
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should NOT detect spec without task structure', () => {
      const content = `
## Problem Statement
Users cannot log in.

## Acceptance Criteria
- GIVEN a user, WHEN they try to login, THEN it works
`;
      expect(detectSpecFallback(content)).toBe(false);
    });

    it('should NOT detect spec without spec content sections', () => {
      const content = `
Here are some tasks:

- [ ] T001: Do something
- [ ] T002: Do another thing
`;
      expect(detectSpecFallback(content)).toBe(false);
    });

    it('should NOT detect random text as spec', () => {
      const content = 'Just some random text without any structure';
      expect(detectSpecFallback(content)).toBe(false);
    });

    it('should handle case-insensitive matching for spec sections', () => {
      const content = `
## ACCEPTANCE CRITERIA
All caps section header

- [ ] T001: Task
`;
      expect(detectSpecFallback(content)).toBe(true);

      const content2 = `
## acceptance criteria
Lower case section header

- [ ] T001: Task
`;
      expect(detectSpecFallback(content2)).toBe(true);
    });

    it('should detect OpenAI-style output without explicit marker', () => {
      // Non-Claude models may format specs differently but still have the key elements
      const openAIStyleOutput = `
# Feature Specification: User Authentication

**Goal**: Allow users to securely log into the application

**Solution**: Implement JWT-based authentication with refresh tokens

## Acceptance Criteria
1. Users can log in with email and password
2. Invalid credentials show error message
3. Sessions persist across page refreshes

## Implementation Tasks
\`\`\`tasks
- [ ] T001: Create auth service | File: src/services/auth.ts
- [ ] T002: Build login component | File: src/components/Login.tsx
- [ ] T003: Add protected routes | File: src/App.tsx
\`\`\`
`;
      expect(detectSpecFallback(openAIStyleOutput)).toBe(true);
    });

    it('should detect Gemini-style output without explicit marker', () => {
      const geminiStyleOutput = `
## Overview

This specification describes the implementation of a user profile page.

## Technical Context
- Framework: React
- State: Redux

## Tasks

- [ ] T001: Create ProfilePage component
- [ ] T002: Add profile API endpoint
- [ ] T003: Style the profile page
`;
      expect(detectSpecFallback(geminiStyleOutput)).toBe(true);
    });
  });
});

@@ -1,609 +0,0 @@
import { describe, it, expect, beforeEach, vi, type Mock } from 'vitest';
import {
  ConcurrencyManager,
  type RunningFeature,
  type GetCurrentBranchFn,
} from '@/services/concurrency-manager.js';

describe('ConcurrencyManager', () => {
  let manager: ConcurrencyManager;
  let mockGetCurrentBranch: Mock<GetCurrentBranchFn>;

  beforeEach(() => {
    vi.clearAllMocks();
    // Default: primary branch is 'main'
    mockGetCurrentBranch = vi.fn().mockResolvedValue('main');
    manager = new ConcurrencyManager(mockGetCurrentBranch);
  });

  describe('acquire', () => {
    it('should create new entry with leaseCount: 1 on first acquire', () => {
      const result = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      expect(result.featureId).toBe('feature-1');
      expect(result.projectPath).toBe('/test/project');
      expect(result.isAutoMode).toBe(true);
      expect(result.leaseCount).toBe(1);
      expect(result.worktreePath).toBeNull();
      expect(result.branchName).toBeNull();
      expect(result.startTime).toBeDefined();
      expect(result.abortController).toBeInstanceOf(AbortController);
    });

    it('should increment leaseCount when allowReuse is true for existing feature', () => {
      // First acquire
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      // Second acquire with allowReuse
      const result = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      expect(result.leaseCount).toBe(2);
    });

    it('should throw "already running" when allowReuse is false for existing feature', () => {
      // First acquire
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      // Second acquire without allowReuse
      expect(() =>
        manager.acquire({
          featureId: 'feature-1',
          projectPath: '/test/project',
          isAutoMode: true,
        })
      ).toThrow('already running');
    });

    it('should throw "already running" when allowReuse is explicitly false', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      expect(() =>
        manager.acquire({
          featureId: 'feature-1',
          projectPath: '/test/project',
          isAutoMode: true,
          allowReuse: false,
        })
      ).toThrow('already running');
    });

    it('should use provided abortController', () => {
      const customAbortController = new AbortController();

      const result = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        abortController: customAbortController,
      });

      expect(result.abortController).toBe(customAbortController);
    });

    it('should return the existing entry when allowReuse is true', () => {
      const first = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      const second = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      // Should be the same object reference
      expect(second).toBe(first);
    });

    it('should allow multiple nested acquire calls with allowReuse', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      const result = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      expect(result.leaseCount).toBe(3);
    });
  });

  describe('release', () => {
    it('should decrement leaseCount on release', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      manager.release('feature-1');

      const entry = manager.getRunningFeature('feature-1');
      expect(entry?.leaseCount).toBe(1);
    });

    it('should delete entry when leaseCount reaches 0', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.release('feature-1');

      expect(manager.isRunning('feature-1')).toBe(false);
      expect(manager.getRunningFeature('feature-1')).toBeUndefined();
    });

    it('should delete entry immediately when force is true regardless of leaseCount', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      // leaseCount is 3, but force should still delete
      manager.release('feature-1', { force: true });

      expect(manager.isRunning('feature-1')).toBe(false);
    });

    it('should do nothing when releasing non-existent feature', () => {
      // Should not throw
      manager.release('non-existent-feature');
      manager.release('non-existent-feature', { force: true });
    });

    it('should only delete entry after all leases are released', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
        allowReuse: true,
      });

      // leaseCount is 3
      manager.release('feature-1');
      expect(manager.isRunning('feature-1')).toBe(true);

      manager.release('feature-1');
      expect(manager.isRunning('feature-1')).toBe(true);

      manager.release('feature-1');
      expect(manager.isRunning('feature-1')).toBe(false);
    });
  });

  describe('isRunning', () => {
    it('should return false when feature is not running', () => {
      expect(manager.isRunning('feature-1')).toBe(false);
    });

    it('should return true when feature is running', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      expect(manager.isRunning('feature-1')).toBe(true);
    });

    it('should return false after feature is released', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.release('feature-1');

      expect(manager.isRunning('feature-1')).toBe(false);
    });
  });

  describe('getRunningFeature', () => {
    it('should return undefined for non-existent feature', () => {
      expect(manager.getRunningFeature('feature-1')).toBeUndefined();
    });

    it('should return the RunningFeature entry', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      const entry = manager.getRunningFeature('feature-1');
      expect(entry).toBeDefined();
      expect(entry?.featureId).toBe('feature-1');
      expect(entry?.projectPath).toBe('/test/project');
    });
  });

  describe('getRunningCount (project-level)', () => {
    it('should return 0 when no features are running', () => {
      expect(manager.getRunningCount('/test/project')).toBe(0);
    });

    it('should count features for specific project', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/test/project',
        isAutoMode: false,
      });

      expect(manager.getRunningCount('/test/project')).toBe(2);
    });

    it('should only count features for the specified project', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/project-a',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/project-b',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-3',
        projectPath: '/project-a',
        isAutoMode: false,
      });

      expect(manager.getRunningCount('/project-a')).toBe(2);
      expect(manager.getRunningCount('/project-b')).toBe(1);
      expect(manager.getRunningCount('/project-c')).toBe(0);
    });
  });

  describe('getRunningCountForWorktree', () => {
    it('should return 0 when no features are running', async () => {
      const count = await manager.getRunningCountForWorktree('/test/project', null);
      expect(count).toBe(0);
    });

    it('should count features with null branchName as main worktree', async () => {
      const entry = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      // entry.branchName is null by default

      const count = await manager.getRunningCountForWorktree('/test/project', null);
      expect(count).toBe(1);
    });

    it('should count features matching primary branch as main worktree', async () => {
      mockGetCurrentBranch.mockResolvedValue('main');

      const entry = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-1', { branchName: 'main' });

      const count = await manager.getRunningCountForWorktree('/test/project', null);
      expect(count).toBe(1);
    });

    it('should count features with exact branch match for feature worktrees', async () => {
      const entry = manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-1', { branchName: 'feature-branch' });

      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      // feature-2 has null branchName

      const featureBranchCount = await manager.getRunningCountForWorktree(
        '/test/project',
        'feature-branch'
      );
      expect(featureBranchCount).toBe(1);

      const mainWorktreeCount = await manager.getRunningCountForWorktree('/test/project', null);
      expect(mainWorktreeCount).toBe(1);
    });

    it('should respect branch normalization (main is treated as null)', async () => {
      mockGetCurrentBranch.mockResolvedValue('main');

      // Feature with branchName 'main' should count as main worktree
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-1', { branchName: 'main' });

      // Feature with branchName null should also count as main worktree
      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      const mainCount = await manager.getRunningCountForWorktree('/test/project', null);
      expect(mainCount).toBe(2);
    });

    it('should filter by both projectPath and branchName', async () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/project-a',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-1', { branchName: 'feature-x' });

      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/project-b',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-2', { branchName: 'feature-x' });

      const countA = await manager.getRunningCountForWorktree('/project-a', 'feature-x');
      const countB = await manager.getRunningCountForWorktree('/project-b', 'feature-x');

      expect(countA).toBe(1);
      expect(countB).toBe(1);
    });
  });

  describe('getAllRunning', () => {
    it('should return empty array when no features are running', () => {
      expect(manager.getAllRunning()).toEqual([]);
    });

    it('should return array with all running features', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/project-a',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/project-b',
        isAutoMode: false,
      });

      const running = manager.getAllRunning();
      expect(running).toHaveLength(2);
      expect(running.map((r) => r.featureId)).toContain('feature-1');
      expect(running.map((r) => r.featureId)).toContain('feature-2');
    });

    it('should include feature metadata', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/project-a',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-1', { model: 'claude-sonnet-4', provider: 'claude' });

      const running = manager.getAllRunning();
      expect(running[0].model).toBe('claude-sonnet-4');
      expect(running[0].provider).toBe('claude');
    });
  });

  describe('updateRunningFeature', () => {
    it('should update worktreePath and branchName', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.updateRunningFeature('feature-1', {
        worktreePath: '/worktrees/feature-1',
        branchName: 'feature-1-branch',
      });

      const entry = manager.getRunningFeature('feature-1');
      expect(entry?.worktreePath).toBe('/worktrees/feature-1');
      expect(entry?.branchName).toBe('feature-1-branch');
    });

    it('should update model and provider', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.updateRunningFeature('feature-1', {
        model: 'claude-opus-4-5-20251101',
        provider: 'claude',
      });

      const entry = manager.getRunningFeature('feature-1');
      expect(entry?.model).toBe('claude-opus-4-5-20251101');
      expect(entry?.provider).toBe('claude');
    });

    it('should do nothing for non-existent feature', () => {
      // Should not throw
      manager.updateRunningFeature('non-existent', { model: 'test' });
    });

    it('should preserve other properties when updating partial fields', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      const original = manager.getRunningFeature('feature-1');
      const originalStartTime = original?.startTime;

      manager.updateRunningFeature('feature-1', { model: 'claude-sonnet-4' });

      const updated = manager.getRunningFeature('feature-1');
      expect(updated?.startTime).toBe(originalStartTime);
      expect(updated?.projectPath).toBe('/test/project');
      expect(updated?.isAutoMode).toBe(true);
      expect(updated?.model).toBe('claude-sonnet-4');
    });
  });

  describe('edge cases', () => {
    it('should handle multiple features for same project', () => {
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      manager.acquire({
        featureId: 'feature-3',
        projectPath: '/test/project',
        isAutoMode: false,
      });

      expect(manager.getRunningCount('/test/project')).toBe(3);
      expect(manager.isRunning('feature-1')).toBe(true);
      expect(manager.isRunning('feature-2')).toBe(true);
      expect(manager.isRunning('feature-3')).toBe(true);
    });

    it('should handle features across different worktrees', async () => {
      // Main worktree feature
      manager.acquire({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: true,
      });

      // Worktree A feature
      manager.acquire({
        featureId: 'feature-2',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-2', {
        worktreePath: '/worktrees/a',
        branchName: 'branch-a',
      });

      // Worktree B feature
      manager.acquire({
        featureId: 'feature-3',
        projectPath: '/test/project',
        isAutoMode: true,
      });
      manager.updateRunningFeature('feature-3', {
        worktreePath: '/worktrees/b',
        branchName: 'branch-b',
      });

      expect(await manager.getRunningCountForWorktree('/test/project', null)).toBe(1);
      expect(await manager.getRunningCountForWorktree('/test/project', 'branch-a')).toBe(1);
      expect(await manager.getRunningCountForWorktree('/test/project', 'branch-b')).toBe(1);
      expect(manager.getRunningCount('/test/project')).toBe(3);
    });

    it('should return 0 counts and empty arrays for empty state', () => {
      expect(manager.getRunningCount('/any/project')).toBe(0);
      expect(manager.getAllRunning()).toEqual([]);
      expect(manager.isRunning('any-feature')).toBe(false);
      expect(manager.getRunningFeature('any-feature')).toBeUndefined();
    });
  });
});
File diff suppressed because it is too large
@@ -1,659 +0,0 @@
import { describe, it, expect, beforeEach, vi, type Mock } from 'vitest';
import { FeatureStateManager } from '@/services/feature-state-manager.js';
import type { Feature } from '@automaker/types';
import type { EventEmitter } from '@/lib/events.js';
import type { FeatureLoader } from '@/services/feature-loader.js';
import * as secureFs from '@/lib/secure-fs.js';
import { atomicWriteJson, readJsonWithRecovery } from '@automaker/utils';
import { getFeatureDir, getFeaturesDir } from '@automaker/platform';
import { getNotificationService } from '@/services/notification-service.js';

// Mock dependencies
vi.mock('@/lib/secure-fs.js', () => ({
  readFile: vi.fn(),
  readdir: vi.fn(),
}));

vi.mock('@automaker/utils', async (importOriginal) => {
  const actual = await importOriginal<typeof import('@automaker/utils')>();
  return {
    ...actual,
    atomicWriteJson: vi.fn(),
    readJsonWithRecovery: vi.fn(),
    logRecoveryWarning: vi.fn(),
  };
});

vi.mock('@automaker/platform', () => ({
  getFeatureDir: vi.fn(),
  getFeaturesDir: vi.fn(),
}));

vi.mock('@/services/notification-service.js', () => ({
  getNotificationService: vi.fn(() => ({
    createNotification: vi.fn(),
  })),
}));

describe('FeatureStateManager', () => {
  let manager: FeatureStateManager;
  let mockEvents: EventEmitter;
  let mockFeatureLoader: FeatureLoader;

  const mockFeature: Feature = {
    id: 'feature-123',
    name: 'Test Feature',
    title: 'Test Feature Title',
    description: 'A test feature',
    status: 'pending',
    createdAt: '2024-01-01T00:00:00Z',
    updatedAt: '2024-01-01T00:00:00Z',
  };

  beforeEach(() => {
    vi.clearAllMocks();

    mockEvents = {
      emit: vi.fn(),
      subscribe: vi.fn(() => vi.fn()),
    };

    mockFeatureLoader = {
      syncFeatureToAppSpec: vi.fn(),
    } as unknown as FeatureLoader;

    manager = new FeatureStateManager(mockEvents, mockFeatureLoader);

    // Default mocks
    (getFeatureDir as Mock).mockReturnValue('/project/.automaker/features/feature-123');
    (getFeaturesDir as Mock).mockReturnValue('/project/.automaker/features');
  });

  describe('loadFeature', () => {
    it('should load feature from disk', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({ data: mockFeature, recovered: false });

      const feature = await manager.loadFeature('/project', 'feature-123');

      expect(feature).toEqual(mockFeature);
      expect(getFeatureDir).toHaveBeenCalledWith('/project', 'feature-123');
      expect(readJsonWithRecovery).toHaveBeenCalledWith(
        '/project/.automaker/features/feature-123/feature.json',
        null,
        expect.objectContaining({ autoRestore: true })
      );
    });

    it('should return null if feature does not exist', async () => {
      (readJsonWithRecovery as Mock).mockRejectedValue(new Error('ENOENT'));

      const feature = await manager.loadFeature('/project', 'non-existent');

      expect(feature).toBeNull();
    });

    it('should return null if feature JSON is invalid', async () => {
      // readJsonWithRecovery returns null as the default value when JSON is invalid
      (readJsonWithRecovery as Mock).mockResolvedValue({ data: null, recovered: false });

      const feature = await manager.loadFeature('/project', 'feature-123');

      expect(feature).toBeNull();
    });
  });

  describe('updateFeatureStatus', () => {
    it('should update feature status and persist to disk', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'in_progress');

      expect(atomicWriteJson).toHaveBeenCalled();
      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.status).toBe('in_progress');
      expect(savedFeature.updatedAt).toBeDefined();
    });

    it('should set justFinishedAt when status is waiting_approval', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'waiting_approval');

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.justFinishedAt).toBeDefined();
    });

    it('should clear justFinishedAt when status is not waiting_approval', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature, justFinishedAt: '2024-01-01T00:00:00Z' },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'in_progress');

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.justFinishedAt).toBeUndefined();
    });

    it('should create notification for waiting_approval status', async () => {
      const mockNotificationService = { createNotification: vi.fn() };
      (getNotificationService as Mock).mockReturnValue(mockNotificationService);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'waiting_approval');

      expect(mockNotificationService.createNotification).toHaveBeenCalledWith(
        expect.objectContaining({
          type: 'feature_waiting_approval',
          featureId: 'feature-123',
        })
      );
    });

    it('should create notification for verified status', async () => {
      const mockNotificationService = { createNotification: vi.fn() };
      (getNotificationService as Mock).mockReturnValue(mockNotificationService);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'verified');

      expect(mockNotificationService.createNotification).toHaveBeenCalledWith(
        expect.objectContaining({
          type: 'feature_verified',
          featureId: 'feature-123',
        })
      );
    });

    it('should sync to app_spec for completed status', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'completed');

      expect(mockFeatureLoader.syncFeatureToAppSpec).toHaveBeenCalledWith(
        '/project',
        expect.objectContaining({ status: 'completed' })
      );
    });

    it('should sync to app_spec for verified status', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeatureStatus('/project', 'feature-123', 'verified');

      expect(mockFeatureLoader.syncFeatureToAppSpec).toHaveBeenCalled();
    });

    it('should not fail if sync to app_spec fails', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });
      (mockFeatureLoader.syncFeatureToAppSpec as Mock).mockRejectedValue(new Error('Sync failed'));

      // Should not throw
      await expect(
        manager.updateFeatureStatus('/project', 'feature-123', 'completed')
      ).resolves.not.toThrow();
    });

    it('should handle feature not found gracefully', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: null,
        recovered: true,
        source: 'default',
      });

      // Should not throw
      await expect(
        manager.updateFeatureStatus('/project', 'non-existent', 'in_progress')
      ).resolves.not.toThrow();
      expect(atomicWriteJson).not.toHaveBeenCalled();
    });
  });

  describe('markFeatureInterrupted', () => {
    it('should mark feature as interrupted', async () => {
      (secureFs.readFile as Mock).mockResolvedValue(
        JSON.stringify({ ...mockFeature, status: 'in_progress' })
      );
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature, status: 'in_progress' },
        recovered: false,
        source: 'main',
      });

      await manager.markFeatureInterrupted('/project', 'feature-123', 'server shutdown');

      expect(atomicWriteJson).toHaveBeenCalled();
      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.status).toBe('interrupted');
    });

    it('should preserve pipeline_* statuses', async () => {
      (secureFs.readFile as Mock).mockResolvedValue(
        JSON.stringify({ ...mockFeature, status: 'pipeline_step_1' })
      );

      await manager.markFeatureInterrupted('/project', 'feature-123', 'server shutdown');

      // Should NOT call atomicWriteJson because pipeline status is preserved
      expect(atomicWriteJson).not.toHaveBeenCalled();
    });

    it('should preserve pipeline_complete status', async () => {
      (secureFs.readFile as Mock).mockResolvedValue(
        JSON.stringify({ ...mockFeature, status: 'pipeline_complete' })
      );

      await manager.markFeatureInterrupted('/project', 'feature-123');

      expect(atomicWriteJson).not.toHaveBeenCalled();
    });

    it('should handle feature not found', async () => {
      (secureFs.readFile as Mock).mockRejectedValue(new Error('ENOENT'));
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: null,
        recovered: true,
        source: 'default',
      });

      // Should not throw
      await expect(
        manager.markFeatureInterrupted('/project', 'non-existent')
      ).resolves.not.toThrow();
    });
  });

  describe('resetStuckFeatures', () => {
    it('should reset in_progress features to ready if has approved plan', async () => {
      const stuckFeature: Feature = {
        ...mockFeature,
        status: 'in_progress',
        planSpec: { status: 'approved', version: 1, reviewedByUser: true },
      };

      (secureFs.readdir as Mock).mockResolvedValue([
        { name: 'feature-123', isDirectory: () => true },
      ]);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: stuckFeature,
        recovered: false,
        source: 'main',
      });

      await manager.resetStuckFeatures('/project');

      expect(atomicWriteJson).toHaveBeenCalled();
      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.status).toBe('ready');
    });

    it('should reset in_progress features to backlog if no approved plan', async () => {
      const stuckFeature: Feature = {
        ...mockFeature,
        status: 'in_progress',
        planSpec: undefined,
      };

      (secureFs.readdir as Mock).mockResolvedValue([
        { name: 'feature-123', isDirectory: () => true },
      ]);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: stuckFeature,
        recovered: false,
        source: 'main',
      });

      await manager.resetStuckFeatures('/project');

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.status).toBe('backlog');
    });

    it('should reset generating planSpec status to pending', async () => {
      const stuckFeature: Feature = {
        ...mockFeature,
        status: 'pending',
        planSpec: { status: 'generating', version: 1, reviewedByUser: false },
      };

      (secureFs.readdir as Mock).mockResolvedValue([
        { name: 'feature-123', isDirectory: () => true },
      ]);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: stuckFeature,
        recovered: false,
        source: 'main',
      });

      await manager.resetStuckFeatures('/project');

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.planSpec?.status).toBe('pending');
    });

    it('should reset in_progress tasks to pending', async () => {
      const stuckFeature: Feature = {
        ...mockFeature,
        status: 'pending',
        planSpec: {
          status: 'approved',
          version: 1,
          reviewedByUser: true,
          tasks: [
            { id: 'task-1', title: 'Task 1', status: 'completed', description: '' },
            { id: 'task-2', title: 'Task 2', status: 'in_progress', description: '' },
            { id: 'task-3', title: 'Task 3', status: 'pending', description: '' },
          ],
          currentTaskId: 'task-2',
        },
      };

      (secureFs.readdir as Mock).mockResolvedValue([
        { name: 'feature-123', isDirectory: () => true },
      ]);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: stuckFeature,
        recovered: false,
        source: 'main',
      });

      await manager.resetStuckFeatures('/project');

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.planSpec?.tasks?.[1].status).toBe('pending');
      expect(savedFeature.planSpec?.currentTaskId).toBeUndefined();
    });

    it('should skip non-directory entries', async () => {
      (secureFs.readdir as Mock).mockResolvedValue([
        { name: 'feature-123', isDirectory: () => true },
        { name: 'some-file.txt', isDirectory: () => false },
      ]);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: mockFeature,
        recovered: false,
        source: 'main',
      });

      await manager.resetStuckFeatures('/project');

      // Should only process the directory
      expect(readJsonWithRecovery).toHaveBeenCalledTimes(1);
    });

    it('should handle features directory not existing', async () => {
      const error = new Error('ENOENT') as NodeJS.ErrnoException;
      error.code = 'ENOENT';
      (secureFs.readdir as Mock).mockRejectedValue(error);

      // Should not throw
      await expect(manager.resetStuckFeatures('/project')).resolves.not.toThrow();
    });

    it('should not update feature if nothing is stuck', async () => {
      const normalFeature: Feature = {
        ...mockFeature,
        status: 'completed',
        planSpec: { status: 'approved', version: 1, reviewedByUser: true },
      };

      (secureFs.readdir as Mock).mockResolvedValue([
        { name: 'feature-123', isDirectory: () => true },
      ]);
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: normalFeature,
        recovered: false,
        source: 'main',
      });

      await manager.resetStuckFeatures('/project');

      expect(atomicWriteJson).not.toHaveBeenCalled();
    });
  });

  describe('updateFeaturePlanSpec', () => {
    it('should update planSpec with partial updates', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeaturePlanSpec('/project', 'feature-123', { status: 'approved' });

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.planSpec?.status).toBe('approved');
    });

    it('should initialize planSpec if not exists', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature, planSpec: undefined },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeaturePlanSpec('/project', 'feature-123', { status: 'approved' });

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.planSpec).toBeDefined();
      expect(savedFeature.planSpec?.version).toBe(1);
    });

    it('should increment version when content changes', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: {
          ...mockFeature,
          planSpec: {
            status: 'pending',
            version: 2,
            content: 'old content',
            reviewedByUser: false,
          },
        },
        recovered: false,
        source: 'main',
      });

      await manager.updateFeaturePlanSpec('/project', 'feature-123', { content: 'new content' });

      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.planSpec?.version).toBe(3);
    });
  });

  describe('saveFeatureSummary', () => {
    it('should save summary and emit event', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await manager.saveFeatureSummary('/project', 'feature-123', 'This is the summary');

      // Verify persisted
      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.summary).toBe('This is the summary');

      // Verify event emitted AFTER persistence
      expect(mockEvents.emit).toHaveBeenCalledWith('auto-mode:event', {
        type: 'auto_mode_summary',
        featureId: 'feature-123',
        projectPath: '/project',
        summary: 'This is the summary',
      });
    });

    it('should handle feature not found', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: null,
        recovered: true,
        source: 'default',
      });

      await expect(
        manager.saveFeatureSummary('/project', 'non-existent', 'Summary')
      ).resolves.not.toThrow();
      expect(atomicWriteJson).not.toHaveBeenCalled();
      expect(mockEvents.emit).not.toHaveBeenCalled();
    });
  });

  describe('updateTaskStatus', () => {
    it('should update task status and emit event', async () => {
      const featureWithTasks: Feature = {
        ...mockFeature,
        planSpec: {
          status: 'approved',
          version: 1,
          reviewedByUser: true,
          tasks: [
            { id: 'task-1', title: 'Task 1', status: 'pending', description: '' },
            { id: 'task-2', title: 'Task 2', status: 'pending', description: '' },
          ],
        },
      };

      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: featureWithTasks,
        recovered: false,
        source: 'main',
      });

      await manager.updateTaskStatus('/project', 'feature-123', 'task-1', 'completed');

      // Verify persisted
      const savedFeature = (atomicWriteJson as Mock).mock.calls[0][1] as Feature;
      expect(savedFeature.planSpec?.tasks?.[0].status).toBe('completed');

      // Verify event emitted
      expect(mockEvents.emit).toHaveBeenCalledWith('auto-mode:event', {
        type: 'auto_mode_task_status',
        featureId: 'feature-123',
        projectPath: '/project',
        taskId: 'task-1',
        status: 'completed',
        tasks: expect.any(Array),
      });
    });

    it('should handle task not found', async () => {
      const featureWithTasks: Feature = {
        ...mockFeature,
        planSpec: {
          status: 'approved',
          version: 1,
          reviewedByUser: true,
          tasks: [{ id: 'task-1', title: 'Task 1', status: 'pending', description: '' }],
        },
      };

      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: featureWithTasks,
        recovered: false,
        source: 'main',
      });

      await manager.updateTaskStatus('/project', 'feature-123', 'non-existent-task', 'completed');

      // Should not persist or emit if task not found
      expect(atomicWriteJson).not.toHaveBeenCalled();
      expect(mockEvents.emit).not.toHaveBeenCalled();
    });

    it('should handle feature without tasks', async () => {
      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });

      await expect(
        manager.updateTaskStatus('/project', 'feature-123', 'task-1', 'completed')
      ).resolves.not.toThrow();
      expect(atomicWriteJson).not.toHaveBeenCalled();
    });
  });

  describe('persist BEFORE emit ordering', () => {
    it('saveFeatureSummary should persist before emitting event', async () => {
      const callOrder: string[] = [];

      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: { ...mockFeature },
        recovered: false,
        source: 'main',
      });
      (atomicWriteJson as Mock).mockImplementation(async () => {
        callOrder.push('persist');
      });
      (mockEvents.emit as Mock).mockImplementation(() => {
        callOrder.push('emit');
      });

      await manager.saveFeatureSummary('/project', 'feature-123', 'Summary');

      expect(callOrder).toEqual(['persist', 'emit']);
    });

    it('updateTaskStatus should persist before emitting event', async () => {
      const callOrder: string[] = [];

      const featureWithTasks: Feature = {
        ...mockFeature,
        planSpec: {
          status: 'approved',
          version: 1,
          reviewedByUser: true,
          tasks: [{ id: 'task-1', title: 'Task 1', status: 'pending', description: '' }],
        },
      };

      (readJsonWithRecovery as Mock).mockResolvedValue({
        data: featureWithTasks,
        recovered: false,
        source: 'main',
      });
      (atomicWriteJson as Mock).mockImplementation(async () => {
        callOrder.push('persist');
      });
      (mockEvents.emit as Mock).mockImplementation(() => {
        callOrder.push('emit');
      });

      await manager.updateTaskStatus('/project', 'feature-123', 'task-1', 'completed');

      expect(callOrder).toEqual(['persist', 'emit']);
    });
  });
});
@@ -15,7 +7,7 @@ import type {
} from '@automaker/types';
import { ProviderFactory } from '@/providers/provider-factory.js';

// Create shared mock instances for assertions using vi.hoisted
// Create a shared mock logger instance for assertions using vi.hoisted
const mockLogger = vi.hoisted(() => ({
  info: vi.fn(),
  error: vi.fn(),
@@ -23,13 +23,6 @@ const mockLogger = vi.hoisted(() => ({
  debug: vi.fn(),
}));

const mockCreateChatOptions = vi.hoisted(() =>
  vi.fn(() => ({
    model: 'claude-sonnet-4-20250514',
    systemPrompt: 'test prompt',
  }))
);

// Mock dependencies
vi.mock('@/lib/secure-fs.js');
vi.mock('@automaker/platform');
@@ -44,7 +37,10 @@ vi.mock('@automaker/utils', async () => {
});
vi.mock('@/providers/provider-factory.js');
vi.mock('@/lib/sdk-options.js', () => ({
  createChatOptions: mockCreateChatOptions,
  createChatOptions: vi.fn(() => ({
    model: 'claude-sonnet-4-20250514',
    systemPrompt: 'test prompt',
  })),
  validateWorkingDirectory: vi.fn(),
}));

@@ -790,143 +786,6 @@ describe('IdeationService', () => {
        service.generateSuggestions(testProjectPath, 'non-existent', 'features', 5)
      ).rejects.toThrow('Prompt non-existent not found');
    });

    it('should include app spec context when useAppSpec is enabled', async () => {
      const mockAppSpec = `
<project_specification>
<project_name>Test Project</project_name>
<overview>A test application for unit testing</overview>
<core_capabilities>
<capability>User authentication</capability>
<capability>Data visualization</capability>
</core_capabilities>
<implemented_features>
<feature>
<name>Login System</name>
<description>Basic auth with email/password</description>
</feature>
</implemented_features>
</project_specification>
`;

      vi.mocked(platform.getAppSpecPath).mockReturnValue('/test/project/.automaker/app_spec.txt');

      // First call returns app spec, subsequent calls return empty JSON
      vi.mocked(secureFs.readFile)
        .mockResolvedValueOnce(mockAppSpec)
        .mockResolvedValue(JSON.stringify({}));

      const mockProvider = {
        executeQuery: vi.fn().mockReturnValue({
          async *[Symbol.asyncIterator]() {
            yield {
              type: 'result',
              subtype: 'success',
              result: JSON.stringify([{ title: 'Test', description: 'Test' }]),
            };
          },
        }),
      };
      vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);

      const prompts = service.getAllPrompts();
      await service.generateSuggestions(testProjectPath, prompts[0].id, 'feature', 5, {
        useAppSpec: true,
        useContextFiles: false,
        useMemoryFiles: false,
        useExistingFeatures: false,
        useExistingIdeas: false,
      });

      // Verify createChatOptions was called with systemPrompt containing app spec info
      expect(mockCreateChatOptions).toHaveBeenCalled();
      const chatOptionsCall = mockCreateChatOptions.mock.calls[0][0];
      expect(chatOptionsCall.systemPrompt).toContain('Test Project');
      expect(chatOptionsCall.systemPrompt).toContain('A test application for unit testing');
      expect(chatOptionsCall.systemPrompt).toContain('User authentication');
      expect(chatOptionsCall.systemPrompt).toContain('Login System');
    });

    it('should exclude app spec context when useAppSpec is disabled', async () => {
      const mockAppSpec = `
<project_specification>
<project_name>Hidden Project</project_name>
<overview>This should not appear</overview>
</project_specification>
`;

      vi.mocked(platform.getAppSpecPath).mockReturnValue('/test/project/.automaker/app_spec.txt');
      vi.mocked(secureFs.readFile).mockResolvedValue(mockAppSpec);

      const mockProvider = {
        executeQuery: vi.fn().mockReturnValue({
          async *[Symbol.asyncIterator]() {
            yield {
              type: 'result',
              subtype: 'success',
              result: JSON.stringify([{ title: 'Test', description: 'Test' }]),
            };
          },
        }),
      };
      vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);

      const prompts = service.getAllPrompts();
      await service.generateSuggestions(testProjectPath, prompts[0].id, 'feature', 5, {
        useAppSpec: false,
        useContextFiles: false,
        useMemoryFiles: false,
        useExistingFeatures: false,
        useExistingIdeas: false,
      });

      // Verify createChatOptions was called with systemPrompt NOT containing app spec info
      expect(mockCreateChatOptions).toHaveBeenCalled();
      const chatOptionsCall = mockCreateChatOptions.mock.calls[0][0];
      expect(chatOptionsCall.systemPrompt).not.toContain('Hidden Project');
      expect(chatOptionsCall.systemPrompt).not.toContain('This should not appear');
    });

    it('should handle missing app spec file gracefully', async () => {
      vi.mocked(platform.getAppSpecPath).mockReturnValue('/test/project/.automaker/app_spec.txt');

      const enoentError = new Error('ENOENT: no such file or directory') as NodeJS.ErrnoException;
      enoentError.code = 'ENOENT';

      // First call fails with ENOENT for app spec, subsequent calls return empty JSON
      vi.mocked(secureFs.readFile)
        .mockRejectedValueOnce(enoentError)
        .mockResolvedValue(JSON.stringify({}));

      const mockProvider = {
        executeQuery: vi.fn().mockReturnValue({
          async *[Symbol.asyncIterator]() {
            yield {
              type: 'result',
              subtype: 'success',
              result: JSON.stringify([{ title: 'Test', description: 'Test' }]),
            };
          },
        }),
      };
      vi.mocked(ProviderFactory.getProviderForModel).mockReturnValue(mockProvider as any);

      const prompts = service.getAllPrompts();

      // Should not throw
      await expect(
        service.generateSuggestions(testProjectPath, prompts[0].id, 'feature', 5, {
          useAppSpec: true,
          useContextFiles: false,
          useMemoryFiles: false,
          useExistingFeatures: false,
          useExistingIdeas: false,
        })
      ).resolves.toBeDefined();

      // Should not log warning for ENOENT
      expect(mockLogger.warn).not.toHaveBeenCalled();
    });
  });
});
});
});

File diff suppressed because it is too large
@@ -1,458 +0,0 @@
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { PlanApprovalService } from '@/services/plan-approval-service.js';
import type { TypedEventBus } from '@/services/typed-event-bus.js';
import type { FeatureStateManager } from '@/services/feature-state-manager.js';
import type { SettingsService } from '@/services/settings-service.js';
import type { Feature } from '@automaker/types';

describe('PlanApprovalService', () => {
  let service: PlanApprovalService;
  let mockEventBus: TypedEventBus;
  let mockFeatureStateManager: FeatureStateManager;
  let mockSettingsService: SettingsService | null;

  beforeEach(() => {
    vi.useFakeTimers();

    mockEventBus = {
      emitAutoModeEvent: vi.fn(),
      emit: vi.fn(),
      subscribe: vi.fn(() => vi.fn()),
      getUnderlyingEmitter: vi.fn(),
    } as unknown as TypedEventBus;

    mockFeatureStateManager = {
      loadFeature: vi.fn(),
      updateFeatureStatus: vi.fn(),
      updateFeaturePlanSpec: vi.fn(),
    } as unknown as FeatureStateManager;

    mockSettingsService = {
      getProjectSettings: vi.fn().mockResolvedValue({}),
    } as unknown as SettingsService;

    service = new PlanApprovalService(mockEventBus, mockFeatureStateManager, mockSettingsService);
  });

  afterEach(() => {
    vi.useRealTimers();
    vi.clearAllMocks();
  });

  // Helper to flush pending promises
  const flushPromises = () => vi.runAllTimersAsync();

  describe('waitForApproval', () => {
    it('should create pending entry and return Promise', async () => {
      const approvalPromise = service.waitForApproval('feature-1', '/project');
      // Flush async operations so the approval is registered
      await vi.advanceTimersByTimeAsync(0);

      expect(service.hasPendingApproval('feature-1')).toBe(true);
      expect(approvalPromise).toBeInstanceOf(Promise);
    });

    it('should timeout and reject after configured period', async () => {
      const approvalPromise = service.waitForApproval('feature-1', '/project');
      // Flush the async initialization
      await vi.advanceTimersByTimeAsync(0);

      // Advance time by 30 minutes
      await vi.advanceTimersByTimeAsync(30 * 60 * 1000);

      await expect(approvalPromise).rejects.toThrow(
        'Plan approval timed out after 30 minutes - feature execution cancelled'
      );
      expect(service.hasPendingApproval('feature-1')).toBe(false);
    });

    it('should use configured timeout from project settings', async () => {
      // Configure 10 minute timeout
      vi.mocked(mockSettingsService!.getProjectSettings).mockResolvedValue({
        planApprovalTimeoutMs: 10 * 60 * 1000,
      } as never);

      const approvalPromise = service.waitForApproval('feature-1', '/project');
      // Flush the async initialization
      await vi.advanceTimersByTimeAsync(0);

      // Advance time by 10 minutes - should timeout
      await vi.advanceTimersByTimeAsync(10 * 60 * 1000);

      await expect(approvalPromise).rejects.toThrow(
        'Plan approval timed out after 10 minutes - feature execution cancelled'
      );
    });

    it('should fall back to default timeout when settings service is null', async () => {
      // Create service without settings service
      const serviceNoSettings = new PlanApprovalService(
        mockEventBus,
        mockFeatureStateManager,
        null
      );

      const approvalPromise = serviceNoSettings.waitForApproval('feature-1', '/project');
      // Flush async
      await vi.advanceTimersByTimeAsync(0);

      // Advance by 29 minutes - should not timeout yet
      await vi.advanceTimersByTimeAsync(29 * 60 * 1000);
      expect(serviceNoSettings.hasPendingApproval('feature-1')).toBe(true);

      // Advance by 1 more minute (total 30) - should timeout
      await vi.advanceTimersByTimeAsync(1 * 60 * 1000);

      await expect(approvalPromise).rejects.toThrow('Plan approval timed out');
    });
  });

  describe('resolveApproval', () => {
    it('should resolve Promise correctly when approved=true', async () => {
      const approvalPromise = service.waitForApproval('feature-1', '/project');
      await vi.advanceTimersByTimeAsync(0);

      const result = await service.resolveApproval('feature-1', true, {
        editedPlan: 'Updated plan',
        feedback: 'Looks good!',
      });

      expect(result).toEqual({ success: true });

      const approval = await approvalPromise;
      expect(approval).toEqual({
        approved: true,
        editedPlan: 'Updated plan',
        feedback: 'Looks good!',
      });

      expect(service.hasPendingApproval('feature-1')).toBe(false);
    });

    it('should resolve Promise correctly when approved=false', async () => {
      const approvalPromise = service.waitForApproval('feature-1', '/project');
      await vi.advanceTimersByTimeAsync(0);

      const result = await service.resolveApproval('feature-1', false, {
        feedback: 'Need more details',
      });

      expect(result).toEqual({ success: true });

      const approval = await approvalPromise;
      expect(approval).toEqual({
        approved: false,
        editedPlan: undefined,
        feedback: 'Need more details',
      });
    });

    it('should emit plan_rejected event when rejected with feedback', async () => {
      service.waitForApproval('feature-1', '/project');
      await vi.advanceTimersByTimeAsync(0);

      await service.resolveApproval('feature-1', false, {
        feedback: 'Need changes',
      });

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('plan_rejected', {
        featureId: 'feature-1',
        projectPath: '/project',
        feedback: 'Need changes',
      });
    });

    it('should update planSpec status to approved when approved', async () => {
      service.waitForApproval('feature-1', '/project');
      await vi.advanceTimersByTimeAsync(0);

      await service.resolveApproval('feature-1', true, {
        editedPlan: 'New plan content',
      });

      expect(mockFeatureStateManager.updateFeaturePlanSpec).toHaveBeenCalledWith(
        '/project',
        'feature-1',
        expect.objectContaining({
          status: 'approved',
          reviewedByUser: true,
          content: 'New plan content',
        })
      );
    });

    it('should update planSpec status to rejected when rejected', async () => {
      service.waitForApproval('feature-1', '/project');
      await vi.advanceTimersByTimeAsync(0);

      await service.resolveApproval('feature-1', false);

      expect(mockFeatureStateManager.updateFeaturePlanSpec).toHaveBeenCalledWith(
        '/project',
        'feature-1',
        expect.objectContaining({
          status: 'rejected',
          reviewedByUser: true,
        })
      );
    });

    it('should clear timeout on normal resolution (no double-fire)', async () => {
      const approvalPromise = service.waitForApproval('feature-1', '/project');
      await vi.advanceTimersByTimeAsync(0);

      // Advance 10 minutes then resolve
      await vi.advanceTimersByTimeAsync(10 * 60 * 1000);
      await service.resolveApproval('feature-1', true);

      const approval = await approvalPromise;
      expect(approval.approved).toBe(true);

      // Advance past the 30 minute mark - should NOT reject
      await vi.advanceTimersByTimeAsync(25 * 60 * 1000);

      // If timeout wasn't cleared, we'd see issues
      expect(service.hasPendingApproval('feature-1')).toBe(false);
    });

    it('should return error when no pending approval and no recovery possible', async () => {
      const result = await service.resolveApproval('non-existent', true);

      expect(result).toEqual({
        success: false,
        error: 'No pending approval for feature non-existent',
      });
    });
  });

  describe('recovery path', () => {
    it('should return needsRecovery=true when planSpec.status is generated and approved', async () => {
      const mockFeature: Feature = {
        id: 'feature-1',
        name: 'Test Feature',
        title: 'Test Feature',
        description: 'Test',
        status: 'in_progress',
        createdAt: '2024-01-01T00:00:00Z',
        updatedAt: '2024-01-01T00:00:00Z',
        planSpec: {
          status: 'generated',
          version: 1,
          reviewedByUser: false,
          content: 'Original plan',
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(mockFeatureStateManager.loadFeature).mockResolvedValue(mockFeature);
|
||||
|
||||
// No pending approval in Map, but feature has generated planSpec
|
||||
const result = await service.resolveApproval('feature-1', true, {
|
||||
projectPath: '/project',
|
||||
editedPlan: 'Edited plan',
|
||||
});
|
||||
|
||||
expect(result).toEqual({ success: true, needsRecovery: true });
|
||||
|
||||
// Should update planSpec
|
||||
expect(mockFeatureStateManager.updateFeaturePlanSpec).toHaveBeenCalledWith(
|
||||
'/project',
|
||||
'feature-1',
|
||||
expect.objectContaining({
|
||||
status: 'approved',
|
||||
content: 'Edited plan',
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
it('should handle recovery rejection correctly', async () => {
|
||||
const mockFeature: Feature = {
|
||||
id: 'feature-1',
|
||||
name: 'Test Feature',
|
||||
title: 'Test Feature',
|
||||
description: 'Test',
|
||||
status: 'in_progress',
|
||||
createdAt: '2024-01-01T00:00:00Z',
|
||||
updatedAt: '2024-01-01T00:00:00Z',
|
||||
planSpec: {
|
||||
status: 'generated',
|
||||
version: 1,
|
||||
reviewedByUser: false,
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(mockFeatureStateManager.loadFeature).mockResolvedValue(mockFeature);
|
||||
|
||||
const result = await service.resolveApproval('feature-1', false, {
|
||||
projectPath: '/project',
|
||||
feedback: 'Rejected via recovery',
|
||||
});
|
||||
|
||||
expect(result).toEqual({ success: true }); // No needsRecovery for rejections
|
||||
|
||||
// Should update planSpec to rejected
|
||||
expect(mockFeatureStateManager.updateFeaturePlanSpec).toHaveBeenCalledWith(
|
||||
'/project',
|
||||
'feature-1',
|
||||
expect.objectContaining({
|
||||
status: 'rejected',
|
||||
reviewedByUser: true,
|
||||
})
|
||||
);
|
||||
|
||||
// Should update feature status to backlog
|
||||
expect(mockFeatureStateManager.updateFeatureStatus).toHaveBeenCalledWith(
|
||||
'/project',
|
||||
'feature-1',
|
||||
'backlog'
|
||||
);
|
||||
|
||||
// Should emit plan_rejected event
|
||||
expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith('plan_rejected', {
|
||||
featureId: 'feature-1',
|
||||
projectPath: '/project',
|
||||
feedback: 'Rejected via recovery',
|
||||
});
|
||||
});
|
||||
|
||||
it('should not trigger recovery when planSpec.status is not generated', async () => {
|
||||
const mockFeature: Feature = {
|
||||
id: 'feature-1',
|
||||
name: 'Test Feature',
|
||||
title: 'Test Feature',
|
||||
description: 'Test',
|
||||
status: 'pending',
|
||||
createdAt: '2024-01-01T00:00:00Z',
|
||||
updatedAt: '2024-01-01T00:00:00Z',
|
||||
planSpec: {
|
||||
status: 'pending', // Not 'generated'
|
||||
version: 1,
|
||||
reviewedByUser: false,
|
||||
},
|
||||
};
|
||||
|
||||
vi.mocked(mockFeatureStateManager.loadFeature).mockResolvedValue(mockFeature);
|
||||
|
||||
const result = await service.resolveApproval('feature-1', true, {
|
||||
projectPath: '/project',
|
||||
});
|
||||
|
||||
expect(result).toEqual({
|
||||
success: false,
|
||||
error: 'No pending approval for feature feature-1',
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('cancelApproval', () => {
|
||||
it('should reject pending Promise with cancellation error', async () => {
|
||||
const approvalPromise = service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
|
||||
service.cancelApproval('feature-1');
|
||||
|
||||
await expect(approvalPromise).rejects.toThrow(
|
||||
'Plan approval cancelled - feature was stopped'
|
||||
);
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(false);
|
||||
});
|
||||
|
||||
it('should clear timeout on cancellation', async () => {
|
||||
const approvalPromise = service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
|
||||
service.cancelApproval('feature-1');
|
||||
|
||||
// Verify rejection happened
|
||||
await expect(approvalPromise).rejects.toThrow();
|
||||
|
||||
// Advance past timeout - should not cause any issues
|
||||
await vi.advanceTimersByTimeAsync(35 * 60 * 1000);
|
||||
|
||||
// No additional errors should occur
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(false);
|
||||
});
|
||||
|
||||
it('should do nothing when no pending approval exists', () => {
|
||||
// Should not throw
|
||||
expect(() => service.cancelApproval('non-existent')).not.toThrow();
|
||||
});
|
||||
});
|
||||
|
||||
describe('hasPendingApproval', () => {
|
||||
it('should return true when approval is pending', async () => {
|
||||
service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false when no approval is pending', () => {
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false after approval is resolved', async () => {
|
||||
service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
await service.resolveApproval('feature-1', true);
|
||||
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(false);
|
||||
});
|
||||
|
||||
it('should return false after approval is cancelled', async () => {
|
||||
const promise = service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
service.cancelApproval('feature-1');
|
||||
|
||||
// Consume the rejection
|
||||
await promise.catch(() => {});
|
||||
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getTimeoutMs (via waitForApproval behavior)', () => {
|
||||
it('should return configured value from project settings', async () => {
|
||||
vi.mocked(mockSettingsService!.getProjectSettings).mockResolvedValue({
|
||||
planApprovalTimeoutMs: 5 * 60 * 1000, // 5 minutes
|
||||
} as never);
|
||||
|
||||
const approvalPromise = service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
|
||||
// Should not timeout at 4 minutes
|
||||
await vi.advanceTimersByTimeAsync(4 * 60 * 1000);
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(true);
|
||||
|
||||
// Should timeout at 5 minutes
|
||||
await vi.advanceTimersByTimeAsync(1 * 60 * 1000);
|
||||
await expect(approvalPromise).rejects.toThrow('timed out after 5 minutes');
|
||||
});
|
||||
|
||||
it('should return default when settings service throws', async () => {
|
||||
vi.mocked(mockSettingsService!.getProjectSettings).mockRejectedValue(new Error('Failed'));
|
||||
|
||||
const approvalPromise = service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
|
||||
// Should use default 30 minute timeout
|
||||
await vi.advanceTimersByTimeAsync(29 * 60 * 1000);
|
||||
expect(service.hasPendingApproval('feature-1')).toBe(true);
|
||||
|
||||
await vi.advanceTimersByTimeAsync(1 * 60 * 1000);
|
||||
await expect(approvalPromise).rejects.toThrow('timed out after 30 minutes');
|
||||
});
|
||||
|
||||
it('should return default when planApprovalTimeoutMs is invalid', async () => {
|
||||
vi.mocked(mockSettingsService!.getProjectSettings).mockResolvedValue({
|
||||
planApprovalTimeoutMs: -1, // Invalid
|
||||
} as never);
|
||||
|
||||
const approvalPromise = service.waitForApproval('feature-1', '/project');
|
||||
await vi.advanceTimersByTimeAsync(0);
|
||||
|
||||
// Should use default 30 minute timeout
|
||||
await vi.advanceTimersByTimeAsync(30 * 60 * 1000);
|
||||
await expect(approvalPromise).rejects.toThrow('timed out after 30 minutes');
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -1,665 +0,0 @@
/**
 * Unit tests for RecoveryService
 *
 * Tests crash recovery and feature resumption functionality:
 * - Execution state persistence (save/load/clear)
 * - Context detection (agent-output.md exists)
 * - Feature resumption flow (pipeline vs non-pipeline)
 * - Interrupted feature detection and batch resumption
 */

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { RecoveryService, DEFAULT_EXECUTION_STATE } from '@/services/recovery-service.js';
import type { Feature } from '@automaker/types';

// Mock dependencies
vi.mock('@automaker/utils', () => ({
  createLogger: () => ({
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
    debug: vi.fn(),
  }),
  readJsonWithRecovery: vi.fn().mockResolvedValue({ data: null, wasRecovered: false }),
  logRecoveryWarning: vi.fn(),
  DEFAULT_BACKUP_COUNT: 5,
}));

vi.mock('@automaker/platform', () => ({
  getFeatureDir: (projectPath: string, featureId: string) =>
    `${projectPath}/.automaker/features/${featureId}`,
  getFeaturesDir: (projectPath: string) => `${projectPath}/.automaker/features`,
  getExecutionStatePath: (projectPath: string) => `${projectPath}/.automaker/execution-state.json`,
  ensureAutomakerDir: vi.fn().mockResolvedValue(undefined),
}));

vi.mock('@/lib/secure-fs.js', () => ({
  access: vi.fn().mockRejectedValue(new Error('ENOENT')),
  readFile: vi.fn().mockRejectedValue(new Error('ENOENT')),
  writeFile: vi.fn().mockResolvedValue(undefined),
  unlink: vi.fn().mockResolvedValue(undefined),
  readdir: vi.fn().mockResolvedValue([]),
}));

vi.mock('@/lib/settings-helpers.js', () => ({
  getPromptCustomization: vi.fn().mockResolvedValue({
    taskExecution: {
      resumeFeatureTemplate: 'Resume: {{featurePrompt}}\n\nPrevious context:\n{{previousContext}}',
    },
  }),
}));

describe('recovery-service.ts', () => {
  // Import mocked modules for access in tests
  let secureFs: typeof import('@/lib/secure-fs.js');
  let utils: typeof import('@automaker/utils');

  // Mock dependencies
  const mockEventBus = {
    emitAutoModeEvent: vi.fn(),
    on: vi.fn(),
    off: vi.fn(),
  };

  const mockConcurrencyManager = {
    getAllRunning: vi.fn().mockReturnValue([]),
    getRunningFeature: vi.fn().mockReturnValue(null),
    acquire: vi.fn().mockImplementation(({ featureId }) => ({
      featureId,
      abortController: new AbortController(),
      projectPath: '/test/project',
      isAutoMode: false,
      startTime: Date.now(),
      leaseCount: 1,
    })),
    release: vi.fn(),
    getRunningCountForWorktree: vi.fn().mockReturnValue(0),
  };

  const mockSettingsService = null;

  // Callback mocks - initialize empty, set up in beforeEach
  let mockExecuteFeature: ReturnType<typeof vi.fn>;
  let mockLoadFeature: ReturnType<typeof vi.fn>;
  let mockDetectPipelineStatus: ReturnType<typeof vi.fn>;
  let mockResumePipeline: ReturnType<typeof vi.fn>;
  let mockIsFeatureRunning: ReturnType<typeof vi.fn>;
  let mockAcquireRunningFeature: ReturnType<typeof vi.fn>;
  let mockReleaseRunningFeature: ReturnType<typeof vi.fn>;

  let service: RecoveryService;

  beforeEach(async () => {
    vi.clearAllMocks();

    // Import mocked modules
    secureFs = await import('@/lib/secure-fs.js');
    utils = await import('@automaker/utils');

    // Reset secure-fs mocks to default behavior
    vi.mocked(secureFs.access).mockRejectedValue(new Error('ENOENT'));
    vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));
    vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
    vi.mocked(secureFs.unlink).mockResolvedValue(undefined);
    vi.mocked(secureFs.readdir).mockResolvedValue([]);

    // Reset all callback mocks with default implementations
    mockExecuteFeature = vi.fn().mockResolvedValue(undefined);
    mockLoadFeature = vi.fn().mockResolvedValue(null);
    mockDetectPipelineStatus = vi.fn().mockResolvedValue({
      isPipeline: false,
      stepId: null,
      stepIndex: -1,
      totalSteps: 0,
      step: null,
      config: null,
    });
    mockResumePipeline = vi.fn().mockResolvedValue(undefined);
    mockIsFeatureRunning = vi.fn().mockReturnValue(false);
    mockAcquireRunningFeature = vi.fn().mockImplementation(({ featureId }) => ({
      featureId,
      abortController: new AbortController(),
    }));
    mockReleaseRunningFeature = vi.fn();

    service = new RecoveryService(
      mockEventBus as any,
      mockConcurrencyManager as any,
      mockSettingsService,
      mockExecuteFeature,
      mockLoadFeature,
      mockDetectPipelineStatus,
      mockResumePipeline,
      mockIsFeatureRunning,
      mockAcquireRunningFeature,
      mockReleaseRunningFeature
    );
  });

  afterEach(() => {
    vi.resetAllMocks();
  });

  describe('DEFAULT_EXECUTION_STATE', () => {
    it('has correct default values', () => {
      expect(DEFAULT_EXECUTION_STATE).toEqual({
        version: 1,
        autoLoopWasRunning: false,
        maxConcurrency: expect.any(Number),
        projectPath: '',
        branchName: null,
        runningFeatureIds: [],
        savedAt: '',
      });
    });
  });

  describe('saveExecutionStateForProject', () => {
    it('writes correct JSON to execution state path', async () => {
      mockConcurrencyManager.getAllRunning.mockReturnValue([
        { featureId: 'feature-1', projectPath: '/test/project' },
        { featureId: 'feature-2', projectPath: '/test/project' },
        { featureId: 'feature-3', projectPath: '/other/project' },
      ]);

      await service.saveExecutionStateForProject('/test/project', 'feature-branch', 3);

      expect(secureFs.writeFile).toHaveBeenCalledWith(
        '/test/project/.automaker/execution-state.json',
        expect.any(String),
        'utf-8'
      );

      const writtenContent = JSON.parse(vi.mocked(secureFs.writeFile).mock.calls[0][1] as string);
      expect(writtenContent).toMatchObject({
        version: 1,
        autoLoopWasRunning: true,
        maxConcurrency: 3,
        projectPath: '/test/project',
        branchName: 'feature-branch',
        runningFeatureIds: ['feature-1', 'feature-2'],
      });
      expect(writtenContent.savedAt).toBeDefined();
    });

    it('filters running features by project path', async () => {
      mockConcurrencyManager.getAllRunning.mockReturnValue([
        { featureId: 'feature-1', projectPath: '/project-a' },
        { featureId: 'feature-2', projectPath: '/project-b' },
      ]);

      await service.saveExecutionStateForProject('/project-a', null, 2);

      const writtenContent = JSON.parse(vi.mocked(secureFs.writeFile).mock.calls[0][1] as string);
      expect(writtenContent.runningFeatureIds).toEqual(['feature-1']);
    });

    it('handles null branch name for main worktree', async () => {
      mockConcurrencyManager.getAllRunning.mockReturnValue([]);
      await service.saveExecutionStateForProject('/test/project', null, 1);

      const writtenContent = JSON.parse(vi.mocked(secureFs.writeFile).mock.calls[0][1] as string);
      expect(writtenContent.branchName).toBeNull();
    });
  });

  describe('saveExecutionState (legacy)', () => {
    it('saves execution state with legacy format', async () => {
      mockConcurrencyManager.getAllRunning.mockReturnValue([
        { featureId: 'feature-1', projectPath: '/test' },
      ]);

      await service.saveExecutionState('/test/project', true, 5);

      expect(secureFs.writeFile).toHaveBeenCalled();
      const writtenContent = JSON.parse(vi.mocked(secureFs.writeFile).mock.calls[0][1] as string);
      expect(writtenContent).toMatchObject({
        autoLoopWasRunning: true,
        maxConcurrency: 5,
        branchName: null, // Legacy uses main worktree
      });
    });
  });

  describe('loadExecutionState', () => {
    it('parses JSON correctly when file exists', async () => {
      const mockState = {
        version: 1,
        autoLoopWasRunning: true,
        maxConcurrency: 4,
        projectPath: '/test/project',
        branchName: 'dev',
        runningFeatureIds: ['f1', 'f2'],
        savedAt: '2026-01-27T12:00:00Z',
      };
      vi.mocked(secureFs.readFile).mockResolvedValueOnce(JSON.stringify(mockState));

      const result = await service.loadExecutionState('/test/project');

      expect(result).toEqual(mockState);
    });

    it('returns default state on ENOENT error', async () => {
      const error = new Error('File not found') as NodeJS.ErrnoException;
      error.code = 'ENOENT';
      vi.mocked(secureFs.readFile).mockRejectedValueOnce(error);

      const result = await service.loadExecutionState('/test/project');

      expect(result).toEqual(DEFAULT_EXECUTION_STATE);
    });

    it('returns default state on other errors and logs', async () => {
      vi.mocked(secureFs.readFile).mockRejectedValueOnce(new Error('Permission denied'));

      const result = await service.loadExecutionState('/test/project');

      expect(result).toEqual(DEFAULT_EXECUTION_STATE);
    });
  });

  describe('clearExecutionState', () => {
    it('removes execution state file', async () => {
      await service.clearExecutionState('/test/project');

      expect(secureFs.unlink).toHaveBeenCalledWith('/test/project/.automaker/execution-state.json');
    });

    it('does not throw on ENOENT error', async () => {
      const error = new Error('File not found') as NodeJS.ErrnoException;
      error.code = 'ENOENT';
      vi.mocked(secureFs.unlink).mockRejectedValueOnce(error);

      await expect(service.clearExecutionState('/test/project')).resolves.not.toThrow();
    });

    it('logs error on other failures', async () => {
      vi.mocked(secureFs.unlink).mockRejectedValueOnce(new Error('Permission denied'));

      await expect(service.clearExecutionState('/test/project')).resolves.not.toThrow();
    });
  });

  describe('contextExists', () => {
    it('returns true when agent-output.md exists', async () => {
      vi.mocked(secureFs.access).mockResolvedValueOnce(undefined);

      const result = await service.contextExists('/test/project', 'feature-1');

      expect(result).toBe(true);
      expect(secureFs.access).toHaveBeenCalledWith(
        '/test/project/.automaker/features/feature-1/agent-output.md'
      );
    });

    it('returns false when agent-output.md is missing', async () => {
      vi.mocked(secureFs.access).mockRejectedValueOnce(new Error('ENOENT'));

      const result = await service.contextExists('/test/project', 'feature-1');

      expect(result).toBe(false);
    });
  });

  describe('resumeFeature', () => {
    const mockFeature: Feature = {
      id: 'feature-1',
      title: 'Test Feature',
      description: 'A test feature',
      status: 'in_progress',
    };

    beforeEach(() => {
      mockLoadFeature.mockResolvedValue(mockFeature);
    });

    it('skips if feature already running (idempotent)', async () => {
      mockIsFeatureRunning.mockReturnValueOnce(true);

      await service.resumeFeature('/test/project', 'feature-1');

      expect(mockLoadFeature).not.toHaveBeenCalled();
      expect(mockExecuteFeature).not.toHaveBeenCalled();
    });

    it('detects pipeline status for feature', async () => {
      vi.mocked(secureFs.access).mockRejectedValue(new Error('ENOENT'));
      await service.resumeFeature('/test/project', 'feature-1');

      expect(mockDetectPipelineStatus).toHaveBeenCalledWith(
        '/test/project',
        'feature-1',
        'in_progress'
      );
    });

    it('delegates to resumePipeline for pipeline features', async () => {
      const pipelineInfo = {
        isPipeline: true,
        stepId: 'test',
        stepIndex: 1,
        totalSteps: 3,
        step: {
          id: 'test',
          name: 'Test Step',
          command: 'npm test',
          type: 'test' as const,
          order: 1,
        },
        config: null,
      };
      mockDetectPipelineStatus.mockResolvedValueOnce(pipelineInfo);

      await service.resumeFeature('/test/project', 'feature-1');

      expect(mockResumePipeline).toHaveBeenCalledWith(
        '/test/project',
        mockFeature,
        false,
        pipelineInfo
      );
      expect(mockExecuteFeature).not.toHaveBeenCalled();
    });

    it('calls executeFeature with continuation prompt when context exists', async () => {
      // Reset settings-helpers mock before this test
      const settingsHelpers = await import('@/lib/settings-helpers.js');
      vi.mocked(settingsHelpers.getPromptCustomization).mockResolvedValue({
        taskExecution: {
          resumeFeatureTemplate:
            'Resume: {{featurePrompt}}\n\nPrevious context:\n{{previousContext}}',
          implementationInstructions: '',
          playwrightVerificationInstructions: '',
        },
      } as any);

      vi.mocked(secureFs.access).mockResolvedValueOnce(undefined);
      vi.mocked(secureFs.readFile).mockResolvedValueOnce('Previous agent output content');

      await service.resumeFeature('/test/project', 'feature-1');

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_feature_resuming',
        expect.objectContaining({
          featureId: 'feature-1',
          hasContext: true,
        })
      );
      expect(mockExecuteFeature).toHaveBeenCalledWith(
        '/test/project',
        'feature-1',
        false,
        false,
        undefined,
        expect.objectContaining({
          continuationPrompt: expect.stringContaining('Previous agent output content'),
          _calledInternally: true,
        })
      );
    });

    it('calls executeFeature fresh when no context', async () => {
      vi.mocked(secureFs.access).mockRejectedValueOnce(new Error('ENOENT'));

      await service.resumeFeature('/test/project', 'feature-1');

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_feature_resuming',
        expect.objectContaining({
          featureId: 'feature-1',
          hasContext: false,
        })
      );
      expect(mockExecuteFeature).toHaveBeenCalledWith(
        '/test/project',
        'feature-1',
        false,
        false,
        undefined,
        expect.objectContaining({
          _calledInternally: true,
        })
      );
    });

    it('releases running feature in finally block', async () => {
      mockLoadFeature.mockRejectedValueOnce(new Error('Feature not found'));

      await expect(service.resumeFeature('/test/project', 'feature-1')).rejects.toThrow();

      expect(mockReleaseRunningFeature).toHaveBeenCalledWith('feature-1');
    });

    it('throws error if feature not found', async () => {
      mockLoadFeature.mockResolvedValueOnce(null);

      await expect(service.resumeFeature('/test/project', 'feature-1')).rejects.toThrow(
        'Feature feature-1 not found'
      );
    });

    it('acquires running feature with allowReuse when calledInternally', async () => {
      vi.mocked(secureFs.access).mockRejectedValue(new Error('ENOENT'));
      await service.resumeFeature('/test/project', 'feature-1', false, true);

      expect(mockAcquireRunningFeature).toHaveBeenCalledWith({
        featureId: 'feature-1',
        projectPath: '/test/project',
        isAutoMode: false,
        allowReuse: true,
      });
    });
  });

  describe('resumeInterruptedFeatures', () => {
    it('finds features with in_progress status', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-1', isDirectory: () => true } as any,
        { name: 'feature-2', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery)
        .mockResolvedValueOnce({
          data: { id: 'feature-1', title: 'Feature 1', status: 'in_progress' },
          wasRecovered: false,
        })
        .mockResolvedValueOnce({
          data: { id: 'feature-2', title: 'Feature 2', status: 'backlog' },
          wasRecovered: false,
        });

      mockLoadFeature.mockResolvedValue({
        id: 'feature-1',
        title: 'Feature 1',
        status: 'in_progress',
        description: 'Test',
      });

      await service.resumeInterruptedFeatures('/test/project');

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_resuming_features',
        expect.objectContaining({
          featureIds: ['feature-1'],
        })
      );
    });

    it('finds features with pipeline_* status', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-1', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery).mockResolvedValueOnce({
        data: { id: 'feature-1', title: 'Feature 1', status: 'pipeline_test' },
        wasRecovered: false,
      });

      mockLoadFeature.mockResolvedValue({
        id: 'feature-1',
        title: 'Feature 1',
        status: 'pipeline_test',
        description: 'Test',
      });

      await service.resumeInterruptedFeatures('/test/project');

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_resuming_features',
        expect.objectContaining({
          features: expect.arrayContaining([
            expect.objectContaining({ id: 'feature-1', status: 'pipeline_test' }),
          ]),
        })
      );
    });

    it('distinguishes features with/without context', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-with', isDirectory: () => true } as any,
        { name: 'feature-without', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery)
        .mockResolvedValueOnce({
          data: { id: 'feature-with', title: 'With Context', status: 'in_progress' },
          wasRecovered: false,
        })
        .mockResolvedValueOnce({
          data: { id: 'feature-without', title: 'Without Context', status: 'in_progress' },
          wasRecovered: false,
        });

      // First feature has context, second doesn't
      vi.mocked(secureFs.access)
        .mockResolvedValueOnce(undefined) // feature-with has context
        .mockRejectedValueOnce(new Error('ENOENT')); // feature-without doesn't

      mockLoadFeature
        .mockResolvedValueOnce({
          id: 'feature-with',
          title: 'With Context',
          status: 'in_progress',
          description: 'Test',
        })
        .mockResolvedValueOnce({
          id: 'feature-without',
          title: 'Without Context',
          status: 'in_progress',
          description: 'Test',
        });

      await service.resumeInterruptedFeatures('/test/project');

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_resuming_features',
        expect.objectContaining({
          features: expect.arrayContaining([
            expect.objectContaining({ id: 'feature-with', hasContext: true }),
            expect.objectContaining({ id: 'feature-without', hasContext: false }),
          ]),
        })
      );
    });

    it('emits auto_mode_resuming_features event', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-1', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery).mockResolvedValueOnce({
        data: { id: 'feature-1', title: 'Feature 1', status: 'in_progress' },
        wasRecovered: false,
      });

      mockLoadFeature.mockResolvedValue({
        id: 'feature-1',
        title: 'Feature 1',
        status: 'in_progress',
        description: 'Test',
      });

      await service.resumeInterruptedFeatures('/test/project');

      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_resuming_features',
        expect.objectContaining({
          message: expect.stringContaining('interrupted feature'),
          projectPath: '/test/project',
        })
      );
    });

    it('skips features already running (idempotent)', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-1', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery).mockResolvedValueOnce({
        data: { id: 'feature-1', title: 'Feature 1', status: 'in_progress' },
        wasRecovered: false,
      });

      mockIsFeatureRunning.mockReturnValue(true);

      await service.resumeInterruptedFeatures('/test/project');

      // Should emit event but not actually resume
      expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
        'auto_mode_resuming_features',
        expect.anything()
      );
      // But resumeFeature should exit early due to isFeatureRunning check
      expect(mockLoadFeature).not.toHaveBeenCalled();
    });

    it('handles ENOENT for features directory gracefully', async () => {
      const error = new Error('Directory not found') as NodeJS.ErrnoException;
      error.code = 'ENOENT';
      vi.mocked(secureFs.readdir).mockRejectedValueOnce(error);

      await expect(service.resumeInterruptedFeatures('/test/project')).resolves.not.toThrow();
    });

    it('continues with other features when one fails', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-fail', isDirectory: () => true } as any,
        { name: 'feature-success', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery)
        .mockResolvedValueOnce({
          data: { id: 'feature-fail', title: 'Fail', status: 'in_progress' },
          wasRecovered: false,
        })
        .mockResolvedValueOnce({
          data: { id: 'feature-success', title: 'Success', status: 'in_progress' },
          wasRecovered: false,
        });

      // First feature throws during resume, second succeeds
      mockLoadFeature.mockRejectedValueOnce(new Error('Resume failed')).mockResolvedValueOnce({
        id: 'feature-success',
        title: 'Success',
        status: 'in_progress',
        description: 'Test',
      });

      await service.resumeInterruptedFeatures('/test/project');

      // Should still attempt to resume the second feature
      expect(mockLoadFeature).toHaveBeenCalledTimes(2);
    });

    it('logs info when no interrupted features found', async () => {
      vi.mocked(secureFs.readdir).mockResolvedValueOnce([
        { name: 'feature-1', isDirectory: () => true } as any,
      ]);
      vi.mocked(utils.readJsonWithRecovery).mockResolvedValueOnce({
        data: { id: 'feature-1', title: 'Feature 1', status: 'completed' },
        wasRecovered: false,
      });

      await service.resumeInterruptedFeatures('/test/project');

      expect(mockEventBus.emitAutoModeEvent).not.toHaveBeenCalledWith(
        'auto_mode_resuming_features',
        expect.anything()
      );
    });
  });
});
@@ -1,641 +0,0 @@
import { describe, it, expect } from 'vitest';
import {
  parseTasksFromSpec,
  detectTaskStartMarker,
  detectTaskCompleteMarker,
  detectPhaseCompleteMarker,
  detectSpecFallback,
  extractSummary,
} from '../../../src/services/spec-parser.js';

describe('SpecParser', () => {
  describe('parseTasksFromSpec', () => {
    it('should parse tasks from a tasks code block', () => {
      const specContent = `
## Specification

Some description here.

\`\`\`tasks
- [ ] T001: Create user model | File: src/models/user.ts
- [ ] T002: Add API endpoint | File: src/routes/users.ts
- [ ] T003: Write unit tests | File: tests/user.test.ts
\`\`\`

## Notes
Some notes here.
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toHaveLength(3);
      expect(tasks[0]).toEqual({
        id: 'T001',
        description: 'Create user model',
        filePath: 'src/models/user.ts',
        phase: undefined,
        status: 'pending',
      });
      expect(tasks[1].id).toBe('T002');
      expect(tasks[2].id).toBe('T003');
    });

    it('should parse tasks with phases', () => {
      const specContent = `
\`\`\`tasks
## Phase 1: Foundation
- [ ] T001: Initialize project | File: package.json
- [ ] T002: Configure TypeScript | File: tsconfig.json

## Phase 2: Implementation
- [ ] T003: Create main module | File: src/index.ts
- [ ] T004: Add utility functions | File: src/utils.ts

## Phase 3: Testing
- [ ] T005: Write tests | File: tests/index.test.ts
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toHaveLength(5);
      expect(tasks[0].phase).toBe('Phase 1: Foundation');
      expect(tasks[1].phase).toBe('Phase 1: Foundation');
      expect(tasks[2].phase).toBe('Phase 2: Implementation');
      expect(tasks[3].phase).toBe('Phase 2: Implementation');
      expect(tasks[4].phase).toBe('Phase 3: Testing');
    });

    it('should return empty array for content without tasks', () => {
      const specContent = 'Just some text without any tasks';
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toEqual([]);
    });

    it('should fallback to finding task lines outside code block', () => {
      const specContent = `
## Implementation Plan

- [ ] T001: First task | File: src/first.ts
- [ ] T002: Second task | File: src/second.ts
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toHaveLength(2);
      expect(tasks[0].id).toBe('T001');
      expect(tasks[1].id).toBe('T002');
    });

    it('should handle empty tasks block', () => {
      const specContent = `
\`\`\`tasks
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toEqual([]);
    });

    it('should handle empty string input', () => {
      const tasks = parseTasksFromSpec('');
      expect(tasks).toEqual([]);
    });

    it('should handle task without file path', () => {
      const specContent = `
\`\`\`tasks
- [ ] T001: Task without file
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toHaveLength(1);
      expect(tasks[0]).toEqual({
        id: 'T001',
        description: 'Task without file',
        phase: undefined,
        status: 'pending',
      });
    });

    it('should handle mixed valid and invalid lines', () => {
      const specContent = `
\`\`\`tasks
- [ ] T001: Valid task | File: src/valid.ts
- Invalid line
Some other text
- [ ] T002: Another valid task
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toHaveLength(2);
    });

    it('should preserve task order', () => {
      const specContent = `
\`\`\`tasks
- [ ] T003: Third
- [ ] T001: First
- [ ] T002: Second
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks[0].id).toBe('T003');
      expect(tasks[1].id).toBe('T001');
      expect(tasks[2].id).toBe('T002');
    });

    it('should handle task IDs with different numbers', () => {
      const specContent = `
\`\`\`tasks
- [ ] T001: First
- [ ] T010: Tenth
- [ ] T100: Hundredth
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks).toHaveLength(3);
      expect(tasks[0].id).toBe('T001');
      expect(tasks[1].id).toBe('T010');
      expect(tasks[2].id).toBe('T100');
    });

    it('should trim whitespace from description and file path', () => {
      const specContent = `
\`\`\`tasks
- [ ] T001: Create API endpoint | File: src/routes/api.ts
\`\`\`
`;
      const tasks = parseTasksFromSpec(specContent);
      expect(tasks[0].description).toBe('Create API endpoint');
      expect(tasks[0].filePath).toBe('src/routes/api.ts');
    });
  });

  describe('detectTaskStartMarker', () => {
    it('should detect task start marker and return task ID', () => {
      expect(detectTaskStartMarker('[TASK_START] T001')).toBe('T001');
      expect(detectTaskStartMarker('[TASK_START] T042')).toBe('T042');
      expect(detectTaskStartMarker('[TASK_START] T999')).toBe('T999');
    });

    it('should handle marker with description', () => {
      expect(detectTaskStartMarker('[TASK_START] T001: Creating user model')).toBe('T001');
    });

    it('should return null when no marker present', () => {
      expect(detectTaskStartMarker('No marker here')).toBeNull();
      expect(detectTaskStartMarker('')).toBeNull();
    });

    it('should find marker in accumulated text', () => {
      const accumulated = `
Some earlier output...

Now starting the task:
[TASK_START] T003: Setting up database

Let me begin by...
`;
      expect(detectTaskStartMarker(accumulated)).toBe('T003');
    });

    it('should handle whitespace variations', () => {
      expect(detectTaskStartMarker('[TASK_START] T001')).toBe('T001');
      expect(detectTaskStartMarker('[TASK_START]\tT001')).toBe('T001');
    });

    it('should not match invalid task IDs', () => {
      expect(detectTaskStartMarker('[TASK_START] TASK1')).toBeNull();
      expect(detectTaskStartMarker('[TASK_START] T1')).toBeNull();
      expect(detectTaskStartMarker('[TASK_START] T12')).toBeNull();
    });
  });

  describe('detectTaskCompleteMarker', () => {
    it('should detect task complete marker and return task ID', () => {
      expect(detectTaskCompleteMarker('[TASK_COMPLETE] T001')).toBe('T001');
      expect(detectTaskCompleteMarker('[TASK_COMPLETE] T042')).toBe('T042');
    });

    it('should handle marker with summary', () => {
      expect(detectTaskCompleteMarker('[TASK_COMPLETE] T001: User model created')).toBe('T001');
    });

    it('should return null when no marker present', () => {
      expect(detectTaskCompleteMarker('No marker here')).toBeNull();
      expect(detectTaskCompleteMarker('')).toBeNull();
    });

    it('should find marker in accumulated text', () => {
      const accumulated = `
Working on the task...

Done with the implementation:
[TASK_COMPLETE] T003: Database setup complete

Moving on to...
`;
      expect(detectTaskCompleteMarker(accumulated)).toBe('T003');
    });

    it('should not confuse with TASK_START marker', () => {
      expect(detectTaskCompleteMarker('[TASK_START] T001')).toBeNull();
    });

    it('should not match invalid task IDs', () => {
      expect(detectTaskCompleteMarker('[TASK_COMPLETE] TASK1')).toBeNull();
      expect(detectTaskCompleteMarker('[TASK_COMPLETE] T1')).toBeNull();
    });
  });

  describe('detectPhaseCompleteMarker', () => {
    it('should detect phase complete marker and return phase number', () => {
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] Phase 1')).toBe(1);
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] Phase 2')).toBe(2);
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] Phase 10')).toBe(10);
    });

    it('should handle marker with description', () => {
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] Phase 1 complete')).toBe(1);
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] Phase 2: Foundation done')).toBe(2);
    });

    it('should return null when no marker present', () => {
      expect(detectPhaseCompleteMarker('No marker here')).toBeNull();
      expect(detectPhaseCompleteMarker('')).toBeNull();
    });

    it('should be case-insensitive', () => {
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] phase 1')).toBe(1);
      expect(detectPhaseCompleteMarker('[PHASE_COMPLETE] PHASE 2')).toBe(2);
    });

    it('should find marker in accumulated text', () => {
      const accumulated = `
Finishing up the phase...

All tasks complete:
[PHASE_COMPLETE] Phase 2 complete

Starting Phase 3...
`;
      expect(detectPhaseCompleteMarker(accumulated)).toBe(2);
    });

    it('should not confuse with task markers', () => {
      expect(detectPhaseCompleteMarker('[TASK_COMPLETE] T001')).toBeNull();
    });
  });

  describe('detectSpecFallback', () => {
    it('should detect spec with tasks block and acceptance criteria', () => {
      const content = `
## Acceptance Criteria
- GIVEN a user, WHEN they login, THEN they see the dashboard

\`\`\`tasks
- [ ] T001: Create login form | File: src/Login.tsx
\`\`\`
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with task lines and problem statement', () => {
      const content = `
## Problem Statement
Users cannot currently log in to the application.

## Implementation Plan
- [ ] T001: Add authentication endpoint
- [ ] T002: Create login UI
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with Goal section (lite planning mode)', () => {
      const content = `
**Goal**: Implement user authentication

**Solution**: Use JWT tokens for session management

- [ ] T001: Setup auth middleware
- [ ] T002: Create token service
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with User Story format', () => {
      const content = `
## User Story
As a user, I want to reset my password, so that I can regain access.

## Technical Context
This will modify the auth module.

\`\`\`tasks
- [ ] T001: Add reset endpoint
\`\`\`
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with Overview section', () => {
      const content = `
## Overview
This feature adds dark mode support.

\`\`\`tasks
- [ ] T001: Add theme toggle
\`\`\`
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with Summary section', () => {
      const content = `
## Summary
Adding a new dashboard component.

- [ ] T001: Create dashboard layout
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with implementation plan', () => {
      const content = `
## Implementation Plan
We will add the feature in two phases.

- [ ] T001: Phase 1 setup
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with implementation steps', () => {
      const content = `
## Implementation Steps
Follow these steps:

- [ ] T001: Step one
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should detect spec with implementation approach', () => {
      const content = `
## Implementation Approach
We will use a modular approach.

- [ ] T001: Create modules
`;
      expect(detectSpecFallback(content)).toBe(true);
    });

    it('should NOT detect spec without task structure', () => {
      const content = `
## Problem Statement
Users cannot log in.

## Acceptance Criteria
- GIVEN a user, WHEN they try to login, THEN it works
`;
      expect(detectSpecFallback(content)).toBe(false);
    });

    it('should NOT detect spec without spec content sections', () => {
      const content = `
Here are some tasks:

- [ ] T001: Do something
- [ ] T002: Do another thing
`;
      expect(detectSpecFallback(content)).toBe(false);
    });

    it('should NOT detect random text as spec', () => {
      expect(detectSpecFallback('Just some random text')).toBe(false);
      expect(detectSpecFallback('')).toBe(false);
    });

    it('should handle case-insensitive matching for spec sections', () => {
      const content = `
## ACCEPTANCE CRITERIA
All caps section header

- [ ] T001: Task
`;
      expect(detectSpecFallback(content)).toBe(true);
    });
  });

  describe('extractSummary', () => {
    describe('explicit <summary> tags', () => {
      it('should extract content from summary tags', () => {
        const text = 'Some preamble <summary>This is the summary content</summary> more text';
        expect(extractSummary(text)).toBe('This is the summary content');
      });

      it('should use last match to avoid stale summaries', () => {
        const text = `
<summary>Old stale summary</summary>

More agent output...

<summary>Fresh new summary</summary>
`;
        expect(extractSummary(text)).toBe('Fresh new summary');
      });

      it('should handle multiline summary content', () => {
        const text = `<summary>First line
Second line
Third line</summary>`;
        expect(extractSummary(text)).toBe('First line\nSecond line\nThird line');
      });

      it('should trim whitespace from summary', () => {
        const text = '<summary> trimmed content </summary>';
        expect(extractSummary(text)).toBe('trimmed content');
      });
    });

    describe('## Summary section (markdown)', () => {
      it('should extract from ## Summary section', () => {
        const text = `
## Summary

This is a summary paragraph.

## Other Section
More content.
`;
        expect(extractSummary(text)).toBe('This is a summary paragraph.');
      });

      it('should truncate long summaries to 500 chars', () => {
        const longContent = 'A'.repeat(600);
        const text = `
## Summary

${longContent}

## Next Section
`;
        const result = extractSummary(text);
        expect(result).not.toBeNull();
        expect(result!.length).toBeLessThanOrEqual(503); // 500 + '...'
        expect(result!.endsWith('...')).toBe(true);
      });

      it('should use last match for ## Summary', () => {
        const text = `
## Summary

Old summary content.

## Summary

New summary content.
`;
        expect(extractSummary(text)).toBe('New summary content.');
      });

      it('should stop at next markdown header', () => {
        const text = `
## Summary

Summary content here.

## Implementation
Implementation details.
`;
        expect(extractSummary(text)).toBe('Summary content here.');
      });
    });

    describe('**Goal**: section (lite planning mode)', () => {
      it('should extract from **Goal**: section', () => {
        const text = '**Goal**: Implement user authentication\n**Approach**: Use JWT';
        expect(extractSummary(text)).toBe('Implement user authentication');
      });

      it('should use last match for **Goal**:', () => {
        const text = `
**Goal**: Old goal

More output...

**Goal**: New goal
`;
        expect(extractSummary(text)).toBe('New goal');
      });

      it('should handle inline goal', () => {
        const text = '1. **Goal**: Add login functionality';
        expect(extractSummary(text)).toBe('Add login functionality');
      });
    });

    describe('**Problem**: section (spec/full modes)', () => {
      it('should extract from **Problem**: section', () => {
        const text = `
**Problem**: Users cannot log in to the application

**Solution**: Add authentication
`;
        expect(extractSummary(text)).toBe('Users cannot log in to the application');
      });

      it('should extract from **Problem Statement**: section', () => {
        const text = `
**Problem Statement**: Users need password reset functionality

1. Create reset endpoint
`;
        expect(extractSummary(text)).toBe('Users need password reset functionality');
      });

      it('should truncate long problem descriptions', () => {
        const longProblem = 'X'.repeat(600);
        const text = `**Problem**: ${longProblem}`;
        const result = extractSummary(text);
        expect(result).not.toBeNull();
        expect(result!.length).toBeLessThanOrEqual(503);
      });
    });

    describe('**Solution**: section (fallback)', () => {
      it('should extract from **Solution**: section as fallback', () => {
        const text = '**Solution**: Use JWT for authentication\n1. Install package';
        expect(extractSummary(text)).toBe('Use JWT for authentication');
      });

      it('should truncate solution to 300 chars', () => {
        const longSolution = 'Y'.repeat(400);
        const text = `**Solution**: ${longSolution}`;
        const result = extractSummary(text);
        expect(result).not.toBeNull();
        expect(result!.length).toBeLessThanOrEqual(303);
      });
    });

    describe('priority order', () => {
      it('should prefer <summary> over ## Summary', () => {
        const text = `
## Summary

Markdown summary

<summary>Tagged summary</summary>
`;
        expect(extractSummary(text)).toBe('Tagged summary');
      });

      it('should prefer ## Summary over **Goal**:', () => {
        const text = `
**Goal**: Goal content

## Summary

Summary section content.
`;
        expect(extractSummary(text)).toBe('Summary section content.');
      });

      it('should prefer **Goal**: over **Problem**:', () => {
        const text = `
**Problem**: Problem description

**Goal**: Goal description
`;
        expect(extractSummary(text)).toBe('Goal description');
      });

      it('should prefer **Problem**: over **Solution**:', () => {
        const text = `
**Solution**: Solution description

**Problem**: Problem description
`;
        expect(extractSummary(text)).toBe('Problem description');
      });
    });

    describe('edge cases', () => {
      it('should return null for empty string', () => {
        expect(extractSummary('')).toBeNull();
      });

      it('should return null when no summary pattern found', () => {
        expect(extractSummary('Random text without any summary patterns')).toBeNull();
      });

      it('should handle multiple paragraph summaries (return first paragraph)', () => {
        const text = `
## Summary

First paragraph of summary.

Second paragraph of summary.

## Other
`;
        expect(extractSummary(text)).toBe('First paragraph of summary.');
      });
    });
  });
});
@@ -1,299 +0,0 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { TypedEventBus } from '../../../src/services/typed-event-bus.js';
import type { EventEmitter, EventCallback, EventType } from '../../../src/lib/events.js';

/**
 * Create a mock EventEmitter for testing
 */
function createMockEventEmitter(): EventEmitter & {
  emitCalls: Array<{ type: EventType; payload: unknown }>;
  subscribers: Set<EventCallback>;
} {
  const subscribers = new Set<EventCallback>();
  const emitCalls: Array<{ type: EventType; payload: unknown }> = [];

  return {
    emitCalls,
    subscribers,
    emit(type: EventType, payload: unknown) {
      emitCalls.push({ type, payload });
      // Also call subscribers to simulate real behavior
      for (const callback of subscribers) {
        callback(type, payload);
      }
    },
    subscribe(callback: EventCallback) {
      subscribers.add(callback);
      return () => {
        subscribers.delete(callback);
      };
    },
  };
}

describe('TypedEventBus', () => {
  let mockEmitter: ReturnType<typeof createMockEventEmitter>;
  let eventBus: TypedEventBus;

  beforeEach(() => {
    mockEmitter = createMockEventEmitter();
    eventBus = new TypedEventBus(mockEmitter);
  });

  describe('constructor', () => {
    it('should wrap an EventEmitter', () => {
      expect(eventBus).toBeInstanceOf(TypedEventBus);
    });

    it('should store the underlying emitter', () => {
      expect(eventBus.getUnderlyingEmitter()).toBe(mockEmitter);
    });
  });

  describe('emit', () => {
    it('should pass events directly to the underlying emitter', () => {
      const payload = { test: 'data' };
      eventBus.emit('feature:created', payload);

      expect(mockEmitter.emitCalls).toHaveLength(1);
      expect(mockEmitter.emitCalls[0]).toEqual({
        type: 'feature:created',
        payload: { test: 'data' },
      });
    });

    it('should handle various event types', () => {
      eventBus.emit('feature:updated', { id: '1' });
      eventBus.emit('agent:streaming', { chunk: 'data' });
      eventBus.emit('error', { message: 'error' });

      expect(mockEmitter.emitCalls).toHaveLength(3);
      expect(mockEmitter.emitCalls[0].type).toBe('feature:updated');
      expect(mockEmitter.emitCalls[1].type).toBe('agent:streaming');
      expect(mockEmitter.emitCalls[2].type).toBe('error');
    });
  });

  describe('emitAutoModeEvent', () => {
    it('should wrap events in auto-mode:event format', () => {
      eventBus.emitAutoModeEvent('auto_mode_started', { projectPath: '/test' });

      expect(mockEmitter.emitCalls).toHaveLength(1);
      expect(mockEmitter.emitCalls[0].type).toBe('auto-mode:event');
    });

    it('should include event type in payload', () => {
      eventBus.emitAutoModeEvent('auto_mode_started', { projectPath: '/test' });

      const payload = mockEmitter.emitCalls[0].payload as Record<string, unknown>;
      expect(payload.type).toBe('auto_mode_started');
    });

    it('should spread additional data into payload', () => {
      eventBus.emitAutoModeEvent('auto_mode_feature_start', {
        featureId: 'feat-1',
        featureName: 'Test Feature',
        projectPath: '/project',
      });

      const payload = mockEmitter.emitCalls[0].payload as Record<string, unknown>;
      expect(payload).toEqual({
        type: 'auto_mode_feature_start',
        featureId: 'feat-1',
        featureName: 'Test Feature',
        projectPath: '/project',
      });
    });

    it('should handle empty data object', () => {
      eventBus.emitAutoModeEvent('auto_mode_idle', {});

      const payload = mockEmitter.emitCalls[0].payload as Record<string, unknown>;
      expect(payload).toEqual({ type: 'auto_mode_idle' });
    });

    it('should preserve exact event format for frontend compatibility', () => {
      // This test verifies the exact format that the frontend expects
      eventBus.emitAutoModeEvent('auto_mode_progress', {
        featureId: 'feat-123',
        progress: 50,
        message: 'Processing...',
|
||||
|
||||
expect(mockEmitter.emitCalls[0]).toEqual({
|
||||
type: 'auto-mode:event',
|
||||
payload: {
|
||||
type: 'auto_mode_progress',
|
||||
featureId: 'feat-123',
|
||||
progress: 50,
|
||||
message: 'Processing...',
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle all standard auto-mode event types', () => {
|
||||
const eventTypes = [
|
||||
'auto_mode_started',
|
||||
'auto_mode_stopped',
|
||||
'auto_mode_idle',
|
||||
'auto_mode_error',
|
||||
'auto_mode_paused_failures',
|
||||
'auto_mode_feature_start',
|
||||
'auto_mode_feature_complete',
|
||||
'auto_mode_feature_resuming',
|
||||
'auto_mode_progress',
|
||||
'auto_mode_tool',
|
||||
'auto_mode_task_started',
|
||||
'auto_mode_task_complete',
|
||||
'planning_started',
|
||||
'plan_approval_required',
|
||||
'plan_approved',
|
||||
'plan_rejected',
|
||||
] as const;
|
||||
|
||||
for (const eventType of eventTypes) {
|
||||
eventBus.emitAutoModeEvent(eventType, { test: true });
|
||||
}
|
||||
|
||||
expect(mockEmitter.emitCalls).toHaveLength(eventTypes.length);
|
||||
mockEmitter.emitCalls.forEach((call, index) => {
|
||||
expect(call.type).toBe('auto-mode:event');
|
||||
const payload = call.payload as Record<string, unknown>;
|
||||
expect(payload.type).toBe(eventTypes[index]);
|
||||
});
|
||||
});
|
||||
|
||||
it('should allow custom event types (string extensibility)', () => {
|
||||
eventBus.emitAutoModeEvent('custom_event_type', { custom: 'data' });
|
||||
|
||||
const payload = mockEmitter.emitCalls[0].payload as Record<string, unknown>;
|
||||
expect(payload.type).toBe('custom_event_type');
|
||||
});
|
||||
});
|
||||
|
||||
describe('subscribe', () => {
|
||||
it('should pass subscriptions to the underlying emitter', () => {
|
||||
const callback = vi.fn();
|
||||
eventBus.subscribe(callback);
|
||||
|
||||
expect(mockEmitter.subscribers.has(callback)).toBe(true);
|
||||
});
|
||||
|
||||
it('should return an unsubscribe function', () => {
|
||||
const callback = vi.fn();
|
||||
const unsubscribe = eventBus.subscribe(callback);
|
||||
|
||||
expect(mockEmitter.subscribers.has(callback)).toBe(true);
|
||||
|
||||
unsubscribe();
|
||||
|
||||
expect(mockEmitter.subscribers.has(callback)).toBe(false);
|
||||
});
|
||||
|
||||
it('should receive events when subscribed', () => {
|
||||
const callback = vi.fn();
|
||||
eventBus.subscribe(callback);
|
||||
|
||||
eventBus.emit('feature:created', { id: '1' });
|
||||
|
||||
expect(callback).toHaveBeenCalledWith('feature:created', { id: '1' });
|
||||
});
|
||||
|
||||
it('should receive auto-mode events when subscribed', () => {
|
||||
const callback = vi.fn();
|
||||
eventBus.subscribe(callback);
|
||||
|
||||
eventBus.emitAutoModeEvent('auto_mode_started', { projectPath: '/test' });
|
||||
|
||||
expect(callback).toHaveBeenCalledWith('auto-mode:event', {
|
||||
type: 'auto_mode_started',
|
||||
projectPath: '/test',
|
||||
});
|
||||
});
|
||||
|
||||
it('should not receive events after unsubscribe', () => {
|
||||
const callback = vi.fn();
|
||||
const unsubscribe = eventBus.subscribe(callback);
|
||||
|
||||
eventBus.emit('event1', {});
|
||||
expect(callback).toHaveBeenCalledTimes(1);
|
||||
|
||||
unsubscribe();
|
||||
|
||||
eventBus.emit('event2', {});
|
||||
expect(callback).toHaveBeenCalledTimes(1); // Still 1, not called again
|
||||
});
|
||||
});
|
||||
|
||||
describe('getUnderlyingEmitter', () => {
|
||||
it('should return the wrapped EventEmitter', () => {
|
||||
const emitter = eventBus.getUnderlyingEmitter();
|
||||
expect(emitter).toBe(mockEmitter);
|
||||
});
|
||||
|
||||
it('should allow direct access for special cases', () => {
|
||||
const emitter = eventBus.getUnderlyingEmitter();
|
||||
|
||||
// Verify we can use it directly
|
||||
emitter.emit('direct:event', { direct: true });
|
||||
|
||||
expect(mockEmitter.emitCalls).toHaveLength(1);
|
||||
expect(mockEmitter.emitCalls[0].type).toBe('direct:event');
|
||||
});
|
||||
});
|
||||
|
||||
describe('integration with real EventEmitter pattern', () => {
|
||||
it('should produce the exact payload format used by AutoModeService', () => {
|
||||
// This test documents the exact format that was in AutoModeService.emitAutoModeEvent
|
||||
// before extraction, ensuring backward compatibility
|
||||
|
||||
const receivedEvents: Array<{ type: EventType; payload: unknown }> = [];
|
||||
|
||||
eventBus.subscribe((type, payload) => {
|
||||
receivedEvents.push({ type, payload });
|
||||
});
|
||||
|
||||
// Simulate the exact call pattern from AutoModeService
|
||||
eventBus.emitAutoModeEvent('auto_mode_feature_start', {
|
||||
featureId: 'abc-123',
|
||||
featureName: 'Add user authentication',
|
||||
projectPath: '/home/user/project',
|
||||
});
|
||||
|
||||
expect(receivedEvents).toHaveLength(1);
|
||||
expect(receivedEvents[0]).toEqual({
|
||||
type: 'auto-mode:event',
|
||||
payload: {
|
||||
type: 'auto_mode_feature_start',
|
||||
featureId: 'abc-123',
|
||||
featureName: 'Add user authentication',
|
||||
projectPath: '/home/user/project',
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle complex nested data in events', () => {
|
||||
eventBus.emitAutoModeEvent('auto_mode_tool', {
|
||||
featureId: 'feat-1',
|
||||
tool: {
|
||||
name: 'write_file',
|
||||
input: {
|
||||
path: '/src/index.ts',
|
||||
content: 'const x = 1;',
|
||||
},
|
||||
},
|
||||
timestamp: 1234567890,
|
||||
});
|
||||
|
||||
const payload = mockEmitter.emitCalls[0].payload as Record<string, unknown>;
|
||||
expect(payload.type).toBe('auto_mode_tool');
|
||||
expect(payload.tool).toEqual({
|
||||
name: 'write_file',
|
||||
input: {
|
||||
path: '/src/index.ts',
|
||||
content: 'const x = 1;',
|
||||
},
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -1,310 +0,0 @@
|
||||
import { describe, it, expect, beforeEach, vi, type Mock } from 'vitest';
|
||||
import { WorktreeResolver, type WorktreeInfo } from '@/services/worktree-resolver.js';
|
||||
import { exec } from 'child_process';
|
||||
|
||||
// Mock child_process
|
||||
vi.mock('child_process', () => ({
|
||||
exec: vi.fn(),
|
||||
}));
|
||||
|
||||
// Create promisified mock helper
|
||||
const mockExecAsync = (
|
||||
impl: (cmd: string, options?: { cwd?: string }) => Promise<{ stdout: string; stderr: string }>
|
||||
) => {
|
||||
(exec as unknown as Mock).mockImplementation(
|
||||
(
|
||||
cmd: string,
|
||||
options: { cwd?: string } | undefined,
|
||||
callback: (error: Error | null, result: { stdout: string; stderr: string }) => void
|
||||
) => {
|
||||
impl(cmd, options)
|
||||
.then((result) => callback(null, result))
|
||||
.catch((error) => callback(error, { stdout: '', stderr: '' }));
|
||||
}
|
||||
);
|
||||
};
|
||||
|
||||
describe('WorktreeResolver', () => {
|
||||
let resolver: WorktreeResolver;
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
resolver = new WorktreeResolver();
|
||||
});
|
||||
|
||||
describe('getCurrentBranch', () => {
|
||||
it('should return branch name when on a branch', async () => {
|
||||
mockExecAsync(async () => ({ stdout: 'main\n', stderr: '' }));
|
||||
|
||||
const branch = await resolver.getCurrentBranch('/test/project');
|
||||
|
||||
expect(branch).toBe('main');
|
||||
});
|
||||
|
||||
it('should return null on detached HEAD (empty output)', async () => {
|
||||
mockExecAsync(async () => ({ stdout: '', stderr: '' }));
|
||||
|
||||
const branch = await resolver.getCurrentBranch('/test/project');
|
||||
|
||||
expect(branch).toBeNull();
|
||||
});
|
||||
|
||||
it('should return null when git command fails', async () => {
|
||||
mockExecAsync(async () => {
|
||||
throw new Error('Not a git repository');
|
||||
});
|
||||
|
||||
const branch = await resolver.getCurrentBranch('/not/a/git/repo');
|
||||
|
||||
expect(branch).toBeNull();
|
||||
});
|
||||
|
||||
it('should trim whitespace from branch name', async () => {
|
||||
mockExecAsync(async () => ({ stdout: ' feature-branch \n', stderr: '' }));
|
||||
|
||||
const branch = await resolver.getCurrentBranch('/test/project');
|
||||
|
||||
expect(branch).toBe('feature-branch');
|
||||
});
|
||||
|
||||
it('should use provided projectPath as cwd', async () => {
|
||||
let capturedCwd: string | undefined;
|
||||
mockExecAsync(async (cmd, options) => {
|
||||
capturedCwd = options?.cwd;
|
||||
return { stdout: 'main\n', stderr: '' };
|
||||
});
|
||||
|
||||
await resolver.getCurrentBranch('/custom/path');
|
||||
|
||||
expect(capturedCwd).toBe('/custom/path');
|
||||
});
|
||||
});
|
||||
|
||||
describe('findWorktreeForBranch', () => {
|
||||
const porcelainOutput = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree /Users/dev/project/.worktrees/feature-x
|
||||
branch refs/heads/feature-x
|
||||
|
||||
worktree /Users/dev/project/.worktrees/feature-y
|
||||
branch refs/heads/feature-y
|
||||
`;
|
||||
|
||||
it('should find worktree by branch name', async () => {
|
||||
mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));
|
||||
|
||||
const path = await resolver.findWorktreeForBranch('/Users/dev/project', 'feature-x');
|
||||
|
||||
expect(path).toBe('/Users/dev/project/.worktrees/feature-x');
|
||||
});
|
||||
|
||||
it('should return null when branch not found', async () => {
|
||||
mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));
|
||||
|
||||
const path = await resolver.findWorktreeForBranch('/Users/dev/project', 'non-existent');
|
||||
|
||||
expect(path).toBeNull();
|
||||
});
|
||||
|
||||
it('should return null when git command fails', async () => {
|
||||
mockExecAsync(async () => {
|
||||
throw new Error('Not a git repository');
|
||||
});
|
||||
|
||||
const path = await resolver.findWorktreeForBranch('/not/a/repo', 'main');
|
||||
|
||||
expect(path).toBeNull();
|
||||
});
|
||||
|
||||
it('should find main worktree', async () => {
|
||||
mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));
|
||||
|
||||
const path = await resolver.findWorktreeForBranch('/Users/dev/project', 'main');
|
||||
|
||||
expect(path).toBe('/Users/dev/project');
|
||||
});
|
||||
|
||||
it('should handle porcelain output without trailing newline', async () => {
|
||||
const noTrailingNewline = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree /Users/dev/project/.worktrees/feature-x
|
||||
branch refs/heads/feature-x`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: noTrailingNewline, stderr: '' }));
|
||||
|
||||
const path = await resolver.findWorktreeForBranch('/Users/dev/project', 'feature-x');
|
||||
|
||||
expect(path).toBe('/Users/dev/project/.worktrees/feature-x');
|
||||
});
|
||||
|
||||
it('should resolve relative paths to absolute', async () => {
|
||||
const relativePathOutput = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree .worktrees/feature-relative
|
||||
branch refs/heads/feature-relative
|
||||
`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: relativePathOutput, stderr: '' }));
|
||||
|
||||
const result = await resolver.findWorktreeForBranch('/Users/dev/project', 'feature-relative');
|
||||
|
||||
// Should resolve to absolute path
|
||||
expect(result).toBe('/Users/dev/project/.worktrees/feature-relative');
|
||||
});
|
||||
|
||||
it('should use projectPath as cwd for git command', async () => {
|
||||
let capturedCwd: string | undefined;
|
||||
mockExecAsync(async (cmd, options) => {
|
||||
capturedCwd = options?.cwd;
|
||||
return { stdout: porcelainOutput, stderr: '' };
|
||||
});
|
||||
|
||||
await resolver.findWorktreeForBranch('/custom/project', 'main');
|
||||
|
||||
expect(capturedCwd).toBe('/custom/project');
|
||||
});
|
||||
});
|
||||
|
||||
describe('listWorktrees', () => {
|
||||
it('should list all worktrees with metadata', async () => {
|
||||
const porcelainOutput = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree /Users/dev/project/.worktrees/feature-x
|
||||
branch refs/heads/feature-x
|
||||
|
||||
worktree /Users/dev/project/.worktrees/feature-y
|
||||
branch refs/heads/feature-y
|
||||
`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees).toHaveLength(3);
|
||||
expect(worktrees[0]).toEqual({
|
||||
path: '/Users/dev/project',
|
||||
branch: 'main',
|
||||
isMain: true,
|
||||
});
|
||||
expect(worktrees[1]).toEqual({
|
||||
path: '/Users/dev/project/.worktrees/feature-x',
|
||||
branch: 'feature-x',
|
||||
isMain: false,
|
||||
});
|
||||
expect(worktrees[2]).toEqual({
|
||||
path: '/Users/dev/project/.worktrees/feature-y',
|
||||
branch: 'feature-y',
|
||||
isMain: false,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return empty array when git command fails', async () => {
|
||||
mockExecAsync(async () => {
|
||||
throw new Error('Not a git repository');
|
||||
});
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/not/a/repo');
|
||||
|
||||
expect(worktrees).toEqual([]);
|
||||
});
|
||||
|
||||
it('should handle detached HEAD worktrees', async () => {
|
||||
const porcelainWithDetached = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree /Users/dev/project/.worktrees/detached-wt
|
||||
detached
|
||||
`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: porcelainWithDetached, stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees).toHaveLength(2);
|
||||
expect(worktrees[1]).toEqual({
|
||||
path: '/Users/dev/project/.worktrees/detached-wt',
|
||||
branch: null, // Detached HEAD has no branch
|
||||
isMain: false,
|
||||
});
|
||||
});
|
||||
|
||||
it('should mark only first worktree as main', async () => {
|
||||
const multipleWorktrees = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree /Users/dev/project/.worktrees/wt1
|
||||
branch refs/heads/branch1
|
||||
|
||||
worktree /Users/dev/project/.worktrees/wt2
|
||||
branch refs/heads/branch2
|
||||
`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: multipleWorktrees, stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees[0].isMain).toBe(true);
|
||||
expect(worktrees[1].isMain).toBe(false);
|
||||
expect(worktrees[2].isMain).toBe(false);
|
||||
});
|
||||
|
||||
it('should resolve relative paths to absolute', async () => {
|
||||
const relativePathOutput = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree .worktrees/relative-wt
|
||||
branch refs/heads/relative-branch
|
||||
`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: relativePathOutput, stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees[1].path).toBe('/Users/dev/project/.worktrees/relative-wt');
|
||||
});
|
||||
|
||||
it('should handle single worktree (main only)', async () => {
|
||||
const singleWorktree = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: singleWorktree, stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees).toHaveLength(1);
|
||||
expect(worktrees[0]).toEqual({
|
||||
path: '/Users/dev/project',
|
||||
branch: 'main',
|
||||
isMain: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle empty git worktree list output', async () => {
|
||||
mockExecAsync(async () => ({ stdout: '', stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees).toEqual([]);
|
||||
});
|
||||
|
||||
it('should handle output without trailing newline', async () => {
|
||||
const noTrailingNewline = `worktree /Users/dev/project
|
||||
branch refs/heads/main
|
||||
|
||||
worktree /Users/dev/project/.worktrees/feature-x
|
||||
branch refs/heads/feature-x`;
|
||||
|
||||
mockExecAsync(async () => ({ stdout: noTrailingNewline, stderr: '' }));
|
||||
|
||||
const worktrees = await resolver.listWorktrees('/Users/dev/project');
|
||||
|
||||
expect(worktrees).toHaveLength(2);
|
||||
expect(worktrees[1].branch).toBe('feature-x');
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -14,13 +14,8 @@ const eslintConfig = defineConfig([
require: 'readonly',
__dirname: 'readonly',
__filename: 'readonly',
setTimeout: 'readonly',
clearTimeout: 'readonly',
},
},
rules: {
'no-unused-vars': ['warn', { argsIgnorePattern: '^_', caughtErrorsIgnorePattern: '^_' }],
},
},
{
files: ['**/*.ts', '**/*.tsx'],
@@ -50,8 +45,6 @@ const eslintConfig = defineConfig([
confirm: 'readonly',
getComputedStyle: 'readonly',
requestAnimationFrame: 'readonly',
cancelAnimationFrame: 'readonly',
alert: 'readonly',
// DOM Element Types
HTMLElement: 'readonly',
HTMLInputElement: 'readonly',
@@ -63,8 +56,6 @@ const eslintConfig = defineConfig([
HTMLParagraphElement: 'readonly',
HTMLImageElement: 'readonly',
Element: 'readonly',
SVGElement: 'readonly',
SVGSVGElement: 'readonly',
// Event Types
Event: 'readonly',
KeyboardEvent: 'readonly',
@@ -73,24 +64,14 @@ const eslintConfig = defineConfig([
CustomEvent: 'readonly',
ClipboardEvent: 'readonly',
WheelEvent: 'readonly',
MouseEvent: 'readonly',
UIEvent: 'readonly',
MediaQueryListEvent: 'readonly',
DataTransfer: 'readonly',
// Web APIs
ResizeObserver: 'readonly',
AbortSignal: 'readonly',
AbortController: 'readonly',
IntersectionObserver: 'readonly',
Audio: 'readonly',
HTMLAudioElement: 'readonly',
ScrollBehavior: 'readonly',
URL: 'readonly',
URLSearchParams: 'readonly',
XMLHttpRequest: 'readonly',
Response: 'readonly',
RequestInit: 'readonly',
RequestCache: 'readonly',
// Timers
setTimeout: 'readonly',
setInterval: 'readonly',
@@ -109,8 +90,6 @@ const eslintConfig = defineConfig([
Electron: 'readonly',
// Console
console: 'readonly',
// Vite defines
__APP_VERSION__: 'readonly',
},
},
plugins: {
@@ -120,13 +99,6 @@ const eslintConfig = defineConfig([
...ts.configs.recommended.rules,
'@typescript-eslint/no-unused-vars': ['warn', { argsIgnorePattern: '^_' }],
'@typescript-eslint/no-explicit-any': 'warn',
'@typescript-eslint/ban-ts-comment': [
'error',
{
'ts-nocheck': 'allow-with-description',
minimumDescriptionLength: 10,
},
],
},
},
globalIgnores([
@@ -107,7 +107,6 @@
"sonner": "2.0.7",
"tailwind-merge": "3.4.0",
"usehooks-ts": "3.1.1",
"zod": "^3.24.1 || ^4.0.0",
"zustand": "5.0.9"
},
"optionalDependencies": {
@@ -170,10 +169,6 @@
"from": "server-bundle/node_modules",
"to": "server/node_modules"
},
{
"from": "server-bundle/libs",
"to": "server/libs"
},
{
"from": "server-bundle/package.json",
"to": "server/package.json"
@@ -29,7 +29,7 @@ async function killProcessOnPort(port) {
try {
await execAsync(`kill -9 ${pid}`);
console.log(`[KillTestServers] Killed process ${pid}`);
} catch (_error) {
} catch (error) {
// Process might have already exited
}
}
@@ -47,7 +47,7 @@ async function killProcessOnPort(port) {
await new Promise((resolve) => setTimeout(resolve, 500));
return;
}
} catch (_error) {
} catch (error) {
// No process on port, which is fine
}
}
@@ -6,26 +6,14 @@ import { SplashScreen } from './components/splash-screen';
import { useSettingsSync } from './hooks/use-settings-sync';
import { useCursorStatusInit } from './hooks/use-cursor-status-init';
import { useProviderAuthInit } from './hooks/use-provider-auth-init';
import { useAppStore } from './store/app-store';
import { TooltipProvider } from '@/components/ui/tooltip';
import './styles/global.css';
import './styles/theme-imports';
import './styles/font-imports';

const logger = createLogger('App');

// Key for localStorage to persist splash screen preference
const DISABLE_SPLASH_KEY = 'automaker-disable-splash';

export default function App() {
const disableSplashScreen = useAppStore((state) => state.disableSplashScreen);

const [showSplash, setShowSplash] = useState(() => {
// Check localStorage for user preference (available synchronously)
const savedPreference = localStorage.getItem(DISABLE_SPLASH_KEY);
if (savedPreference === 'true') {
return false;
}
// Only show splash once per session
if (sessionStorage.getItem('automaker-splash-shown')) {
return false;
@@ -33,11 +21,6 @@ export default function App() {
return true;
});

// Sync the disableSplashScreen setting to localStorage for fast access on next startup
useEffect(() => {
localStorage.setItem(DISABLE_SPLASH_KEY, String(disableSplashScreen));
}, [disableSplashScreen]);

// Clear accumulated PerformanceMeasure entries to prevent memory leak in dev mode
// React's internal scheduler creates performance marks/measures that accumulate without cleanup
useEffect(() => {
@@ -76,9 +59,9 @@ export default function App() {
}, []);

return (
<TooltipProvider delayDuration={300}>
<>
<RouterProvider router={router} />
{showSplash && !disableSplashScreen && <SplashScreen onComplete={handleSplashComplete} />}
</TooltipProvider>
{showSplash && <SplashScreen onComplete={handleSplashComplete} />}
</>
);
}
@@ -68,6 +68,7 @@ export function CodexUsagePopover() {
// Use React Query for data fetching with automatic polling
const {
data: codexUsage,
isLoading,
isFetching,
error: queryError,
dataUpdatedAt,
@@ -40,6 +40,8 @@ interface FileBrowserDialogProps {
initialPath?: string;
}

const MAX_RECENT_FOLDERS = 5;

export function FileBrowserDialog({
open,
onOpenChange,
@@ -191,7 +191,7 @@ export function NewProjectModal({

// Use platform-specific path separator
const pathSep =
typeof window !== 'undefined' && window.electronAPI
typeof window !== 'undefined' && (window as any).electronAPI
? navigator.platform.indexOf('Win') !== -1
? '\\'
: '/'
@@ -15,7 +15,6 @@ import { getAuthenticatedImageUrl } from '@/lib/api-fetch';
import { getHttpApiClient } from '@/lib/http-api-client';
import type { Project } from '@/lib/electron';
import { IconPicker } from './icon-picker';
import { toast } from 'sonner';

interface EditProjectDialogProps {
project: Project;
@@ -26,9 +25,9 @@ interface EditProjectDialogProps {
export function EditProjectDialog({ project, open, onOpenChange }: EditProjectDialogProps) {
const { setProjectName, setProjectIcon, setProjectCustomIcon } = useAppStore();
const [name, setName] = useState(project.name);
const [icon, setIcon] = useState<string | null>(project.icon || null);
const [icon, setIcon] = useState<string | null>((project as any).icon || null);
const [customIconPath, setCustomIconPath] = useState<string | null>(
project.customIconPath || null
(project as any).customIconPath || null
);
const [isUploadingIcon, setIsUploadingIcon] = useState(false);
const fileInputRef = useRef<HTMLInputElement>(null);
@@ -37,10 +36,10 @@ export function EditProjectDialog({ project, open, onOpenChange }: EditProjectDi
if (name.trim() !== project.name) {
setProjectName(project.id, name.trim());
}
if (icon !== project.icon) {
if (icon !== (project as any).icon) {
setProjectIcon(project.id, icon);
}
if (customIconPath !== project.customIconPath) {
if (customIconPath !== (project as any).customIconPath) {
setProjectCustomIcon(project.id, customIconPath);
}
onOpenChange(false);
@@ -53,18 +52,11 @@ export function EditProjectDialog({ project, open, onOpenChange }: EditProjectDi
// Validate file type
const validTypes = ['image/jpeg', 'image/png', 'image/gif', 'image/webp'];
if (!validTypes.includes(file.type)) {
toast.error(
`Invalid file type: ${file.type || 'unknown'}. Please use JPG, PNG, GIF or WebP.`
);
return;
}

// Validate file size (max 5MB for icons - allows animated GIFs)
const maxSize = 5 * 1024 * 1024;
if (file.size > maxSize) {
toast.error(
`File too large (${(file.size / 1024 / 1024).toFixed(2)} MB). Maximum size is 5 MB.`
);
// Validate file size (max 2MB for icons)
if (file.size > 2 * 1024 * 1024) {
return;
}

@@ -80,24 +72,15 @@ export function EditProjectDialog({ project, open, onOpenChange }: EditProjectDi
file.type,
project.path
);

if (result.success && result.path) {
setCustomIconPath(result.path);
// Clear the Lucide icon when custom icon is set
setIcon(null);
toast.success('Icon uploaded successfully');
} else {
toast.error('Failed to upload icon');
}
setIsUploadingIcon(false);
};
reader.onerror = () => {
toast.error('Failed to read file');
setIsUploadingIcon(false);
};
reader.readAsDataURL(file);
} catch {
toast.error('Failed to upload icon');
setIsUploadingIcon(false);
}
};
@@ -179,7 +162,7 @@ export function EditProjectDialog({ project, open, onOpenChange }: EditProjectDi
{isUploadingIcon ? 'Uploading...' : 'Upload Custom Icon'}
</Button>
<p className="text-xs text-muted-foreground mt-1">
PNG, JPG, GIF or WebP. Max 5MB.
PNG, JPG, GIF or WebP. Max 2MB.
</p>
</div>
</div>
@@ -3,7 +3,7 @@
*/

import { useCallback } from 'react';
import { Bell, Check, Trash2 } from 'lucide-react';
import { Bell, Check, Trash2, ExternalLink } from 'lucide-react';
import { useNavigate } from '@tanstack/react-router';
import { useNotificationsStore } from '@/store/notifications-store';
import { useLoadNotifications, useNotificationEvents } from '@/hooks/use-notification-events';
@@ -59,7 +59,7 @@ interface ThemeButtonProps {
|
||||
/** Handler for pointer leave events (used to clear preview) */
|
||||
onPointerLeave: (e: React.PointerEvent) => void;
|
||||
/** Handler for click events (used to select theme) */
|
||||
onClick: (e: React.MouseEvent) => void;
|
||||
onClick: () => void;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -77,7 +77,6 @@ const ThemeButton = memo(function ThemeButton({
|
||||
const Icon = option.icon;
|
||||
return (
|
||||
<button
|
||||
type="button"
|
||||
onPointerEnter={onPointerEnter}
|
||||
onPointerLeave={onPointerLeave}
|
||||
onClick={onClick}
|
||||
@@ -146,10 +145,7 @@ const ThemeColumn = memo(function ThemeColumn({
|
||||
isSelected={selectedTheme === option.value}
|
||||
onPointerEnter={() => onPreviewEnter(option.value)}
|
||||
onPointerLeave={onPreviewLeave}
|
||||
onClick={(e) => {
|
||||
e.stopPropagation();
|
||||
onSelect(option.value);
|
||||
}}
|
||||
onClick={() => onSelect(option.value)}
|
||||
/>
|
||||
))}
|
||||
</div>
|
||||
@@ -197,11 +193,13 @@ export function ProjectContextMenu({
|
||||
const {
|
||||
moveProjectToTrash,
|
||||
theme: globalTheme,
|
||||
setTheme,
|
||||
setProjectTheme,
|
||||
setPreviewTheme,
|
||||
} = useAppStore();
|
||||
const [showRemoveDialog, setShowRemoveDialog] = useState(false);
|
||||
const [showThemeSubmenu, setShowThemeSubmenu] = useState(false);
|
||||
const [removeConfirmed, setRemoveConfirmed] = useState(false);
|
||||
const themeSubmenuRef = useRef<HTMLDivElement>(null);
|
||||
const closeTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
|
||||
|
||||
@@ -319,24 +317,13 @@ export function ProjectContextMenu({
|
||||
|
||||
const handleThemeSelect = useCallback(
|
||||
(value: ThemeMode | typeof USE_GLOBAL_THEME) => {
|
||||
// Clear any pending close timeout to prevent race conditions
|
||||
if (closeTimeoutRef.current) {
|
||||
clearTimeout(closeTimeoutRef.current);
|
||||
closeTimeoutRef.current = null;
|
||||
}
|
||||
|
||||
// Close menu first
|
||||
setShowThemeSubmenu(false);
|
||||
onClose();
|
||||
|
||||
// Then apply theme changes
|
||||
setPreviewTheme(null);
|
||||
const isUsingGlobal = value === USE_GLOBAL_THEME;
|
||||
// Only set project theme - don't change global theme
|
||||
// The UI uses getEffectiveTheme() which handles: previewTheme ?? projectTheme ?? globalTheme
|
||||
setTheme(isUsingGlobal ? globalTheme : value);
|
||||
setProjectTheme(project.id, isUsingGlobal ? null : value);
|
||||
setShowThemeSubmenu(false);
|
||||
},
|
||||
[onClose, project.id, setPreviewTheme, setProjectTheme]
|
||||
[globalTheme, project.id, setPreviewTheme, setProjectTheme, setTheme]
|
||||
);
|
||||
|
||||
const handleConfirmRemove = useCallback(() => {
|
||||
@@ -344,6 +331,7 @@ export function ProjectContextMenu({
|
||||
toast.success('Project removed', {
|
||||
description: `${project.name} has been removed from your projects list`,
|
||||
});
|
||||
setRemoveConfirmed(true);
|
||||
}, [moveProjectToTrash, project.id, project.name]);
|
||||
|
||||
const handleDialogClose = useCallback(
|
||||
@@ -352,6 +340,8 @@ export function ProjectContextMenu({
|
||||
// Close the context menu when dialog closes (whether confirmed or cancelled)
|
||||
// This prevents the context menu from reappearing after dialog interaction
|
||||
if (!isOpen) {
|
||||
// Reset confirmation state
|
||||
setRemoveConfirmed(false);
|
||||
// Always close the context menu when dialog closes
|
||||
onClose();
|
||||
}
|
||||
@@ -440,13 +430,9 @@ export function ProjectContextMenu({
|
||||
<div className="p-2">
|
||||
{/* Use Global Option */}
|
||||
<button
|
||||
type="button"
|
||||
onPointerEnter={() => handlePreviewEnter(globalTheme)}
|
||||
onPointerLeave={handlePreviewLeave}
|
||||
onClick={(e) => {
|
||||
e.stopPropagation();
|
||||
handleThemeSelect(USE_GLOBAL_THEME);
|
||||
}}
|
||||
onClick={() => handleThemeSelect(USE_GLOBAL_THEME)}
|
||||
className={cn(
|
||||
'w-full flex items-center gap-2 px-3 py-2 rounded-md',
|
||||
'text-sm font-medium text-left',
|
||||
|
||||
@@ -1,4 +1,3 @@
|
||||
import { useState } from 'react';
|
||||
import { Folder, LucideIcon } from 'lucide-react';
|
||||
import * as LucideIcons from 'lucide-react';
|
||||
import { cn, sanitizeForTestId } from '@/lib/utils';
|
||||
@@ -20,8 +19,6 @@ export function ProjectSwitcherItem({
|
||||
onClick,
|
||||
onContextMenu,
|
||||
}: ProjectSwitcherItemProps) {
|
||||
const [imageError, setImageError] = useState(false);
|
||||
|
||||
// Convert index to hotkey label: 0 -> "1", 1 -> "2", ..., 8 -> "9", 9 -> "0"
|
||||
const hotkeyLabel =
|
||||
hotkeyIndex !== undefined && hotkeyIndex >= 0 && hotkeyIndex <= 9
|
||||
@@ -38,7 +35,7 @@ export function ProjectSwitcherItem({
|
||||
};
|
||||
|
||||
const IconComponent = getIconComponent();
|
||||
const hasCustomIcon = !!project.customIconPath && !imageError;
|
||||
const hasCustomIcon = !!project.customIconPath;
|
||||
|
||||
// Combine project.id with sanitized name for uniqueness and readability
|
||||
// Format: project-switcher-{id}-{sanitizedName}
|
||||
@@ -77,7 +74,6 @@ export function ProjectSwitcherItem({
|
||||
'w-8 h-8 rounded-lg object-cover transition-all duration-200',
|
||||
isActive ? 'ring-1 ring-brand-500/50' : 'group-hover:scale-110'
|
||||
)}
|
||||
onError={() => setImageError(true)}
|
||||
/>
|
||||
) : (
|
||||
<IconComponent
|
||||
|
||||
@@ -100,8 +100,14 @@ export function ProjectSelectorWithOptions({
|
||||
|
||||
const { sensors, handleDragEnd } = useDragAndDrop({ projects, reorderProjects });
|
||||
|
||||
const { globalTheme, setProjectTheme, setPreviewTheme, handlePreviewEnter, handlePreviewLeave } =
|
||||
useProjectTheme();
|
||||
const {
|
||||
globalTheme,
|
||||
setTheme,
|
||||
setProjectTheme,
|
||||
setPreviewTheme,
|
||||
handlePreviewEnter,
|
||||
handlePreviewLeave,
|
||||
} = useProjectTheme();
|
||||
|
||||
if (!sidebarOpen || projects.length === 0) {
|
||||
return null;
|
||||
@@ -275,8 +281,11 @@ export function ProjectSelectorWithOptions({
|
||||
onValueChange={(value) => {
|
||||
if (currentProject) {
|
||||
setPreviewTheme(null);
|
||||
// Only set project theme - don't change global theme
|
||||
// The UI uses getEffectiveTheme() which handles: previewTheme ?? projectTheme ?? globalTheme
|
||||
if (value !== '') {
|
||||
setTheme(value as ThemeMode);
|
||||
} else {
|
||||
setTheme(globalTheme);
|
||||
}
|
||||
setProjectTheme(
|
||||
currentProject.id,
|
||||
value === '' ? null : (value as ThemeMode)
|
||||
|
||||
@@ -5,7 +5,7 @@ import { formatShortcut } from '@/store/app-store';
|
||||
import { Activity, Settings, BookOpen, MessageSquare, ExternalLink } from 'lucide-react';
|
||||
import { useOSDetection } from '@/hooks/use-os-detection';
|
||||
import { getElectronAPI } from '@/lib/electron';
|
||||
import { Tooltip, TooltipContent, TooltipTrigger } from '@/components/ui/tooltip';
|
||||
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from '@/components/ui/tooltip';
|
||||
|
||||
function getOSAbbreviation(os: string): string {
|
||||
switch (os) {
|
||||
@@ -72,14 +72,68 @@ export function SidebarFooter({
|
||||
<div className="flex flex-col items-center py-2 px-2 gap-1">
|
||||
{/* Running Agents */}
|
||||
{!hideRunningAgents && (
|
||||
<TooltipProvider delayDuration={0}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={() => navigate({ to: '/running-agents' })}
|
||||
className={cn(
|
||||
'relative flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'transition-all duration-200 ease-out titlebar-no-drag',
|
||||
isActiveRoute('running-agents')
|
||||
? [
|
||||
'bg-gradient-to-r from-brand-500/20 via-brand-500/15 to-brand-600/10',
|
||||
'text-foreground border border-brand-500/30',
|
||||
'shadow-md shadow-brand-500/10',
|
||||
]
|
||||
: [
|
||||
'text-muted-foreground hover:text-foreground',
|
||||
'hover:bg-accent/50 border border-transparent hover:border-border/40',
|
||||
]
|
||||
)}
|
||||
data-testid="running-agents-link"
|
||||
>
|
||||
<Activity
|
||||
className={cn(
|
||||
'w-[18px] h-[18px]',
|
||||
isActiveRoute('running-agents') && 'text-brand-500'
|
||||
)}
|
||||
/>
|
||||
{runningAgentsCount > 0 && (
|
||||
<span
|
||||
className={cn(
|
||||
'absolute -top-1 -right-1 flex items-center justify-center',
|
||||
'min-w-4 h-4 px-1 text-[9px] font-bold rounded-full',
|
||||
'bg-brand-500 text-white shadow-sm'
|
||||
)}
|
||||
>
|
||||
{runningAgentsCount > 99 ? '99' : runningAgentsCount}
|
||||
</span>
|
||||
)}
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Running Agents
|
||||
{runningAgentsCount > 0 && (
|
||||
<span className="ml-2 px-1.5 py-0.5 bg-brand-500 text-white rounded-full text-[10px]">
|
||||
{runningAgentsCount}
|
||||
</span>
|
||||
)}
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
)}
|
||||
|
||||
{/* Settings */}
|
||||
<TooltipProvider delayDuration={0}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={() => navigate({ to: '/running-agents' })}
|
||||
onClick={() => navigate({ to: '/settings' })}
|
||||
className={cn(
|
||||
'relative flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'transition-all duration-200 ease-out titlebar-no-drag',
|
||||
isActiveRoute('running-agents')
|
||||
isActiveRoute('settings')
|
||||
? [
|
||||
'bg-gradient-to-r from-brand-500/20 via-brand-500/15 to-brand-600/10',
|
||||
'text-foreground border border-brand-500/30',
|
||||
@@ -90,115 +144,72 @@ export function SidebarFooter({
|
||||
'hover:bg-accent/50 border border-transparent hover:border-border/40',
|
||||
]
|
||||
)}
|
||||
data-testid="running-agents-link"
|
||||
data-testid="settings-button"
|
||||
>
|
||||
<Activity
|
||||
<Settings
|
||||
className={cn(
|
||||
'w-[18px] h-[18px]',
|
||||
isActiveRoute('running-agents') && 'text-brand-500'
|
||||
isActiveRoute('settings') && 'text-brand-500'
|
||||
)}
|
||||
/>
|
||||
{runningAgentsCount > 0 && (
|
||||
<span
|
||||
className={cn(
|
||||
'absolute -top-1 -right-1 flex items-center justify-center',
|
||||
'min-w-4 h-4 px-1 text-[9px] font-bold rounded-full',
|
||||
'bg-brand-500 text-white shadow-sm'
|
||||
)}
|
||||
>
|
||||
{runningAgentsCount > 99 ? '99' : runningAgentsCount}
|
||||
</span>
|
||||
)}
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Running Agents
|
||||
{runningAgentsCount > 0 && (
|
||||
<span className="ml-2 px-1.5 py-0.5 bg-brand-500 text-white rounded-full text-[10px]">
|
||||
{runningAgentsCount}
|
||||
</span>
|
||||
)}
|
||||
Global Settings
|
||||
<span className="ml-2 px-1.5 py-0.5 bg-muted rounded text-[10px] font-mono text-muted-foreground">
|
||||
{formatShortcut(shortcuts.settings, true)}
|
||||
</span>
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
)}
|
||||
|
||||
{/* Settings */}
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={() => navigate({ to: '/settings' })}
|
||||
className={cn(
|
||||
'flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'transition-all duration-200 ease-out titlebar-no-drag',
|
||||
isActiveRoute('settings')
|
||||
? [
|
||||
'bg-gradient-to-r from-brand-500/20 via-brand-500/15 to-brand-600/10',
|
||||
'text-foreground border border-brand-500/30',
|
||||
'shadow-md shadow-brand-500/10',
|
||||
]
|
||||
: [
|
||||
'text-muted-foreground hover:text-foreground',
|
||||
'hover:bg-accent/50 border border-transparent hover:border-border/40',
|
||||
]
|
||||
)}
|
||||
data-testid="settings-button"
|
||||
>
|
||||
<Settings
|
||||
className={cn('w-[18px] h-[18px]', isActiveRoute('settings') && 'text-brand-500')}
|
||||
/>
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Global Settings
|
||||
<span className="ml-2 px-1.5 py-0.5 bg-muted rounded text-[10px] font-mono text-muted-foreground">
|
||||
{formatShortcut(shortcuts.settings, true)}
|
||||
</span>
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
|
||||
{/* Documentation */}
|
||||
{!hideWiki && (
|
||||
<TooltipProvider delayDuration={0}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={handleWikiClick}
|
||||
className={cn(
|
||||
'flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'text-muted-foreground hover:text-foreground',
|
||||
'hover:bg-accent/50 border border-transparent hover:border-border/40',
|
||||
'transition-all duration-200 ease-out titlebar-no-drag'
|
||||
)}
|
||||
data-testid="documentation-button"
|
||||
>
|
||||
<BookOpen className="w-[18px] h-[18px]" />
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Documentation
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
)}
|
||||
|
||||
{/* Feedback */}
|
||||
<TooltipProvider delayDuration={0}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={handleWikiClick}
|
||||
onClick={handleFeedbackClick}
|
||||
className={cn(
|
||||
'flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'text-muted-foreground hover:text-foreground',
|
||||
'hover:bg-accent/50 border border-transparent hover:border-border/40',
|
||||
'transition-all duration-200 ease-out titlebar-no-drag'
|
||||
)}
|
||||
data-testid="documentation-button"
|
||||
data-testid="feedback-button"
|
||||
>
|
||||
<BookOpen className="w-[18px] h-[18px]" />
|
||||
<MessageSquare className="w-[18px] h-[18px]" />
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Documentation
|
||||
Feedback
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
)}
|
||||
|
||||
{/* Feedback */}
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={handleFeedbackClick}
|
||||
className={cn(
|
||||
'flex items-center justify-center w-10 h-10 rounded-xl',
|
||||
'text-muted-foreground hover:text-foreground',
|
||||
'hover:bg-accent/50 border border-transparent hover:border-border/40',
|
||||
'transition-all duration-200 ease-out titlebar-no-drag'
|
||||
)}
|
||||
data-testid="feedback-button"
|
||||
>
|
||||
<MessageSquare className="w-[18px] h-[18px]" />
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Feedback
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
</div>
|
||||
</div>
|
||||
);
|
||||
|
||||
@@ -15,7 +15,7 @@ import {
|
||||
DropdownMenuSeparator,
|
||||
DropdownMenuTrigger,
|
||||
} from '@/components/ui/dropdown-menu';
|
||||
import { Tooltip, TooltipContent, TooltipTrigger } from '@/components/ui/tooltip';
|
||||
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from '@/components/ui/tooltip';
|
||||
|
||||
interface SidebarHeaderProps {
|
||||
sidebarOpen: boolean;
|
||||
@@ -92,74 +92,78 @@ export function SidebarHeader({
|
||||
isMac && isElectron() && 'pt-[10px]'
|
||||
)}
|
||||
>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={handleLogoClick}
|
||||
className="group flex flex-col items-center"
|
||||
data-testid="logo-button"
|
||||
>
|
||||
<svg
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
viewBox="0 0 256 256"
|
||||
role="img"
|
||||
aria-label="Automaker Logo"
|
||||
className="size-8 group-hover:rotate-12 transition-transform duration-300 ease-out"
|
||||
<TooltipProvider delayDuration={0}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<button
|
||||
onClick={handleLogoClick}
|
||||
className="group flex flex-col items-center"
|
||||
data-testid="logo-button"
|
||||
>
|
||||
<defs>
|
||||
<linearGradient
|
||||
id="bg-collapsed"
|
||||
x1="0"
|
||||
y1="0"
|
||||
x2="256"
|
||||
y2="256"
|
||||
gradientUnits="userSpaceOnUse"
|
||||
>
|
||||
<stop offset="0%" style={{ stopColor: 'var(--brand-400)' }} />
|
||||
<stop offset="100%" style={{ stopColor: 'var(--brand-600)' }} />
|
||||
</linearGradient>
|
||||
</defs>
|
||||
<rect x="16" y="16" width="224" height="224" rx="56" fill="url(#bg-collapsed)" />
|
||||
<g
|
||||
fill="none"
|
||||
stroke="#FFFFFF"
|
||||
strokeWidth="20"
|
||||
strokeLinecap="round"
|
||||
strokeLinejoin="round"
|
||||
<svg
|
||||
xmlns="http://www.w3.org/2000/svg"
|
||||
viewBox="0 0 256 256"
|
||||
role="img"
|
||||
aria-label="Automaker Logo"
|
||||
className="size-8 group-hover:rotate-12 transition-transform duration-300 ease-out"
|
||||
>
|
||||
<path d="M92 92 L52 128 L92 164" />
|
||||
<path d="M144 72 L116 184" />
|
||||
<path d="M164 92 L204 128 L164 164" />
|
||||
</g>
|
||||
</svg>
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Go to Dashboard
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
<defs>
|
||||
<linearGradient
|
||||
id="bg-collapsed"
|
||||
x1="0"
|
||||
y1="0"
|
||||
x2="256"
|
||||
y2="256"
|
||||
gradientUnits="userSpaceOnUse"
|
||||
>
|
||||
<stop offset="0%" style={{ stopColor: 'var(--brand-400)' }} />
|
||||
<stop offset="100%" style={{ stopColor: 'var(--brand-600)' }} />
|
||||
</linearGradient>
|
||||
</defs>
|
||||
<rect x="16" y="16" width="224" height="224" rx="56" fill="url(#bg-collapsed)" />
|
||||
<g
|
||||
fill="none"
|
||||
stroke="#FFFFFF"
|
||||
strokeWidth="20"
|
||||
strokeLinecap="round"
|
||||
strokeLinejoin="round"
|
||||
>
|
||||
<path d="M92 92 L52 128 L92 164" />
|
||||
<path d="M144 72 L116 184" />
|
||||
<path d="M164 92 L204 128 L164 164" />
|
||||
</g>
|
||||
</svg>
|
||||
</button>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
Go to Dashboard
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
|
||||
{/* Collapsed project icon with dropdown */}
|
||||
{currentProject && (
|
||||
<>
|
||||
<div className="w-full h-px bg-border/40 my-2" />
|
||||
<DropdownMenu open={dropdownOpen} onOpenChange={setDropdownOpen}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<DropdownMenuTrigger asChild>
|
||||
<button
|
||||
onContextMenu={(e) => onProjectContextMenu(currentProject, e)}
|
||||
className="p-1 rounded-lg hover:bg-accent/50 transition-colors"
|
||||
data-testid="collapsed-project-button"
|
||||
>
|
||||
{renderProjectIcon(currentProject)}
|
||||
</button>
|
||||
</DropdownMenuTrigger>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
{currentProject.name}
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
<TooltipProvider delayDuration={0}>
|
||||
<Tooltip>
|
||||
<TooltipTrigger asChild>
|
||||
<DropdownMenuTrigger asChild>
|
||||
<button
|
||||
onContextMenu={(e) => onProjectContextMenu(currentProject, e)}
|
||||
className="p-1 rounded-lg hover:bg-accent/50 transition-colors"
|
||||
data-testid="collapsed-project-button"
|
||||
>
|
||||
{renderProjectIcon(currentProject)}
|
||||
</button>
|
||||
</DropdownMenuTrigger>
|
||||
</TooltipTrigger>
|
||||
<TooltipContent side="right" sideOffset={8}>
|
||||
{currentProject.name}
|
||||
</TooltipContent>
|
||||
</Tooltip>
|
||||
</TooltipProvider>
|
||||
<DropdownMenuContent
|
||||
align="start"
|
||||
side="right"
|
||||
|
||||
@@ -1,14 +1,10 @@
|
||||
import { useCallback, useEffect, useRef } from 'react';
|
||||
import { useState, useCallback, useEffect, useRef } from 'react';
|
||||
import type { NavigateOptions } from '@tanstack/react-router';
|
||||
import { ChevronDown, Wrench, Github, Folder } from 'lucide-react';
|
||||
import * as LucideIcons from 'lucide-react';
|
||||
import type { LucideIcon } from 'lucide-react';
|
||||
import { ChevronDown, Wrench, Github } from 'lucide-react';
|
||||
import { cn } from '@/lib/utils';
|
||||
import { formatShortcut, useAppStore } from '@/store/app-store';
|
||||
import { getAuthenticatedImageUrl } from '@/lib/api-fetch';
|
||||
import { formatShortcut } from '@/store/app-store';
|
||||
import type { NavSection } from '../types';
|
||||
import type { Project } from '@/lib/electron';
|
||||
import type { SidebarStyle } from '@automaker/types';
|
||||
import { Spinner } from '@/components/ui/spinner';
|
||||
import {
|
||||
DropdownMenu,
|
||||
@@ -16,7 +12,7 @@ import {
|
||||
DropdownMenuItem,
|
||||
DropdownMenuTrigger,
|
||||
} from '@/components/ui/dropdown-menu';
|
||||
import { Tooltip, TooltipContent, TooltipTrigger } from '@/components/ui/tooltip';
|
||||
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from '@/components/ui/tooltip';
|
||||
|
||||
// Map section labels to icons
|
||||
const sectionIcons: Record<string, React.ComponentType<{ className?: string }>> = {
|
||||
@@ -27,7 +23,6 @@ const sectionIcons: Record<string, React.ComponentType<{ className?: string }>>
|
||||
interface SidebarNavigationProps {
|
||||
currentProject: Project | null;
|
||||
sidebarOpen: boolean;
|
||||
sidebarStyle: SidebarStyle;
|
||||
navSections: NavSection[];
|
||||
isActiveRoute: (id: string) => boolean;
|
||||
navigate: (opts: NavigateOptions) => void;
|
||||
@@ -37,7 +32,6 @@ interface SidebarNavigationProps {
|
||||
export function SidebarNavigation({
|
||||
currentProject,
|
||||
sidebarOpen,
|
||||
sidebarStyle,
|
||||
navSections,
|
||||
isActiveRoute,
|
||||
navigate,
|
||||
@@ -45,26 +39,21 @@ export function SidebarNavigation({
|
||||
}: SidebarNavigationProps) {
|
||||
const navRef = useRef<HTMLElement>(null);
|
||||
|
||||
// Get collapsed state from store (persisted across restarts)
|
||||
const { collapsedNavSections, setCollapsedNavSections, toggleNavSection } = useAppStore();
|
||||
// Track collapsed state for each collapsible section
|
||||
const [collapsedSections, setCollapsedSections] = useState<Record<string, boolean>>({});
|
||||
|
||||
// Initialize collapsed state when sections change (e.g., GitHub section appears)
|
||||
// Only set defaults for sections that don't have a persisted state
|
||||
useEffect(() => {
|
||||
let hasNewSections = false;
|
||||
const updated = { ...collapsedNavSections };
|
||||
|
||||
navSections.forEach((section) => {
|
||||
if (section.collapsible && section.label && !(section.label in updated)) {
|
||||
updated[section.label] = section.defaultCollapsed ?? false;
|
||||
hasNewSections = true;
|
||||
}
|
||||
setCollapsedSections((prev) => {
|
||||
const updated = { ...prev };
|
||||
navSections.forEach((section) => {
|
||||
if (section.collapsible && section.label && !(section.label in updated)) {
|
||||
updated[section.label] = section.defaultCollapsed ?? false;
|
||||
}
|
||||
});
|
||||
return updated;
|
||||
});
|
||||
|
||||
if (hasNewSections) {
|
||||
setCollapsedNavSections(updated);
|
||||
}
|
||||
}, [navSections, collapsedNavSections, setCollapsedNavSections]);
|
||||
}, [navSections]);
|
||||
|
||||
// Check scroll state
|
||||
const checkScrollState = useCallback(() => {
|
||||
@@ -88,7 +77,14 @@ export function SidebarNavigation({
|
||||
nav.removeEventListener('scroll', checkScrollState);
|
||||
resizeObserver.disconnect();
|
||||
};
|
||||
}, [checkScrollState, collapsedNavSections]);
|
||||
}, [checkScrollState, collapsedSections]);
|
||||
|
||||
const toggleSection = useCallback((label: string) => {
|
||||
setCollapsedSections((prev) => ({
|
||||
...prev,
|
||||
[label]: !prev[label],
|
||||
}));
|
||||
}, []);
|
||||
|
||||
// Filter sections: always show non-project sections, only show project sections when project exists
|
||||
const visibleSections = navSections.filter((section) => {
|
||||
@@ -100,50 +96,11 @@ export function SidebarNavigation({
|
||||
return !!currentProject;
|
||||
});
|
||||
|
||||
// Get the icon component for the current project
|
||||
const getProjectIcon = (): LucideIcon => {
|
||||
if (currentProject?.icon && currentProject.icon in LucideIcons) {
|
||||
return (LucideIcons as unknown as Record<string, LucideIcon>)[currentProject.icon];
|
||||
}
|
||||
return Folder;
|
||||
};
|
||||
|
||||
const ProjectIcon = getProjectIcon();
|
||||
const hasCustomIcon = !!currentProject?.customIconPath;
|
||||
|
||||
return (
|
||||
<nav
|
||||
ref={navRef}
|
||||
className={cn(
|
||||
'flex-1 overflow-y-auto scrollbar-hide px-3 pb-2',
|
||||
// Add top padding in discord mode since there's no header
|
||||
sidebarStyle === 'discord' ? 'pt-3' : 'mt-1'
|
||||
)}
|
||||
>
|
||||
{/* Project name display for classic/discord mode */}
|
||||
{sidebarStyle === 'discord' && currentProject && sidebarOpen && (
|
||||
<div className="mb-3">
|
||||
<div className="flex items-center gap-2.5 px-3 py-2">
|
||||
{hasCustomIcon ? (
|
||||
<img
|
||||
src={getAuthenticatedImageUrl(currentProject.customIconPath!, currentProject.path)}
|
||||
alt={currentProject.name}
|
||||
className="w-5 h-5 rounded object-cover"
|
||||
/>
|
||||
) : (
|
||||
<ProjectIcon className="w-5 h-5 text-brand-500 shrink-0" />
|
||||
)}
|
||||
<span className="text-sm font-medium text-foreground truncate">
|
||||
{currentProject.name}
|
||||
</span>
|
||||
</div>
|
||||
<div className="h-px bg-border/40 mx-1 mt-1" />
|
||||
</div>
|
||||
)}
|
||||
|
||||
<nav ref={navRef} className={cn('flex-1 overflow-y-auto scrollbar-hide px-3 pb-2 mt-1')}>
|
||||
{/* Navigation sections */}
|
||||
{visibleSections.map((section, sectionIdx) => {
|
||||
const isCollapsed = section.label ? collapsedNavSections[section.label] : false;
|
||||
const isCollapsed = section.label ? collapsedSections[section.label] : false;
|
||||
const isCollapsible = section.collapsible && section.label && sidebarOpen;
|
||||
|
||||
const SectionIcon = section.label ? sectionIcons[section.label] : null;
|
||||
@@ -153,37 +110,21 @@ export function SidebarNavigation({
            {/* Section Label - clickable if collapsible (expanded sidebar) */}
            {section.label && sidebarOpen && (
              <button
                onClick={() => isCollapsible && toggleNavSection(section.label!)}
                onClick={() => isCollapsible && toggleSection(section.label!)}
                className={cn(
                  'group flex items-center w-full px-3 py-1.5 mb-1 rounded-md',
                  'transition-all duration-200 ease-out',
                  isCollapsible
                    ? [
                        'cursor-pointer',
                        'hover:bg-accent/50 hover:text-foreground',
                        'border border-transparent hover:border-border/40',
                      ]
                    : 'cursor-default'
                  'flex items-center w-full px-3 mb-1.5',
                  isCollapsible && 'cursor-pointer hover:text-foreground'
                )}
                disabled={!isCollapsible}
              >
                <span
                  className={cn(
                    'text-[10px] font-semibold uppercase tracking-widest transition-colors duration-200',
                    isCollapsible
                      ? 'text-muted-foreground/70 group-hover:text-foreground'
                      : 'text-muted-foreground/70'
                  )}
                >
                <span className="text-[10px] font-semibold text-muted-foreground/70 uppercase tracking-widest">
                  {section.label}
                </span>
                {isCollapsible && (
                  <ChevronDown
                    className={cn(
                      'w-3 h-3 ml-auto transition-all duration-200',
                      isCollapsed
                        ? '-rotate-90 text-muted-foreground/50 group-hover:text-muted-foreground'
                        : 'text-muted-foreground/50 group-hover:text-muted-foreground'
                      'w-3 h-3 ml-auto text-muted-foreground/50 transition-transform duration-200',
                      isCollapsed && '-rotate-90'
                    )}
                  />
                )}
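Both styling variants in this hunk lean on `cn()` accepting booleans, ternaries, and nested arrays. Assuming `cn` is the usual clsx + tailwind-merge helper from `@/lib/utils` (the import is visible elsewhere in the diff; the implementation is not), here is a minimal sketch of how those conditional inputs resolve:

```tsx
// Sketch under the assumption that cn() wraps clsx + tailwind-merge,
// as in a typical shadcn/ui setup; the real helper lives in '@/lib/utils'.
import { clsx, type ClassValue } from 'clsx';
import { twMerge } from 'tailwind-merge';

const cn = (...inputs: ClassValue[]) => twMerge(clsx(inputs));

const isCollapsible = true;

// Falsy values are dropped, arrays are flattened, and conflicting Tailwind
// utilities are de-duplicated with the later class winning.
const simple = cn(
  'flex items-center w-full px-3 mb-1.5',
  isCollapsible && 'cursor-pointer hover:text-foreground'
);

const detailed = cn(
  'group flex items-center w-full px-3 py-1.5 mb-1 rounded-md',
  isCollapsible
    ? ['cursor-pointer', 'hover:bg-accent/50 hover:text-foreground']
    : 'cursor-default'
);
```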
@@ -193,25 +134,27 @@ export function SidebarNavigation({
            {/* Section icon with dropdown (collapsed sidebar) */}
            {section.label && !sidebarOpen && SectionIcon && section.collapsible && isCollapsed && (
              <DropdownMenu>
                <Tooltip>
                  <TooltipTrigger asChild>
                    <DropdownMenuTrigger asChild>
                      <button
                        className={cn(
                          'group flex items-center justify-center w-full py-2 rounded-lg',
                          'text-muted-foreground hover:text-foreground',
                          'hover:bg-accent/50 border border-transparent hover:border-border/40',
                          'transition-all duration-200 ease-out'
                        )}
                      >
                        <SectionIcon className="w-[18px] h-[18px]" />
                      </button>
                    </DropdownMenuTrigger>
                  </TooltipTrigger>
                  <TooltipContent side="right" sideOffset={8}>
                    {section.label}
                  </TooltipContent>
                </Tooltip>
                <TooltipProvider delayDuration={0}>
                  <Tooltip>
                    <TooltipTrigger asChild>
                      <DropdownMenuTrigger asChild>
                        <button
                          className={cn(
                            'group flex items-center justify-center w-full py-2 rounded-lg',
                            'text-muted-foreground hover:text-foreground',
                            'hover:bg-accent/50 border border-transparent hover:border-border/40',
                            'transition-all duration-200 ease-out'
                          )}
                        >
                          <SectionIcon className="w-[18px] h-[18px]" />
                        </button>
                      </DropdownMenuTrigger>
                    </TooltipTrigger>
                    <TooltipContent side="right" sideOffset={8}>
                      {section.label}
                    </TooltipContent>
                  </Tooltip>
                </TooltipProvider>
                <DropdownMenuContent side="right" align="start" sideOffset={8} className="w-48">
                  {section.items.map((item) => {
                    const ItemIcon = item.icon;

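Both variants nest `TooltipTrigger asChild` around `DropdownMenuTrigger asChild` so a single button drives the tooltip and the menu: with `asChild`, each Radix trigger merges its props onto the child instead of rendering its own element. The only difference between the two sides is whether a local `TooltipProvider delayDuration={0}` wraps the tooltip. A minimal, self-contained sketch of the shared-trigger pattern, assuming the usual shadcn/ui re-exports that this file already imports:

```tsx
// Minimal sketch of the shared-trigger pattern; component paths assume the
// standard shadcn/ui wrappers around the Radix primitives used in the diff.
import {
  DropdownMenu,
  DropdownMenuContent,
  DropdownMenuItem,
  DropdownMenuTrigger,
} from '@/components/ui/dropdown-menu';
import {
  Tooltip,
  TooltipContent,
  TooltipProvider,
  TooltipTrigger,
} from '@/components/ui/tooltip';

export function IconWithMenu({ label }: { label: string }) {
  return (
    <DropdownMenu>
      <TooltipProvider delayDuration={0}>
        <Tooltip>
          <TooltipTrigger asChild>
            {/* asChild on both triggers means this single <button> receives
                both the tooltip props and the menu-trigger props. */}
            <DropdownMenuTrigger asChild>
              <button aria-label={label}>{label[0]}</button>
            </DropdownMenuTrigger>
          </TooltipTrigger>
          <TooltipContent side="right">{label}</TooltipContent>
        </Tooltip>
      </TooltipProvider>
      <DropdownMenuContent side="right" align="start">
        <DropdownMenuItem>Example item</DropdownMenuItem>
      </DropdownMenuContent>
    </DropdownMenu>
  );
}
```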
@@ -9,15 +9,19 @@ export const ThemeMenuItem = memo(function ThemeMenuItem({
}: ThemeMenuItemProps) {
  const Icon = option.icon;
  return (
    <DropdownMenuRadioItem
      value={option.value}
      data-testid={`project-theme-${option.value}`}
      className="text-xs py-1.5"
    <div
      key={option.value}
      onPointerEnter={() => onPreviewEnter(option.value)}
      onPointerLeave={onPreviewLeave}
    >
      <Icon className="w-3.5 h-3.5 mr-1.5" style={{ color: option.color }} />
      <span>{option.label}</span>
    </DropdownMenuRadioItem>
      <DropdownMenuRadioItem
        value={option.value}
        data-testid={`project-theme-${option.value}`}
        className="text-xs py-1.5"
      >
        <Icon className="w-3.5 h-3.5 mr-1.5" style={{ color: option.color }} />
        <span>{option.label}</span>
      </DropdownMenuRadioItem>
    </div>
  );
});

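One side renders the radio item directly; the other wraps it in a `div` whose `onPointerEnter` / `onPointerLeave` call preview callbacks passed in as props. A sketch of how a parent might supply those callbacks for live theme preview — only the prop names come from the hunk, the preview mechanism here is entirely assumed:

```tsx
// Hypothetical parent wiring for onPreviewEnter / onPreviewLeave; only the prop
// names are taken from the diff, the restore-on-leave behavior is an assumption.
import { useCallback, useState } from 'react';

export function useThemePreview(appliedTheme: string, applyTheme: (value: string) => void) {
  const [previewing, setPreviewing] = useState<string | null>(null);

  // Hovering a menu item temporarily applies that theme.
  const onPreviewEnter = useCallback(
    (value: string) => {
      setPreviewing(value);
      applyTheme(value);
    },
    [applyTheme]
  );

  // Leaving the item restores whatever theme is actually selected.
  const onPreviewLeave = useCallback(() => {
    setPreviewing(null);
    applyTheme(appliedTheme);
  }, [appliedTheme, applyTheme]);

  return { previewing, onPreviewEnter, onPreviewLeave };
}
```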
@@ -53,7 +53,6 @@ export function Sidebar() {
    trashedProjects,
    currentProject,
    sidebarOpen,
    sidebarStyle,
    mobileSidebarHidden,
    projectHistory,
    upsertAndSetCurrentProject,
@@ -382,21 +381,17 @@ export function Sidebar() {
      )}

      <div className="flex-1 flex flex-col overflow-hidden">
        {/* Only show header in unified mode - in discord mode, ProjectSwitcher has the logo */}
        {sidebarStyle === 'unified' && (
          <SidebarHeader
            sidebarOpen={sidebarOpen}
            currentProject={currentProject}
            onNewProject={handleNewProject}
            onOpenFolder={handleOpenFolder}
            onProjectContextMenu={handleContextMenu}
          />
        )}
        <SidebarHeader
          sidebarOpen={sidebarOpen}
          currentProject={currentProject}
          onNewProject={handleNewProject}
          onOpenFolder={handleOpenFolder}
          onProjectContextMenu={handleContextMenu}
        />

        <SidebarNavigation
          currentProject={currentProject}
          sidebarOpen={sidebarOpen}
          sidebarStyle={sidebarStyle}
          navSections={navSections}
          isActiveRoute={isActiveRoute}
          navigate={navigate}

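The call site above pins down most of SidebarNavigation's props. A rough sketch of the prop shape inferred from it and from the JSX earlier in the diff — every type below is an assumption, only the prop and field names appear in the hunks:

```tsx
// Inferred from the call site and the component body shown in the diff;
// all types (and the `path` field on items) are assumptions.
import type { LucideIcon } from 'lucide-react';

interface NavSectionItem {
  label: string;
  icon: LucideIcon;
  path: string;
}

interface NavSection {
  label?: string;
  collapsible?: boolean;
  defaultCollapsed?: boolean;
  items: NavSectionItem[];
}

interface SidebarNavigationProps {
  currentProject: { name: string; path: string; icon?: string; customIconPath?: string } | null;
  sidebarOpen: boolean;
  sidebarStyle: 'unified' | 'discord';
  navSections: NavSection[];
  isActiveRoute: (path: string) => boolean;
  navigate: (path: string) => void;
}
```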
@@ -1,4 +1,8 @@
import * as React from 'react';
import { Settings2 } from 'lucide-react';
import { cn } from '@/lib/utils';
import { Button } from '@/components/ui/button';
import { Popover, PopoverContent, PopoverTrigger } from '@/components/ui/popover';
import { useAppStore } from '@/store/app-store';
import type { ModelAlias, CursorModelId, PhaseModelKey, PhaseModelEntry } from '@automaker/types';
import { PhaseModelSelector } from '@/components/views/settings-view/model-defaults/phase-model-selector';

@@ -70,6 +74,12 @@ export function ModelOverrideTrigger({
    lg: 'h-10 w-10',
  };

  const iconSizes = {
    sm: 'w-3.5 h-3.5',
    md: 'w-4 h-4',
    lg: 'w-5 h-5',
  };

  // For icon variant, wrap PhaseModelSelector and hide text/chevron with CSS
  if (variant === 'icon') {
    return (
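The icon variant presumably combines a square button size map (only its `lg: 'h-10 w-10'` entry and closing brace are visible above) with the `iconSizes` map. A sketch of how the two maps might be applied — the name `buttonSizes`, its `sm`/`md` values, and the JSX shape are all assumptions; only `Settings2`, `Button`, `cn`, and `iconSizes` come from the diff:

```tsx
// Sketch only: "buttonSizes" is a hypothetical name for the map whose last
// entry (lg: 'h-10 w-10') is visible in the hunk; sm/md values are invented.
import { Settings2 } from 'lucide-react';
import { cn } from '@/lib/utils';
import { Button } from '@/components/ui/button';

type TriggerSize = 'sm' | 'md' | 'lg';

const buttonSizes: Record<TriggerSize, string> = {
  sm: 'h-7 w-7',
  md: 'h-8 w-8',
  lg: 'h-10 w-10',
};

const iconSizes: Record<TriggerSize, string> = {
  sm: 'w-3.5 h-3.5',
  md: 'w-4 h-4',
  lg: 'w-5 h-5',
};

export function IconTrigger({ size = 'md', className }: { size?: TriggerSize; className?: string }) {
  return (
    // Square ghost button sized by the map, with the icon scaled to match.
    <Button variant="ghost" className={cn(buttonSizes[size], 'p-0', className)}>
      <Settings2 className={iconSizes[size]} />
    </Button>
  );
}
```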
Some files were not shown because too many files have changed in this diff.