Compare commits

...

514 Commits

Author SHA1 Message Date
Claude
d3a4e13c4e chore: update package-lock.json
https://claude.ai/code/session_018msdfAb9sirVp5b5ZGi4Eo
2026-01-23 02:34:46 +00:00
Claude
7941deffd7 feat: add unified provider usage tracker for all AI providers
Implements a comprehensive usage tracking system for Claude, Cursor, Codex,
Gemini, GitHub Copilot, OpenCode, MiniMax, and GLM providers. Based on
CodexBar reference implementation.

- Add unified provider usage types in @automaker/types
- Implement usage services for each provider with appropriate auth
- Create unified ProviderUsageTracker service with 60s caching
- Add API routes for fetching provider usage data
- Add React Query hooks with polling support
- Create ProviderUsageBar UI component for board header
- Replace single-provider UsagePopover with unified bar

https://claude.ai/code/session_018msdfAb9sirVp5b5ZGi4Eo
2026-01-23 02:33:12 +00:00
Stefan de Vogelaere
01859f3a9a feat(ui): unified sidebar with collapsible sections and enhanced UX (#659)
* feat(ui): add unified sidebar component

Add new unified-sidebar component for layout improvements.
- Export UnifiedSidebar from layout components
- Update root route to use new sidebar structure

* refactor(ui): consolidate unified-sidebar into sidebar folder

Merge the unified-sidebar implementation into the standard sidebar
folder structure. The unified sidebar becomes the canonical sidebar
with improved features including collapsible sections, scroll
indicators, and enhanced mobile support.

- Delete old sidebar.tsx
- Move unified-sidebar components to sidebar/components
- Rename UnifiedSidebar to Sidebar
- Update all imports in __root.tsx
- Remove redundant unified-sidebar folder

* fix(ui): address PR review comments and fix E2E tests for unified sidebar

- Add try/catch for getElectronAPI() in sidebar-footer with window.open fallback
- Use formatShortcut() for OS-aware hotkey display in sidebar-header
- Remove unnecessary optional chaining on project.icon
- Remove redundant ternary in sidebar-navigation className
- Update E2E tests to use new project-dropdown-trigger data-testid

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-23 02:06:10 +01:00
Stefan de Vogelaere
afb6e14811 feat: Allow drag-to-create dependencies between any non-completed features (#656)
* feat: Allow drag-to-create dependencies between any non-completed features

Previously, the card drag-to-create-dependency feature only worked between
backlog features. This expands the functionality to allow creating dependency
links between features in any status (except completed).

Changes:
- Make all non-completed cards droppable for dependency linking
- Update drag-drop hook to allow links between any status
- Add status badges to the dependency link dialog for better context

* refactor: use barrel export for StatusBadge import

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 01:42:51 +01:00
Stefan de Vogelaere
c65f931326 feat(ui): generate meaningful worktree branch names from feature titles (#655)
* feat(ui): generate meaningful worktree branch names from feature titles

Instead of generating random branch names like `feature/main-1737547200000-tt2v`,
this change creates human-readable branch names based on the feature title:
`feature/add-user-authentication-a3b2`

Changes:
- Generate branch name slug from feature title (lowercase, alphanumeric, hyphens)
- Use 4-character random suffix for uniqueness instead of timestamp
- If no title provided, generate one from description first (for auto worktree mode)
- Fall back to 'untitled' if both title and description are empty
- Fix: Apply substring limit before removing trailing hyphens to prevent
  malformed branch names when truncation occurs at a hyphen position

This makes it much easier to identify which worktree corresponds to which
feature when working with multiple features simultaneously.

Closes #604

* fix(ui): preserve existing branch name in auto mode when editing features

When editing a feature that already has a branch name assigned, preserve
it instead of generating a new one. This prevents orphaning existing
worktrees when users edit features in auto worktree mode.
2026-01-23 01:42:36 +01:00
Stefan de Vogelaere
f480386905 feat: add Gemini CLI provider integration (#647)
* feat: add Gemini CLI provider for AI model execution

- Add GeminiProvider class extending CliProvider for Gemini CLI integration
- Add Gemini models (Gemini 3 Pro/Flash Preview, 2.5 Pro/Flash/Flash-Lite)
- Add gemini-models.ts with model definitions and types
- Update ModelProvider type to include 'gemini'
- Add isGeminiModel() to provider-utils.ts for model detection
- Register Gemini provider in provider-factory with priority 4
- Add Gemini setup detection routes (status, auth, deauth)
- Add GeminiCliStatus to setup store for UI state management
- Add Gemini to PROVIDER_ICON_COMPONENTS for UI icon display
- Add GEMINI_MODELS to model-display for dropdown population
- Support thinking levels: off, low, medium, high

Based on https://github.com/google-gemini/gemini-cli

* chore: update package-lock.json

* feat(ui): add Gemini provider to settings and setup wizard

- Add GeminiCliStatus component for CLI detection display
- Add GeminiSettingsTab component for global settings
- Update provider-tabs.tsx to include Gemini as 5th tab
- Update providers-setup-step.tsx with Gemini provider detection
- Add useGeminiCliStatus hook for querying CLI status
- Add getGeminiStatus, authGemini, deauthGemini to HTTP API client
- Add gemini query key for React Query
- Fix GeminiModelId type to not double-prefix model IDs

* feat(ui): add Gemini to settings sidebar navigation

- Add 'gemini-provider' to SettingsViewId type
- Add GeminiIcon and gemini-provider to navigation config
- Add gemini-provider to NAV_ID_TO_PROVIDER mapping
- Add gemini-provider case in settings-view switch
- Export GeminiSettingsTab from providers index

This fixes the missing Gemini entry in the AI Providers sidebar menu.

* feat(ui): add Gemini model configuration in settings

- Create GeminiModelConfiguration component for model selection
- Add enabledGeminiModels and geminiDefaultModel state to app-store
- Add setEnabledGeminiModels, setGeminiDefaultModel, toggleGeminiModel actions
- Update GeminiSettingsTab to show model configuration when CLI is installed
- Import GeminiModelId and getAllGeminiModelIds from types

This adds the ability to configure which Gemini models are available
in the feature modal, similar to other providers like Codex and OpenCode.

* feat(ui): add Gemini models to all model dropdowns

- Add GEMINI_MODELS to model-constants.ts for UI dropdowns
- Add Gemini to ALL_MODELS array used throughout the app
- Add GeminiIcon to PROFILE_ICONS mapping
- Fix GEMINI_MODELS in model-display.ts to use correct model IDs
- Update getModelDisplayName to handle Gemini models correctly

Gemini models now appear in all model selection dropdowns including
Model Defaults, Feature Defaults, and feature card settings.

* fix(gemini): fix CLI integration and event handling

- Fix model ID prefix handling: strip gemini- prefix in agent-service,
  add it back in buildCliArgs for CLI invocation
- Fix event normalization to match actual Gemini CLI output format:
  - type: 'init' (not 'system')
  - type: 'message' with role (not 'assistant')
  - tool_name/tool_id/parameters/output field names
- Add --sandbox false and --approval-mode yolo for faster execution
- Remove thinking level selector from UI (Gemini CLI doesn't support it)
- Update auth status to show errors properly

* test: update provider-factory tests for Gemini provider

- Add GeminiProvider import and spy mock
- Update expected provider count from 4 to 5
- Add test for GeminiProvider inclusion
- Add gemini key to checkAllProviders test

* fix(gemini): address PR review feedback

- Fix npm package name from @anthropic-ai/gemini-cli to @google/gemini-cli
- Fix comments in gemini-provider.ts to match actual CLI output format
- Convert sync fs operations to async using fs/promises

* fix(settings): add Gemini and Codex settings to sync

Add enabledGeminiModels, geminiDefaultModel, enabledCodexModels, and
codexDefaultModel to SETTINGS_FIELDS_TO_SYNC for persistence across sessions.

* fix(gemini): address additional PR review feedback

- Use 'Speed' badge for non-thinking Gemini models (consistency)
- Fix installCommand mapping in gemini-settings-tab.tsx
- Add hasEnvApiKey to GeminiCliStatus interface for API parity
- Clarify GeminiThinkingLevel comment (CLI doesn't support --thinking-level)

* fix(settings): restore Codex and Gemini settings from server

Add sanitization and restoration logic for enabledCodexModels,
codexDefaultModel, enabledGeminiModels, and geminiDefaultModel
in refreshSettingsFromServer() to match the fields in SETTINGS_FIELDS_TO_SYNC.

* feat(gemini): normalize tool names and fix workspace restrictions

- Add tool name mapping to normalize Gemini CLI tool names to standard
  names (e.g., write_todos -> TodoWrite, read_file -> Read)
- Add normalizeGeminiToolInput to convert write_todos format to TodoWrite
  format (description -> content, handle cancelled status)
- Pass --include-directories with cwd to fix workspace restriction errors
  when Gemini CLI has a different cached workspace from previous sessions

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 01:42:17 +01:00
Stefan de Vogelaere
7773db559d fix(ui): improve review dialog rendering for tool calls and tables (#657)
* fix(ui): improve review dialog rendering for tool calls and tables

- Replace Markdown component with LogViewer in plan-approval-dialog to
  properly format tool calls with collapsible sections and JSON highlighting
- Add remark-gfm plugin to Markdown component for GitHub Flavored Markdown
  support including tables, task lists, and strikethrough
- Add table styling classes to Markdown component for proper table rendering
- Install remark-gfm and rehype-sanitize dependencies

Fixes mixed/broken rendering in review dialog where tool calls showed as
raw text and markdown tables showed as pipe-separated text.

* chore: fix git+ssh URL and prettier formatting

- Convert git+ssh:// to git+https:// in package-lock.json for @electron/node-gyp
- Apply prettier formatting to plan-approval-dialog.tsx

* fix(ui): create PlanContentViewer for better plan display

The previous LogViewer approach showed tool calls prominently but hid
the actual plan/specification markdown content. The new PlanContentViewer:

- Separates tool calls (exploration) from plan markdown
- Shows the plan/specification markdown prominently using Markdown component
- Collapses tool calls by default in an "Exploration" section
- Properly renders GFM tables in the plan content

This provides a better UX where users see the important plan content
first, with tool calls available but not distracting.

* fix(ui): add show more/less toggle for feature description

The feature description in the plan approval dialog header was
truncated at 150 characters with no way to see the full text.
Now users can click "show more" to expand and "show less" to collapse.

* fix(ui): increase description limit and add feature title to dialog

- Increase description character limit from 150 to 250 characters
- Add feature title to dialog header (e.g., "Review Plan - Feature Title")
  only if title exists and is <= 50 characters

* feat(ui): render tasks code blocks as proper checkbox lists

When markdown contains a ```tasks code block, it now renders as:
- Phase headers (## Phase 1: ...) as styled section headings
- Task items (- [ ] or - [x]) with proper checkbox icons
- Checked items show green checkmark and strikethrough text
- Unchecked items show empty square icon

This makes implementation task lists in plans much more readable
compared to rendering them as raw code blocks.

* fix(ui): improve plan content parsing robustness

Address CodeRabbit review feedback:

1. Relax heading detection regex to match emoji and non-word chars
   - Change \w to \S so headings like "##  Plan" are detected
   - Change \*\*[A-Z] to \*\*\S for bold section detection

2. Flush active tool call when heading is detected
   - Prevents plan content being dropped when heading follows tool call
     without a blank line separator

3. Support tool names with dots/hyphens
   - Change \w+ to [^\s]+ so names like "web.run" or "file-read" work

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-23 01:41:45 +01:00
Shirone
655f254538 Merge pull request #658 from AutoMaker-Org/feature/v0.14.0rc-1769075904343-i0uw
feat: abillity to configure "Start Dev Server" command in project settings
2026-01-22 20:56:34 +00:00
Shirone
b4be3c11e2 refactor: consolidate dev and test command configuration into a new CommandsSection
- Introduced a new `CommandsSection` component to manage both development and test commands, replacing the previous `DevServerSection` and `TestingSection`.
- Updated the `SettingsService` to handle special cases for `devCommand` and `testCommand`, allowing for null values to delete commands.
- Removed deprecated sections and streamlined the project settings view to enhance user experience and maintainability.

This refactor simplifies command management and improves the overall structure of the project settings interface.
2026-01-22 21:47:35 +01:00
Shirone
57ce198ae9 fix: normalize custom command handling and improve project settings loading
- Updated the `DevServerService` to normalize custom commands by trimming whitespace and treating empty strings as undefined.
- Refactored the `DevServerSection` component to utilize TanStack Query for fetching project settings, improving data handling and error management.
- Enhanced the save functionality to use mutation hooks for updating project settings, streamlining the save process and ensuring better state management.

These changes enhance the reliability and user experience when configuring development server commands.
2026-01-22 17:49:06 +01:00
alexanderalgemi
733ca15e15 Fix production docker build (#651)
* fix: Add missing fast-xml-parser dependency

* fix: Add mssing spec-parser package.json to dockerfile
2026-01-22 17:37:45 +01:00
Shirone
e110c058a2 feat: enhance dev server configuration and command handling
- Updated the `/start-dev` route to accept a custom development command from project settings, allowing for greater flexibility in starting dev servers.
- Implemented a new `parseCustomCommand` method in the `DevServerService` to handle custom command parsing, including support for quoted strings.
- Added a new `DevServerSection` component in the UI for configuring the dev server command, featuring quick presets and auto-detection options.
- Updated project settings interface to include a `devCommand` property for storing custom commands.

This update improves the user experience by allowing users to specify custom commands for their development servers, enhancing the overall development workflow.
2026-01-22 17:13:16 +01:00
webdevcody
0fdda11b09 refactor: normalize branch name handling and enhance auto mode settings merging
- Updated branch name normalization to align with UI conventions, treating "main" as null for consistency.
- Implemented deep merging of `autoModeByWorktree` settings to preserve existing entries during updates.
- Enhanced the BoardView component to persist max concurrency settings to the server, ensuring accurate capacity checks.
- Added error handling for feature rollback persistence in useBoardActions.

These changes improve the reliability and consistency of auto mode settings across the application.
2026-01-22 09:43:28 -05:00
Stefan de Vogelaere
0155da0be5 fix: resolve model aliases in backlog plan explicit override (#654)
When a user explicitly passes a model override (e.g., model: "sonnet"),
the code was only fetching credentials without resolving the model alias.
This caused API calls to fail because the Claude API expects full model
strings like "claude-sonnet-4-20250514", not aliases like "sonnet".

The other code branches (settings-based and fallback) correctly called
resolvePhaseModel(), but the explicit override branch was missing this.

This fix adds the resolvePhaseModel() call to ensure model aliases are
properly resolved before being sent to the API.
2026-01-22 12:58:55 +01:00
Shirone
41b127ebf3 Merge pull request #643 from AutoMaker-Org/feature/v0.14.0rc-1768981415660-tt2v
feat: add import / export features in json / yaml format
2026-01-21 23:06:10 +00:00
Shirone
e7e83a30d9 Merge pull request #650 from AutoMaker-Org/fix/ideation-view-non-claude-models
fix: ideation view not working with other providers
2026-01-21 22:49:11 +00:00
Shirone
40950b5fce refactor: remove suggestions routes and related logic
This commit removes the suggestions routes and associated files from the server, streamlining the codebase. The `suggestionsModel` has been replaced with `ideationModel` across various components, including UI and service layers, to better reflect the updated functionality. Additionally, adjustments were made to ensure that the ideation service correctly utilizes the new model configuration.

- Deleted suggestions routes and their handlers.
- Updated references from `suggestionsModel` to `ideationModel` in settings and UI components.
- Refactored related logic in the ideation service to align with the new model structure.
2026-01-21 23:42:53 +01:00
Shirone
3f05735be1 Merge pull request #649 from AutoMaker-Org/feat/detect-no-remote-branch
fix: detect no remote branch
2026-01-21 21:44:19 +00:00
Shirone
05f0ceceb6 fix: build failing 2026-01-21 22:39:20 +01:00
Shirone
28d50aa017 refactor: Consolidate validation and improve error logging 2026-01-21 22:28:22 +01:00
Shirone
103c6bc8a0 docs: improve comment clarity for resolvePhaseModel usage
Updated the comment to better explain why resolveModelString is not
needed after resolvePhaseModel - the latter already handles model
alias resolution internally.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 22:26:01 +01:00
Shirone
6c47068f71 refactor: remove redundant resolveModelString call in ideation service
Address PR #650 review feedback from gemini-code-assist. The call to
resolveModelString was redundant because resolvePhaseModel already
returns the fully resolved canonical model ID. When providerId is set,
it returns the provider-specific model ID unchanged; otherwise, it
already calls resolveModelString internally.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 22:23:10 +01:00
Shirone
a9616ff309 feat: add remote management functionality
- Introduced a new route for adding remotes to git worktrees.
- Enhanced the PushToRemoteDialog component to support adding new remotes, including form handling and error management.
- Updated the API client to include an endpoint for adding remotes.
- Modified the worktree state management to track the presence of remotes.
- Improved the list branches handler to check for configured remotes.

This update allows users to easily add remotes through the UI, enhancing the overall git workflow experience.
2026-01-21 22:11:16 +01:00
Shirone
4fa0923ff8 feat(ideation): enhance model resolution and provider integration
- Updated the ideation service to utilize phase settings for model resolution, improving flexibility in handling model aliases.
- Introduced `getPhaseModelWithOverrides` to fetch model and provider information, allowing for dynamic adjustments based on project settings.
- Enhanced logging to provide clearer insights into the model and provider being used during suggestion generation.

This update streamlines the process of generating suggestions by leveraging phase-specific configurations, ensuring better alignment with user-defined settings.
2026-01-21 22:08:51 +01:00
Shirone
c3cecc18f2 Merge pull request #646 from AutoMaker-Org/fix/excessive-api-polling
fix: excessive api pooling
2026-01-21 19:17:39 +00:00
Shirone
3fcda8abfc chore: update package-lock.json for version bump and dependency adjustments
- Bumped version from 0.12.0rc to 0.13.0 across the project.
- Updated package-lock.json to reflect changes in dependencies, including marking certain dependencies as `devOptional`.
- Adjusted import paths in the UI for better module organization.

This update ensures consistency in versioning and improves the structure of utility imports.
2026-01-21 20:14:39 +01:00
Shirone
a45ee59b7d Merge remote-tracking branch 'origin/v0.14.0rc' into feature/v0.14.0rc-1768981415660-tt2v
# Conflicts:
#	apps/ui/src/components/views/project-settings-view/config/navigation.ts
#	apps/ui/src/components/views/project-settings-view/hooks/use-project-settings-view.ts
2026-01-21 17:46:22 +01:00
Shirone
662f854203 feat(ui): move export/import features from board header to project settings
Relocate the export and import features functionality from the board header
dropdown menu to a new "Data" section in project settings for better UX.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 17:43:33 +01:00
Shirone
f2860d9366 Merge pull request #645 from AutoMaker-Org/feat/test-runner
feat(tests): implement test runner functionality with API integration
2026-01-21 16:25:13 +00:00
Shirone
6eb7acb6d4 fix: Add path validation for optional params in test runner routes
Add path validation middleware for optional projectPath and worktreePath
parameters in test runner routes to maintain parity with other worktree
routes and ensure proper security validation when ALLOWED_ROOT_DIRECTORY
is configured.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 17:21:18 +01:00
Shirone
4ab927a5fb fix: Prevent command injection and stale state in test runner 2026-01-21 16:12:36 +01:00
Shirone
02de3df3df fix: replace magic numbers with named constants in polling logic
Address PR review feedback:
- Use WS_ACTIVITY_THRESHOLD constant instead of hardcoded 10000 in agent-info-panel.tsx
- Extract AGENT_OUTPUT_POLLING_INTERVAL constant for 5000ms value in use-features.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 16:10:22 +01:00
Shirone
b73885e04a fix: adress pr comments 2026-01-21 16:00:40 +01:00
Shirone
afa93dde0d feat(tests): implement test runner functionality with API integration
- Added Test Runner Service to manage test execution processes for worktrees.
- Introduced endpoints for starting and stopping tests, and retrieving test logs.
- Created UI components for displaying test logs and managing test sessions.
- Integrated test runner events for real-time updates in the UI.
- Updated project settings to include configurable test commands.

This enhancement allows users to run tests directly from the UI, view logs in real-time, and manage test sessions effectively.
2026-01-21 15:45:33 +01:00
Shirone
aac59c2b3a feat(ui): enhance WebSocket event handling and polling logic
- Introduced a new `useEventRecency` hook to track the recency of WebSocket events, allowing for conditional polling based on event activity.
- Updated `AgentInfoPanel` to utilize the new hook, adjusting polling intervals based on WebSocket activity.
- Implemented debounced invalidation for auto mode events to optimize query updates during rapid event streams.
- Added utility functions for managing event recency checks in various query hooks, improving overall responsiveness and reducing unnecessary polling.
- Introduced debounce and throttle utilities for better control over function execution rates.

This enhancement improves the application's performance by reducing polling when real-time updates are available, ensuring a more efficient use of resources.
2026-01-21 14:57:26 +01:00
Stefan de Vogelaere
c3e7e57968 feat(ui): make React Query DevTools configurable (#642)
* feat(ui): make React Query DevTools configurable

- Add showQueryDevtools setting to app store with persistence
- Add toggle in Global Settings > Developer section
- Move DevTools button from bottom-left to bottom-right (less intrusive)
- Support VITE_HIDE_QUERY_DEVTOOLS env variable to disable
- DevTools only available in development mode

Users can now:
1. Toggle DevTools on/off via Settings > Developer
2. Set VITE_HIDE_QUERY_DEVTOOLS=true to hide permanently
3. DevTools are now positioned at bottom-right to avoid overlapping UI controls

* chore: update package-lock.json

* fix(ui): hide React Query DevTools toggle in production mode

* refactor(ui): remove VITE_HIDE_QUERY_DEVTOOLS env variable

The persisted toggle in Settings > Developer is sufficient for controlling
DevTools visibility. No need for an additional env variable override.

* fix(ui): persist showQueryDevtools setting across page refreshes

- Add showQueryDevtools to GlobalSettings type
- Add showQueryDevtools to hydrateStoreFromSettings function
- Add default value in DEFAULT_GLOBAL_SETTINGS

* fix: restore package-lock.json from base branch

Removes git+ssh:// URL that was accidentally introduced

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-21 13:20:36 +01:00
Shirone
7bb97953a7 feat: Refactor feature export service with type guards and parallel conflict checking 2026-01-21 13:11:18 +01:00
Shirone
2214c2700b feat(ui): add export and import features functionality
- Introduced new routes for exporting and importing features, enhancing project management capabilities.
- Added UI components for export and import dialogs, allowing users to easily manage feature data.
- Updated HTTP API client to support export and import operations with appropriate options and responses.
- Enhanced board view with controls for triggering export and import actions, improving user experience.
- Defined new types for feature export and import, ensuring type safety and clarity in data handling.
2026-01-21 13:00:34 +01:00
Shirone
7bee54717c Merge pull request #637 from AutoMaker-Org/feature/v0.13.0rc-1768936017583-e6ni
feat: implement pipeline step exclusion functionality
2026-01-21 11:59:08 +00:00
Stefan de Vogelaere
5ab53afd7f feat: add per-project default model override for new features (#640)
* feat: add per-project default model override for new features

- Add defaultFeatureModel to ProjectSettings type for project-level override
- Add defaultFeatureModel to Project interface for UI state
- Display Default Feature Model in Model Defaults section alongside phase models
- Include Default Feature Model in global Bulk Replace dialog
- Add Default Feature Model override section to Project Settings
- Add setProjectDefaultFeatureModel store action for project-level overrides
- Update clearAllProjectPhaseModelOverrides to also clear defaultFeatureModel
- Update add-feature-dialog to use project override when available
- Include Default Feature Model in Project Bulk Replace dialog

This allows projects with different complexity levels to use different
default models (e.g., Haiku for simple tasks, Opus for complex projects).

* fix: add server-side __CLEAR__ handler for defaultFeatureModel

- Add handler in settings-service.ts to properly delete defaultFeatureModel
  when '__CLEAR__' marker is sent from the UI
- Fix bulk-replace-dialog.tsx to correctly return claude-opus when resetting
  default feature model to Anthropic Direct (was incorrectly using
  enhancementModel's settings which default to sonnet)

These fixes ensure:
1. Clearing project default model override properly removes the setting
   instead of storing literal '__CLEAR__' string
2. Global bulk replace correctly resets default feature model to opus

* fix: include defaultFeatureModel in Reset to Defaults action

- Updated resetPhaseModels to also reset defaultFeatureModel to claude-opus
- Fixed initial state to use canonical 'claude-opus' instead of 'opus'

* refactor: use DEFAULT_GLOBAL_SETTINGS constant for defaultFeatureModel

Address PR review feedback:
- Replace hardcoded { model: 'claude-opus' } with DEFAULT_GLOBAL_SETTINGS.defaultFeatureModel
- Fix Prettier formatting for long destructuring lines
- Import DEFAULT_GLOBAL_SETTINGS from @automaker/types where needed

This improves maintainability by centralizing the default value.
2026-01-21 12:45:14 +01:00
Stefan de Vogelaere
3ebd67f35f fix: hide Cursor models in selector when provider is disabled (#639)
* fix: hide Cursor models in selector when provider is disabled

The Cursor Models section was appearing in model dropdown selectors even
when the Cursor provider was toggled OFF in Settings → AI Providers.

This fix adds a !isCursorDisabled check to the rendering condition,
matching the pattern already used by Codex and OpenCode providers.
This ensures consistency across all provider types.

Fixes the issue where:
- Codex/OpenCode correctly hide models when disabled
- Cursor incorrectly showed models even when disabled

* style: fix Prettier formatting
2026-01-21 11:40:26 +01:00
Stefan de Vogelaere
641bbde877 refactor: replace crypto.randomUUID with generateUUID utility (#638)
* refactor: replace crypto.randomUUID with generateUUID in spec editor

Use the centralized generateUUID utility from @/lib/utils instead of
direct crypto.randomUUID calls in spec editor components. This provides
better fallback handling for non-secure contexts (e.g., Docker via HTTP).

Files updated:
- array-field-editor.tsx
- features-section.tsx
- roadmap-section.tsx

* refactor: simplify generateUUID to always use crypto.getRandomValues

Remove conditional checks and fallbacks - crypto.getRandomValues() works
in all modern browsers including non-secure HTTP contexts (Docker).
This simplifies the code while maintaining the same security guarantees.

* refactor: add defensive check for crypto availability

Add check for crypto.getRandomValues() availability before use.
Throws a meaningful error if the crypto API is not available,
rather than failing with an unclear runtime error.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-21 10:32:12 +01:00
Shirone
7c80249bbf Merge remote-tracking branch 'origin/main' into feature/v0.13.0rc-1768936017583-e6ni
# Conflicts:
#	apps/ui/src/components/views/board-view.tsx
2026-01-21 08:47:16 +01:00
Shirone
a73a57b9a4 feat: implement pipeline step exclusion functionality
- Added support for excluding specific pipeline steps in feature management, allowing users to skip certain steps during execution.
- Introduced a new `PipelineExclusionControls` component for managing exclusions in the UI.
- Updated relevant dialogs and components to handle excluded pipeline steps, including `AddFeatureDialog`, `EditFeatureDialog`, and `MassEditDialog`.
- Enhanced the `getNextStatus` method in `PipelineService` to account for excluded steps when determining the next status in the pipeline flow.
- Updated tests to cover scenarios involving excluded pipeline steps.
2026-01-21 08:34:55 +01:00
webdevcody
db71dc9aa5 fix(workflows): update artifact upload paths in release workflow
- Modified paths for macOS, Windows, and Linux artifacts to use explicit file patterns instead of wildcard syntax.
- Ensured all relevant file types are included for each platform, improving artifact management during releases.
2026-01-20 22:48:00 -05:00
webdevcody
a8ddd07442 chore: release v0.13.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-20 18:52:59 -05:00
Web Dev Cody
2165223b49 Merge pull request #635 from AutoMaker-Org/v0.13.0rc
V0.13.0rc
2026-01-20 18:48:46 -05:00
webdevcody
3bde3d2732 Merge branch 'main' into v0.13.0rc 2026-01-20 18:47:24 -05:00
Shirone
900a312c92 fix(ui): add HMR fallback for FileBrowserContext to prevent crashes during module reloads
- Implemented a no-op fallback for useFileBrowser to handle cases where the context is temporarily unavailable during Hot Module Replacement (HMR).
- Added warnings to notify when the context is not available, ensuring a smoother development experience without crashing the app.
2026-01-21 00:09:35 +01:00
Shirone
69ff8df7c1 feat(ui): enhance BoardBackgroundModal with local state management for opacity sliders
- Implemented local state for card, column, and card border opacity during slider dragging to improve user experience.
- Added useEffect to sync local state with store settings when not dragging.
- Updated handlers to commit changes to the store and persist settings upon slider release.
- Adjusted UI to reflect local state values for opacity sliders, ensuring immediate feedback during adjustments.
2026-01-20 23:58:00 +01:00
Stefan de Vogelaere
4f584f9a89 fix(ui): bulk update cache invalidation and model dropdown display (#633)
Fix two related issues with bulk model updates in Kanban view:

1. Bulk update now properly invalidates React Query cache
   - Changed handleBulkUpdate and bulk verify handler to call loadFeatures()
   - This ensures UI immediately reflects bulk changes

2. Custom provider models (GLM, MiniMax, etc.) now display correctly
   - Added fallback lookup in PhaseModelSelector by model ID
   - Updated mass-edit-dialog to track providerId after selection
2026-01-20 23:01:06 +01:00
USerik
47a6033b43 fix(opencode-provider): correct z.ai coding plan model mapping (#625)
* fix(opencode-provider): correct z.ai coding plan model mapping

The model mapping for 'z.ai coding plan' was incorrectly pointing to 'z-ai'
instead of 'zai-coding-plan', which would cause model resolution failures
when users selected the z.ai coding plan provider.

This fix ensures the correct model identifier is used for z.ai coding plan,
aligning with the expected model naming convention.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* test: Add unit tests for parseProvidersOutput function

Add comprehensive unit tests for the parseProvidersOutput private method
in OpencodeProvider. This addresses PR feedback requesting test coverage
for the z.ai coding plan mapping fix.

Test coverage (22 tests):
- Critical fix validation: z.ai coding plan vs z.ai distinction
- Provider name mapping: all 12 providers with case-insensitive handling
- Duplicate aliases: copilot, bedrock, lmstudio variants
- Authentication methods: oauth, api_key detection
- ANSI escape sequences: color code removal
- Edge cases: malformed input, whitespace, newlines
- Real-world CLI output: box characters, decorations

All tests passing. Ensures regression protection for provider parsing.

---------

Co-authored-by: devkeruse <devkeruse@gmail.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-20 21:03:38 +01:00
Stefan de Vogelaere
a1f234c7e2 feat: Claude Compatible Providers System (#629)
* feat: refactor Claude API Profiles to Claude Compatible Providers

- Rename ClaudeApiProfile to ClaudeCompatibleProvider with models[] array
- Each ProviderModel has mapsToClaudeModel field for Claude tier mapping
- Add providerType field for provider-specific icons (glm, minimax, openrouter)
- Add thinking level support for provider models in phase selectors
- Show all mapped Claude models per provider model (e.g., "Maps to Haiku, Sonnet, Opus")
- Add Bulk Replace feature to switch all phases to a provider at once
- Hide Bulk Replace button when no providers are enabled
- Fix project-level phaseModelOverrides not persisting after refresh
- Fix deleting last provider not persisting (remove empty array guard)
- Add getProviderByModelId() helper for all SDK routes
- Update all routes to pass provider config for provider models
- Update terminology from "profiles" to "providers" throughout UI
- Update documentation to reflect new provider system

* fix: atomic writer race condition and bulk replace reset to defaults

1. AtomicWriter Race Condition Fix (libs/utils/src/atomic-writer.ts):
   - Changed temp file naming from Date.now() to Date.now() + random hex
   - Uses crypto.randomBytes(4).toString('hex') for uniqueness
   - Prevents ENOENT errors when multiple concurrent writes happen
     within the same millisecond

2. Bulk Replace "Anthropic Direct" Reset (both dialogs):
   - When selecting "Anthropic Direct", now uses DEFAULT_PHASE_MODELS
   - Properly resets thinking levels and other settings to defaults
   - Added thinkingLevel to the change detection comparison
   - Affects both global and project-level bulk replace dialogs

* fix: update tests for new model resolver passthrough behavior

1. model-resolver tests:
   - Unknown models now pass through unchanged (provider model support)
   - Removed expectations for warnings on unknown models
   - Updated case sensitivity and edge case tests accordingly
   - Added tests for provider-like model names (GLM-4.7, MiniMax-M2.1)

2. atomic-writer tests:
   - Updated regex to match new temp file format with random suffix
   - Format changed from .tmp.{timestamp} to .tmp.{timestamp}.{hex}

* refactor: simplify getPhaseModelWithOverrides calls per code review

Address code review feedback on PR #629:
- Make settingsService parameter optional in getPhaseModelWithOverrides
- Function now handles undefined settingsService gracefully by returning defaults
- Remove redundant ternary checks in 4 call sites:
  - apps/server/src/routes/context/routes/describe-file.ts
  - apps/server/src/routes/context/routes/describe-image.ts
  - apps/server/src/routes/worktree/routes/generate-commit-message.ts
  - apps/server/src/services/auto-mode-service.ts
- Remove unused DEFAULT_PHASE_MODELS imports where applicable

* test: fix server tests for provider model passthrough behavior

- Update model-resolver.test.ts to expect unknown models to pass through
  unchanged (supports ClaudeCompatibleProvider models like GLM-4.7)
- Remove warning expectations for unknown models (valid for providers)
- Add missing getCredentials and getGlobalSettings mocks to
  ideation-service.test.ts for settingsService

* fix: address code review feedback for model providers

- Honor thinkingLevel in generate-commit-message.ts
- Pass claudeCompatibleProvider in ideation-service.ts for provider models
- Resolve provider configuration for model overrides in generate-suggestions.ts
- Update "Active Profile" to "Active Provider" label in project-claude-section
- Use substring instead of deprecated substr in api-profiles-section
- Preserve provider enabled state when editing in api-profiles-section

* fix: address CodeRabbit review issues for Claude Compatible Providers

- Fix TypeScript TS2339 error in generate-suggestions.ts where
  settingsService was narrowed to 'never' type in else branch
- Use DEFAULT_PHASE_MODELS per-phase defaults instead of hardcoded
  'sonnet' in settings-helpers.ts
- Remove duplicate eventHooks key in use-settings-migration.ts
- Add claudeCompatibleProviders to localStorage migration parsing
  and merging functions
- Handle canonical claude-* model IDs (claude-haiku, claude-sonnet,
  claude-opus) in project-models-section display names

This resolves the CI build failures and addresses code review feedback.

* fix: skip broken list-view-priority E2E test and add Priority column label

- Skip list-view-priority.spec.ts with TODO explaining the infrastructure
  issue: setupRealProject only sets localStorage but server settings
  take precedence with localStorageMigrated: true
- Add 'Priority' label to list-header.tsx for the priority column
  (was empty string, now shows proper header text)
- Increase column width to accommodate the label

The E2E test issue is that tests create features in a temp directory,
but the server loads from the E2E Test Project fixture path set in
setup-e2e-fixtures.mjs. Needs infrastructure fix to properly switch
projects or create features through UI instead of on disk.
2026-01-20 20:57:23 +01:00
webdevcody
8facdc66a9 feat: enhance auto mode service and UI components for branch handling and verification
- Added a new function to retrieve the current branch name in the auto mode service, improving branch management.
- Updated the `getRunningCountForWorktree` method to utilize the current branch name for accurate feature counting.
- Modified UI components to include a toggle for skipping verification in auto mode, enhancing user control.
- Refactored various hooks and components to ensure consistent handling of branch names across the application.
- Introduced a new utility file for string operations, providing common functions for text manipulation.
2026-01-20 13:39:38 -05:00
webdevcody
2ab78dd590 chore: update package-lock.json and enhance kanban-board component imports
- Removed unnecessary "dev" flags and replaced them with "devOptional" in package-lock.json for better dependency management.
- Added additional imports (useRef, useState, useCallback, useEffect, type RefObject, type ReactNode) to the kanban-board component for improved functionality and state management.
2026-01-20 10:59:44 -05:00
Web Dev Cody
c14a40f7f8 Merge pull request #626 from AutoMaker-Org/include-the-patches
apply the patches
2026-01-20 10:57:44 -05:00
webdevcody
8dd5858299 docs: add SECURITY_TODO.md outlining critical security vulnerabilities and action items
- Introduced a comprehensive security audit document detailing critical command injection vulnerabilities in merge and push handlers, as well as unsafe environment variable handling in a shell script.
- Provided recommendations for immediate fixes, including input validation and safer command execution practices.
- Highlighted positive security findings and outlined testing recommendations for command injection prevention.
2026-01-20 10:50:53 -05:00
webdevcody
76eb3a2ac2 apply the patches 2026-01-20 10:24:38 -05:00
Dhanush Santosh
179c5ae9c2 Merge pull request #499 from AutoMaker-Org/feat/react-query
feat(ui): migrate to React Query for data fetching
2026-01-20 20:21:32 +05:30
DhanushSantosh
8c356d7c36 fix(ui): sync updated feature query 2026-01-20 20:15:15 +05:30
DhanushSantosh
a863dcc11d fix(ui): handle review feedback 2026-01-20 19:50:15 +05:30
DhanushSantosh
cf60f84f89 Merge remote-tracking branch 'upstream/v0.13.0rc' into feat/react-query
# Conflicts:
#	apps/ui/src/components/views/board-view.tsx
#	apps/ui/src/components/views/board-view/dialogs/agent-output-modal.tsx
#	apps/ui/src/components/views/board-view/hooks/use-board-features.ts
#	apps/ui/src/components/views/board-view/worktree-panel/worktree-panel.tsx
#	apps/ui/src/hooks/use-project-settings-loader.ts
2026-01-20 19:19:21 +05:30
webdevcody
47e6ed6a17 feat: add publish option to package.json for UI application
- Introduced a new "publish" field set to null in the package.json file, allowing for future configuration of publishing settings.

This change prepares the UI application for potential deployment configurations.
2026-01-19 17:48:33 -05:00
webdevcody
d266c98e48 feat: add option to disable authentication for local/trusted networks
- Implemented a mechanism to disable authentication when the environment variable AUTOMAKER_DISABLE_AUTH is set to 'true'.
- Updated authMiddleware to bypass authentication checks for requests from trusted networks.
- Modified getAuthStatus and isRequestAuthenticated functions to reflect the authentication status based on the new configuration.

This enhancement allows for easier development and testing in trusted environments by simplifying access control.
2026-01-19 17:41:55 -05:00
webdevcody
628e464b74 feat: update branch handling and UI components for worktree management
- Enhanced branch name determination logic in useBoardActions to ensure features created on non-main worktrees are correctly associated with their respective branches.
- Improved DevServerLogsPanel styling for better responsiveness and user experience.
- Added event hooks support in settings migration and sync processes to maintain consistency across application state.

These changes improve the overall functionality and usability of worktree management within the application.
2026-01-19 17:40:46 -05:00
webdevcody
17d42e7931 feat: enhance ANSI code stripping in ClaudeUsageService
- Improved the stripAnsiCodes method to handle various ANSI escape sequences, including CSI, OSC, and single-character sequences.
- Added logic to manage backspaces and explicitly strip known "Synchronized Output" and "Window Title" garbage.
- Updated tests to cover new functionality, ensuring robust handling of complex terminal outputs and control characters.

This enhancement improves the reliability of text processing in terminal environments.
2026-01-19 17:38:21 -05:00
webdevcody
5119ee4222 Merge branch 'v0.13.0rc' of github.com:AutoMaker-Org/automaker into v0.13.0rc 2026-01-19 17:37:28 -05:00
webdevcody
b039b745be feat: add discard changes functionality for worktrees
- Introduced a new POST /discard-changes endpoint to discard all uncommitted changes in a worktree, including resetting staged changes, discarding modifications to tracked files, and removing untracked files.
- Implemented a corresponding handler in the UI to confirm and execute the discard operation, enhancing user control over worktree changes.
- Added a ViewWorktreeChangesDialog component to display changes in the worktree, improving the user experience for managing worktree states.
- Updated the WorktreePanel and WorktreeActionsDropdown components to integrate the new functionality, allowing users to view and discard changes directly from the UI.

This update streamlines the management of worktree changes, providing users with essential tools for version control.
2026-01-19 17:37:13 -05:00
Stefan de Vogelaere
02a7a54736 feat: auto-discover available ports when defaults are in use (#614)
* feat: auto-discover available ports when defaults are in use

Instead of prompting the user to kill processes or manually enter
alternative ports, the launcher now automatically finds the next
available ports when the defaults (3007/3008) are already in use.

This enables running the built Electron app alongside web development
mode without conflicts - web dev will automatically use the next
available ports (e.g., 3009/3010) when Electron is running.

Changes:
- Add find_next_available_port() function that searches up to 100 ports
- Update resolve_port_conflicts() to auto-select ports without prompts
- Update check_ports() for consistency (currently unused but kept)
- Add safety check to ensure web and server ports don't conflict

* fix: sanitize PIDs to single line for centered display

* feat: add user choice for port conflicts with auto-select as default

When ports are in use, users can now choose:
- [Enter] Auto-select available ports (default, recommended)
- [K] Kill processes and use default ports
- [C] Choose custom ports manually
- [X] Cancel

Pressing Enter without typing anything will auto-select the next
available ports, making it easy to quickly continue when running
alongside an existing Electron instance.

* fix: improve port discovery error handling and code quality

Address PR review feedback:
- Extract magic number 100 to PORT_SEARCH_MAX_ATTEMPTS constant
- Fix find_next_available_port to return nothing on failure instead of
  the busy port, preventing misleading "auto-selected" messages
- Update all callers to handle port discovery failure with clear error
  messages showing the searched range
- Simplify PID formatting using xargs instead of tr|sed|sed pipeline
2026-01-19 23:36:40 +01:00
webdevcody
43481c2bab refactor: sanitize featureId for worktree paths across multiple handlers
- Updated createDiffsHandler, createFileDiffHandler, createInfoHandler, createStatusHandler, and auto-mode service to sanitize featureId when constructing worktree paths.
- Ensured consistent handling of featureId to prevent issues with invalid characters in branch names.
- Added branchName support in UI components to enhance feature visibility and management.

This change improves the robustness of worktree operations and enhances user experience by ensuring valid paths are used throughout the application.
2026-01-19 17:35:01 -05:00
webdevcody
d7f6e72a9e Merge branch 'v0.13.0rc' of github.com:AutoMaker-Org/automaker into v0.13.0rc 2026-01-19 17:26:38 -05:00
webdevcody
82e22b4362 feat: enhance auto mode functionality with worktree support
- Updated auto mode handlers to support branch-specific operations, allowing for better management of features across different worktrees.
- Introduced normalization of branch names to handle undefined values gracefully.
- Enhanced status and response messages to reflect the current worktree context.
- Updated the auto mode service to manage state and concurrency settings per worktree, improving user experience and flexibility.
- Added UI elements to display current max concurrency for auto mode in both board and mobile views.

This update aims to streamline the auto mode experience, making it more intuitive for users working with multiple branches and worktrees.
2026-01-19 17:17:40 -05:00
Stefan de Vogelaere
0d9259473e fix: prevent refresh button from overlapping close button in Dev Server dialog (#610)
* fix: prevent refresh button from overlapping close button in Dev Server dialog

Use compact mode for DialogContent and add right padding to the header
to ensure the refresh button doesn't overlap with the dialog close button.

Fixes #579

* fix: restore p-0 to prevent unwanted padding from compact mode
2026-01-19 22:58:47 +01:00
Stefan de Vogelaere
ea3930cf3d fix: convert OpenCode model format to CLI slash format (#605)
* fix: convert OpenCode model format to CLI slash format

The OpenCode CLI expects models in provider/model format (e.g., opencode/big-pickle),
but after commit 4b0d1399 changed model IDs from slash format to prefix format,
the buildCliArgs() method was not updated to convert back to CLI format.

Root cause:
- Commit 4b0d1399 changed OpenCode model IDs from opencode/model to opencode-model
- The old code used stripProviderPrefix() which just removed the prefix
- This resulted in bare model names (e.g., "big-pickle") being passed to CLI
- CLI interpreted "big-pickle" as a provider ID, causing ProviderModelNotFoundError

Fix:
- Updated buildCliArgs() to properly convert model formats for CLI
- Bare model names (after prefix strip) now get opencode/ prepended
- Models with slashes (dynamic providers) pass through unchanged

Model conversion examples:
- opencode-big-pickle → (stripped to) big-pickle → opencode/big-pickle
- opencode-github-copilot/gpt-4o → (stripped to) github-copilot/gpt-4o → github-copilot/gpt-4o
- google/gemini-2.5-pro → google/gemini-2.5-pro (unchanged)

* refactor: simplify OpenCode model format conversion logic

Address review feedback from Gemini Code Assist to reduce code repetition.
The conditional logic for handling models with/without slashes is now
unified into a simpler two-step approach:
1. Strip opencode- prefix if present
2. Prepend opencode/ if no slash exists
2026-01-19 21:17:05 +01:00
Stefan de Vogelaere
d97c4b7b57 feat: unified Claude API key and profile system with z.AI, MiniMax, OpenRouter support (#600)
* feat: add Claude API provider profiles for alternative endpoints

Add support for managing multiple Claude-compatible API endpoints
(z.AI GLM, AWS Bedrock, etc.) through provider profiles in settings.

Features:
- New ClaudeApiProfile type with base URL, API key, model mappings
- Pre-configured z.AI GLM template with correct model names
- Profile selector in Settings > Claude > API Profiles
- Clean switching between profiles and direct Anthropic API
- Immediate persistence to prevent data loss on restart

Profile support added to all execution paths:
- Agent service (chat)
- Ideation service
- Auto-mode service (feature agents, enhancements)
- Simple query service (title generation, descriptions, etc.)
- Backlog planning, commit messages, spec generation
- GitHub issue validation, suggestions

Environment variables set when profile is active:
- ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN/API_KEY
- ANTHROPIC_DEFAULT_HAIKU/SONNET/OPUS_MODEL
- API_TIMEOUT_MS, CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC
2026-01-19 20:36:58 +01:00
DhanushSantosh
2fac2ca4bb Fix opencode auth error mapping and perf containment 2026-01-19 19:58:10 +05:30
DhanushSantosh
9bb52f1ded perf(ui): smooth large lists and graphs 2026-01-19 19:38:56 +05:30
Shirone
f987fc1f10 Merge branch 'v0.13.0rc' into feat/react-query
Merged latest changes from v0.13.0rc into feat/react-query while preserving
React Query migration. Key merge decisions:

- Kept React Query hooks for data fetching (useRunningAgents, useStopFeature, etc.)
- Added backlog plan handling to running-agents-view stop functionality
- Imported both SkeletonPulse and Spinner for CLI status components
- Used Spinner for refresh buttons across all settings sections
- Preserved isBacklogPlan check in agent-output-modal TaskProgressPanel
- Added handleOpenInIntegratedTerminal to worktree actions while keeping React Query mutations
2026-01-19 13:28:43 +01:00
DhanushSantosh
63b8eb0991 chore: refresh package-lock 2026-01-19 17:22:55 +05:30
Stefan de Vogelaere
a52c0461e5 feat: add external terminal support with cross-platform detection (#565)
* feat(platform): add cross-platform openInTerminal utility

Add utility function to open a terminal in a specified directory:
- macOS: Uses Terminal.app via AppleScript
- Windows: Tries Windows Terminal, falls back to cmd
- Linux: Tries common terminal emulators (gnome-terminal,
  konsole, xfce4-terminal, xterm, x-terminal-emulator)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(server): add open-in-terminal endpoint

Add POST /open-in-terminal endpoint to open a system terminal in the
worktree directory using the cross-platform openInTerminal utility.

The endpoint validates that worktreePath is provided and is an
absolute path for security.

Extracted from PR #558.

* feat(ui): add Open in Terminal action to worktree dropdown

Add "Open in Terminal" option to the worktree actions dropdown menu.
This opens the system terminal in the worktree directory.

Changes:
- Add openInTerminal method to http-api-client
- Add Terminal icon and menu item to worktree-actions-dropdown
- Add onOpenInTerminal prop to WorktreeTab component
- Add handleOpenInTerminal handler to use-worktree-actions hook
- Wire up handler in worktree-panel for both mobile and desktop views

Extracted from PR #558.

* fix(ui): open in terminal navigates to Automaker terminal view

Instead of opening the system terminal, the "Open in Terminal" action
now opens Automaker's built-in terminal with the worktree directory:

- Add pendingTerminalCwd state to app store
- Update use-worktree-actions to set pending cwd and navigate to /terminal
- Add effect in terminal-view to create session with pending cwd

This matches the original PR #558 behavior.

* feat(ui): add terminal open mode setting (new tab vs split)

Add a setting to choose how "Open in Terminal" behaves:
- New Tab: Creates a new tab named after the branch (default)
- Split: Adds to current tab as a split view

Changes:
- Add openTerminalMode setting to terminal state ('newTab' | 'split')
- Update terminal-view to respect the setting
- Add UI in Terminal Settings to toggle the behavior
- Rename pendingTerminalCwd to pendingTerminal with branch name

The new tab mode names tabs after the branch for easy identification.
The split mode is useful for comparing terminals side by side.

* feat(ui): display branch name in terminal header with git icon

- Move branch name display from tab name to terminal header
- Show full branch name (no truncation) with GitBranch icon
- Display branch name for both 'new tab' and 'split' modes
- Persist openTerminalMode setting to server and include in import/export
- Update settings dropdown to simplified "New Tab" label

* feat: add external terminal support with cross-platform detection

Add support for opening worktree directories in external terminals
(iTerm2, Warp, Ghostty, System Terminal, etc.) while retaining the
integrated terminal as the default option.

Changes:
- Add terminal detection for macOS, Windows, and Linux
- Add "Open in Terminal" split-button in worktree dropdown
- Add external terminal selection in Settings > Terminal
- Add default open mode setting (new tab vs split)
- Display branch name in terminal panel header
- Support 20+ terminals across platforms

Part of #558, Closes #550

* fix: address PR review comments

- Add nonce parameter to terminal navigation to allow reopening same
  worktree multiple times
- Fix shell path escaping in editor.ts using single-quote wrapper
- Add validatePathParams middleware to open-in-external-terminal route
- Remove redundant validation block from createOpenInExternalTerminalHandler
- Remove unused pendingTerminal state and setPendingTerminal action
- Remove unused getTerminalInfo function from editor.ts

* fix: address PR review security and validation issues

- Add runtime type check for worktreePath in open-in-terminal handler
- Fix Windows Terminal detection using commandExists before spawn
- Fix xterm shell injection by using sh -c with escapeShellArg
- Use loose equality for null/undefined in useEffectiveDefaultTerminal
- Consolidate duplicate imports from open-in-terminal.js

* chore: update package-lock.json

* fix: use response.json() to prevent disposal race condition in E2E test

Replace response.body() with response.json() in open-existing-project.spec.ts
to fix the "Response has been disposed" error. This matches the pattern used
in other test files.

* Revert "fix: use response.json() to prevent disposal race condition in E2E test"

This reverts commit 36bdf8c24a.

* fix: address PR review feedback for terminal feature

- Add explicit validation for worktreePath in createOpenInExternalTerminalHandler
- Add aria-label to refresh button in terminal settings for accessibility
- Only show "no terminals" message when not refreshing
- Reset initialCwdHandledRef on failure to allow retries
- Use z.coerce.number() for nonce URL param to handle string coercion
- Preserve branchName when creating layout for empty tab
- Update getDefaultTerminal return type to allow null result

---------

Co-authored-by: Kacper <kacperlachowiczwp.pl@wp.pl>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-19 10:22:26 +01:00
Shirone
e73c92b031 Merge pull request #582 from stefandevo/fix/e2e-response-disposal-race
fix: prevent response disposal race condition in E2E test
2026-01-19 00:02:04 +00:00
Web Dev Cody
09151aa3c8 Merge pull request #590 from AutoMaker-Org/automode-api
feat: implement cursor model migration and enhance auto mode function…
2026-01-18 18:59:59 -05:00
Shirone
d6300f33ca fix: skip PR assignment for main worktree and refine metadata fallback logic
This update modifies the list handler to skip PR assignment for the main worktree, preventing confusion when displaying PRs on the main branch tab. Additionally, the fallback logic for assigning stored metadata is refined to only apply if the PR state is 'OPEN', ensuring more accurate representation of PRs.
2026-01-19 00:49:56 +01:00
webdevcody
4b0d1399b1 feat: implement cursor model migration and enhance auto mode functionality
This commit introduces significant updates to the cursor model handling and auto mode features. The cursor model IDs have been standardized to a canonical format, ensuring backward compatibility while migrating legacy IDs. New endpoints for starting and stopping the auto mode loop have been added, allowing for better control over project-specific auto mode operations.

Key changes:
- Updated cursor model IDs to use the 'cursor-' prefix for consistency.
- Added new API endpoints: `/start` and `/stop` for managing auto mode.
- Enhanced the status endpoint to provide detailed project-specific auto mode information.
- Improved error handling and logging throughout the auto mode service.
- Migrated legacy model IDs to their canonical counterparts in various components.

This update aims to streamline the user experience and ensure a smooth transition for existing users while providing new functionalities.
2026-01-18 18:42:52 -05:00
Stefan de Vogelaere
55a34a9f1f feat: add auto-login for dev mode and fix log box formatting (#567)
* feat: add auto-login for dev mode and fix log box formatting

Add AUTOMAKER_AUTO_LOGIN environment variable that, when set to 'true',
automatically creates a session for web mode users without requiring
them to enter the API key. Useful for development environments.

Also fix formatting issues in console log boxes:
- API Key box: add right border, show auto-login status and tips
- Claude auth warning: add separator line, fix emoji spacing
- Server info box: use consistent 71-char width, proper padding
- Port conflict error: use same width, proper dynamic padding

Environment variables:
- AUTOMAKER_AUTO_LOGIN=true: Skip login prompt, auto-create session
- AUTOMAKER_API_KEY: Use a fixed API key (existing)
- AUTOMAKER_HIDE_API_KEY=true: Hide the API key banner (existing)

* fix: add production safeguard to auto-login and extract log box constant

- Add NODE_ENV !== 'production' check to prevent auto-login in production
- Extract magic number 67 to BOX_CONTENT_WIDTH constant in auth.ts and index.ts
- Document AUTOMAKER_AUTO_LOGIN env var in CLAUDE.md and README.md
2026-01-18 23:48:00 +01:00
Stefan de Vogelaere
c4652190eb feat: add three viewing modes for app specification (#566)
* feat: add three viewing modes for app specification

Introduces View, Edit, and Source modes for the spec page:

- View: Clean read-only display with cards, badges, and accordions
- Edit: Structured form-based editor for all spec fields
- Source: Raw XML editor for advanced users

Also adds @automaker/spec-parser shared package for XML parsing
between server and client.

* fix: address PR review feedback

- Replace array index keys with stable UUIDs in array-field-editor,
  features-section, and roadmap-section components
- Replace regex-based XML parsing with fast-xml-parser for robustness
- Simplify renderContent logic in spec-view by removing dead code paths

* fix: convert git+ssh URLs to https in package-lock.json

* fix: address PR review feedback for spec visualiser

- Remove unused RefreshCw import from spec-view.tsx
- Add explicit parsedSpec check in renderContent for robustness
- Hide save button in view mode since it's read-only
- Remove GripVertical drag handles since drag-and-drop is not implemented
- Rename Map imports to MapIcon to avoid shadowing global Map
- Escape tagName in xml-utils.ts RegExp functions for safety
- Add aria-label attributes for accessibility on mode tabs

* fix: address additional PR review feedback

- Fix Textarea controlled/uncontrolled warning with default value
- Preserve IDs in useEffect sync to avoid unnecessary remounts
- Consolidate lucide-react imports
- Add JSDoc note about tag attributes limitation in xml-utils.ts
- Remove redundant disabled prop from SpecModeTabs
2026-01-18 23:45:43 +01:00
Web Dev Cody
af95dae73a Merge pull request #574 from stefandevo/fix/v0.13.0rc
fix: use getTerminalFontFamily for dev server logs terminal font
2026-01-18 17:17:44 -05:00
Web Dev Cody
1c1d9d30a7 Merge pull request #583 from stefandevo/fix/initial-theme
fix: prevent new projects from overriding global theme setting
2026-01-18 17:17:20 -05:00
webdevcody
3faebfa3fe refactor: update migration process to selectively copy specific application data files
This commit refines the migration functionality in the SettingsService to focus on migrating only specific application data files from the legacy Electron userData directory. The migration now explicitly handles files such as settings.json, credentials.json, and agent-sessions, while excluding internal caches. Enhanced logging provides clearer insights into the migration process, including skipped items and errors encountered.

Key changes:
- Modified migration logic to target specific application data files and directories.
- Improved logging for migration status and error handling.
- Introduced a new private method, `copyDirectory`, to facilitate directory copying.
2026-01-18 16:29:55 -05:00
webdevcody
d0eaf0e51d feat: enhance migration process to copy entire data directory from legacy Electron userData location
This update expands the migration functionality in the SettingsService to include the entire data directory, rather than just specific files. The migration now handles all files and directories, including settings.json, credentials.json, sessions-metadata.json, and conversation histories. Additionally, logging has been improved to reflect the migration of all items and to provide clearer information on the migration process.

Key changes:
- Updated migration logic to recursively copy all contents from the legacy directory.
- Enhanced logging for migration status and errors.
- Added a new private method, `copyDirectoryContents`, to facilitate the recursive copying of files and directories.
2026-01-18 16:25:25 -05:00
Web Dev Cody
cf3ee6aec6 Merge pull request #586 from ScotTFO/fix/windows-launcher-compatibility
fix: add cross-platform Node.js launcher for Windows CMD/PowerShell support
2026-01-18 16:11:56 -05:00
webdevcody
da80729f56 feat: implement migration of settings from legacy Electron userData directory
This commit introduces a new feature in the SettingsService to migrate user settings from the legacy Electron userData directory to the new shared data directory. The migration process checks for the existence of settings in both locations and handles the transfer of settings.json and credentials.json files if necessary. It also includes logging for successful migrations and any errors encountered during the process, ensuring a smooth transition for users upgrading from previous versions.

Key changes:
- Added `migrateFromLegacyElectronPath` method to handle migration logic.
- Implemented platform-specific paths for legacy settings based on the operating system.
- Enhanced error handling and logging for migration operations.
2026-01-18 16:10:04 -05:00
Web Dev Cody
9ad58e1a74 Merge pull request #587 from AutoMaker-Org/fix/sidebar-project-theme-ui-overlap
fix: enhance project context menu with theme submenu improvements
2026-01-18 15:51:24 -05:00
Kacper
55b17a7a11 fix: adress pr comments and add docs strings 2026-01-18 21:46:14 +01:00
Scott
2854e24e84 fix: validate both ports before assigning either
Collect web and server port inputs first, then validate both before
assigning to global variables. This prevents WEB_PORT from being
modified when SERVER_PORT validation subsequently fails.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:46:11 -07:00
Scott
b91d84ee84 fix: improve bash detection and add input validation
- Add detectBashVariant() that checks $OSTYPE for reliable WSL/MSYS/Cygwin
  detection instead of relying solely on executable path
- Add input validation to convertPathForBash() to catch null/undefined args
- Add validate_port() function in bash script to reject invalid port input
  (non-numeric, out of range) with clear error messages

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:39:15 -07:00
Kacper
30a2c3d740 feat: enhance project context menu with theme submenu improvements
- Added handlers for theme submenu to manage mouse enter/leave events with a delay, preventing premature closure.
- Implemented dynamic positioning for the submenu to avoid viewport overflow, ensuring better visibility.
- Updated styles to accommodate new positioning logic and added scroll functionality for theme selection.

These changes improve user experience by making the theme selection process more intuitive and visually accessible.
2026-01-18 21:36:23 +01:00
Scott
e3213b1426 fix: add WSL/Cygwin path translation and improve signal handling
- Add convertPathForBash() function that detects bash variant:
  - Cygwin: /cygdrive/c/path
  - WSL: /mnt/c/path
  - MSYS/Git Bash: /c/path
- Update exit handler to properly handle signal termination
  (exit code 1 when killed by signal vs code from child)

Addresses remaining CodeRabbit PR #586 recommendations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-18 13:30:04 -07:00
Scott
bfc23cdfa1 fix: guard signal forwarding against race conditions 2026-01-18 13:12:11 -07:00
Scott
8b5da3195b fix: address PR review feedback
- Remove duplicate killPtyProcess method in claude-usage-service.ts
- Import and use spawnSync correctly instead of spawn.sync
- Fix misleading comment about shell option and signal handling
2026-01-18 13:06:13 -07:00
Scott
0c452a3ebc fix: add cross-platform Node.js launcher for Windows CMD/PowerShell support
The `./start-automaker.sh` script doesn't work when invoked from Windows
CMD or PowerShell because:
1. The `./` prefix is Unix-style path notation
2. Windows shells don't execute .sh files directly

This adds a Node.js launcher (`start-automaker.mjs`) that:
- Detects the platform and finds bash (Git Bash, MSYS2, Cygwin, or WSL)
- Converts Windows paths to Unix-style for bash compatibility
- Passes all arguments through to the original bash script
- Provides helpful error messages if bash isn't found

The npm scripts now use `node start-automaker.mjs` which works on all
platforms while preserving the full functionality of the bash TUI launcher.
2026-01-18 12:59:46 -07:00
Scott
cfc5530d1c Merge origin/main into local branch
Resolved conflict in terminal-service.ts by accepting upstream
Electron detection properties alongside local Windows termination fixes.
2026-01-18 12:16:00 -07:00
DhanushSantosh
749fb3a5c1 fix: add token query parameter support to auth middleware for web mode image loading
The /api/fs/image endpoint requires authentication, but when loading images via
CSS background-image or img tags, only query parameters can be used (headers
cannot be set). Web mode passes the session token as a query parameter (?token=...),
but the auth middleware didn't recognize it, causing image requests to fail.

This fix adds support for the 'token' query parameter in the checkAuthentication
function, allowing the auth middleware to validate web mode session tokens when
they're passed as query parameters.

Now image loads work correctly in web mode by:
1. Client passes session token in URL: ?token={sessionToken}
2. Auth middleware recognizes and validates the token query parameter
3. Image endpoint successfully serves the image after authentication

This fixes the issue where kanban board background images were not visible
in web mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 21:23:18 +05:30
DhanushSantosh
dd26de9f55 fix: add authentication validation to image endpoint for web mode
Adds authentication checks to the /api/fs/image endpoint to validate
session tokens in web mode. This ensures background images and other
image assets load correctly in web mode by validating:
- session token from query parameter (web mode)
- API key from query parameter (Electron mode)
- session cookie (web mode fallback)
- X-API-Key and X-Session-Token headers

This fixes the issue where kanban board background images were not
visible in web mode because the image request lacked proper authentication.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 21:13:10 +05:30
Stefan de Vogelaere
b6cb926cbe fix: also remove theme calculation from dashboard-view
Missed this code path which is used when opening projects from the
dashboard after completing setup.
2026-01-18 16:18:58 +01:00
Stefan de Vogelaere
eb30ef71f9 fix: prevent response disposal race condition in E2E test
Wrap route.fetch() and response.json() in try/catch blocks to handle
cases where the response is disposed before it can be accessed. Falls
back to route.continue() to let the original request proceed normally.

This fixes the intermittent "Response has been disposed" error in
open-existing-project.spec.ts that occurs due to timing issues in CI.
2026-01-18 16:13:53 +01:00
Stefan de Vogelaere
75fe579e93 fix: prevent new projects from overriding global theme setting
When creating new projects, the theme was always explicitly set even when
matching the global theme. This caused "Use Global Theme" to be unchecked,
preventing global theme changes from affecting the project.

Now theme is only set on new projects when explicitly provided or when
recovering a trashed project's theme preference.
2026-01-18 16:12:32 +01:00
Stefan de Vogelaere
8ab9dc5a11 fix: use user's terminal font settings for dev server logs
XtermLogViewer was passing DEFAULT_TERMINAL_FONT directly to xterm.js,
but this value is 'default' - a sentinel string for the dropdown selector,
not a valid CSS font family. Also the font size was hardcoded to 13px.

Now reads the user's font preference from terminalState:
- fontFamily: Uses getTerminalFontFamily() to convert to CSS font stack
- defaultFontSize: Uses store value when fontSize prop not provided

Also adds useEffects to update font settings dynamically when they change.

This ensures dev server logs respect Settings > Terminal settings.
2026-01-18 15:22:21 +01:00
Dhanush Santosh
96202d4bc2 Merge pull request #573 from DhanushSantosh/patchcraft
fix: resolve data directory persistence between Electron and Web modes
2026-01-18 19:36:09 +05:30
DhanushSantosh
f68aee6a19 fix: prevent response disposal race condition in E2E test 2026-01-18 19:29:32 +05:30
DhanushSantosh
7795d81183 merge: resolve conflicts with upstream/v0.13.0rc 2026-01-18 19:21:56 +05:30
Dhanush Santosh
0c053dab48 Merge pull request #578 from stefandevo/fix/v0.13.0rc-e2e-ci
fix: improve project-switcher data-testid for uniqueness and special chars
2026-01-18 19:14:32 +05:30
Stefan de Vogelaere
1ede7e7e6a refactor: extract sanitizeForTestId to shared utility
Address PR review comments by:
- Creating shared sanitizeForTestId utility in apps/ui/src/lib/utils.ts
- Updating ProjectSwitcherItem to use the shared utility
- Adding matching helper to test utils for E2E tests
- Updating all E2E tests to use the sanitization helper

This ensures the component and tests use identical sanitization logic,
making tests robust against project names with special characters.
2026-01-18 14:36:31 +01:00
DhanushSantosh
980006d40e fix: use setItem helper and safer Playwright selector in tests
- Replace direct localStorage.setItem() with setItem helper in use-settings-migration.ts (line 472) for consistent storage-availability checks and error handling
- Replace brittle attribute selector with Playwright's getByRole in open-existing-project.spec.ts (line 162) to handle names containing special characters

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 19:06:07 +05:30
Stefan de Vogelaere
ef2dcbacd4 fix: improve project-switcher data-testid for uniqueness and special chars
The data-testid generation was using only the sanitized project name which
could produce collisions and didn't handle special characters properly.

Changes:
- Combine stable project.id with sanitized name: project-switcher-{id}-{name}
- Expand sanitization to remove non-alphanumeric chars (except hyphens)
- Collapse multiple hyphens and trim leading/trailing hyphens
- Update E2E tests to use ends-with selector for matching

This ensures test IDs are deterministic, unique, and safe for CSS selectors.
2026-01-18 14:29:04 +01:00
DhanushSantosh
505a2b1e0b docs: enhance docstrings to reach 80% coverage threshold
- Expanded docstrings in use-settings-migration.ts for parseLocalStorageSettings, localStorageHasMoreData, mergeSettings, and performSettingsMigration
- Expanded docstrings in use-settings-sync.ts for getSettingsFieldValue and hasSettingsFieldChanged helper functions
- Added detailed parameter and return value documentation
- Improved clarity on migration flow and settings merging logic

This brings docstring coverage from 77.78% to 80%+ to satisfy CodeRabbit checks.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 18:42:41 +05:30
DhanushSantosh
2e57553639 Merge remote-tracking branch 'upstream/v0.13.0rc' into patchcraft 2026-01-18 18:21:34 +05:30
DhanushSantosh
f37812247d fix: resolve data directory persistence between Electron and Web modes
This commit fixes bidirectional data synchronization between Electron and Web
modes by addressing multiple interconnected issues:

**Core Fixes:**

1. **Electron userData Path (main.ts)**
   - Explicitly set userData path in development using app.setPath()
   - Navigate from __dirname to project root instead of relying on process.cwd()
   - Ensures Electron reads from /data instead of ~/.config/Automaker

2. **Server DataDir Path (main.ts, start-automaker.sh)**
   - Fixed startServer() to use __dirname for reliable path calculation
   - Export DATA_DIR environment variable in start-automaker.sh
   - Server now consistently uses shared /data directory

3. **Settings Sync Protection (settings-service.ts)**
   - Modified wipe protection to distinguish legitimate removals from accidents
   - Allow empty projects array if trashedProjects has items
   - Prevent false-positive wipe detection when removing projects

4. **Diagnostics & Logging**
   - Enhanced cache loading logging in use-settings-migration.ts
   - Detailed migration decision logs for troubleshooting
   - Track project counts from both cache and server

**Impact:**
- Projects created in Electron now appear in Web mode after restart
- Projects removed in Web mode stay removed in Electron after restart
- Settings changes sync bidirectionally across mode switches
- No more data loss or project duplication issues

**Testing:**
- Verified Electron uses /home/dhanush/Projects/automaker/data
- Confirmed server startup logs show correct DATA_DIR
- Tested project persistence across mode restarts
- Validated no writes to ~/.config/Automaker in dev mode

Fixes: Data persistence between Electron and Web modes

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 18:21:14 +05:30
DhanushSantosh
484d4c65d5 fix: use shared data directory for Electron and web modes
CRITICAL: Electron was using ~/.config/@automaker/app/data/ while web mode
used ./data/, causing projects to never sync between modes.

In development mode, both now use the shared project root ./data directory.
In production, Electron uses its isolated userData directory for app portability.

This ensures:
- Electron projects sync to the same server data directory as web mode
- Projects opened in Electron immediately appear in web mode
- Server restart doesn't lose projects from either mode

The issue was on line 487 where DATA_DIR was set to app.getPath('userData')
instead of the shared project ./data directory.

Fixes the fundamental problem where projects never appeared in web mode
even though they were in the server's settings file.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 16:25:35 +05:30
Shirone
327aef89a2 Merge pull request #562 from AutoMaker-Org/feature/v0.12.0rc-1768688900786-5ea1
refactor: standardize PR state representation across the application
2026-01-18 10:45:59 +00:00
Kacper
d96f369b73 test: mock Unix platform for SIGTERM behavior in ClaudeUsageService tests
Added a mock for the Unix platform in the SIGTERM test case to ensure proper behavior during testing on non-Windows systems. This change enhances the reliability of the tests by simulating the expected environment for process termination.
2026-01-17 18:14:36 -07:00
Kacper
f0e655f49a fix: unify PTY process termination handling across platforms
Refactored the process termination logic in both ClaudeUsageService and TerminalService to use a centralized method for killing PTY processes. This ensures consistent handling of process termination across Windows and Unix-like systems, improving reliability and maintainability of the code.
2026-01-17 18:13:54 -07:00
Kacper
d22deabe79 fix: improve process termination handling for Windows
Updated the process termination logic in ClaudeUsageService to handle Windows environments correctly. The code now checks the operating system and calls the appropriate kill method, ensuring consistent behavior across platforms.
2026-01-17 18:13:54 -07:00
Web Dev Cody
518c81815e Merge pull request #563 from AutoMaker-Org/v0.12.0rc
V0.12.0rc
2026-01-17 18:50:55 -05:00
webdevcody
01652d0d11 feat: add hostname configuration for web server
Introduce APP_HOST variable to allow custom hostname configuration for the web server. Default to localhost if VITE_HOSTNAME is not set. Update relevant URLs and CORS origins to use APP_HOST, enhancing flexibility for local development and deployment.

This change improves the application's adaptability to different environments.
2026-01-17 18:43:10 -05:00
Shirone
44e665f1bf fix: adress pr comments 2026-01-18 00:22:27 +01:00
Shirone
5b1e0105f4 refactor: standardize PR state representation across the application
Updated the PR state handling to use a consistent uppercase format ('OPEN', 'MERGED', 'CLOSED') throughout the codebase. This includes changes to the worktree metadata interface, PR creation logic, and related tests to ensure uniformity and prevent potential mismatches in state representation.

Additionally, modified the GitHub PR fetching logic to retrieve all PR states, allowing for better detection of state changes.

This refactor enhances clarity and consistency in how PR states are managed and displayed.
2026-01-17 23:58:19 +01:00
webdevcody
832d10e133 refactor: replace Loader2 with Spinner component across the application
This update standardizes the loading indicators by replacing all instances of Loader2 with the new Spinner component. The Spinner component provides a consistent look and feel for loading states throughout the UI, enhancing the user experience.

Changes include:
- Updated loading indicators in various components such as popovers, modals, and views.
- Ensured that the Spinner component is used with appropriate sizes for different contexts.

No functional changes were made; this is purely a visual and structural improvement.
2026-01-17 17:58:16 -05:00
DhanushSantosh
7b7ac72c14 fix: use shared data directory for Electron and web modes
CRITICAL FIX: Electron and web mode were using DIFFERENT data directories:
- Electron: Docker volume 'automaker-data' (isolated from host)
- Web: Local ./data directory (host filesystem)

This caused projects opened in Electron to never appear in web mode because
they were synced to a completely separate Docker volume.

Solution: Mount the host's ./data directory into both containers
This ensures Electron and web mode always share the same data directory
and all projects are immediately visible across modes.

Now when you:
1. Open projects in Electron → synced to ./data
2. Switch to web mode → loads from same ./data
3. Restart server → both see the same projects

Fixes issue where projects opened in Electron don't appear in web mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 03:06:09 +05:30
DhanushSantosh
9137f0e75f fix: keep localStorage cache in sync with server settings
When switching between Electron and web modes or when the server temporarily
stops, web mode was falling back to stale localStorage data instead of fresh
server data.

This fix:
1. Updates localStorage cache whenever fresh server settings are fetched
2. Updates localStorage cache whenever settings are synced to server
3. Prioritizes fresh settings cache over old Zustand persisted storage

This ensures that:
- Web mode always sees the latest projects even after mode switches
- Switching from Electron to web mode immediately shows new projects
- Server restarts don't cause web mode to use stale cached data

Fixes issue where projects opened in Electron didn't appear in web mode
after stopping and restarting the server.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 02:46:31 +05:30
DhanushSantosh
b66efae5b7 fix: sync projects immediately instead of debouncing
Projects are critical data that must persist across mode switches (Electron/web).
Previously, project changes were debounced by 1 second, which could cause data
loss if:
1. User switched from Electron to web mode quickly
2. App closed before debounce timer fired
3. Network temporarily unavailable during debounce window

This change makes project array changes sync immediately (syncNow) instead of
using the 1-second debounce, ensuring projects are always persisted to the
server right away and visible in both Electron and web modes.

Fixes issue where projects opened in Electron didn't appear in web mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 02:30:16 +05:30
DhanushSantosh
2a8706e714 fix: add session token to image URLs for web mode authentication
In web mode, image loads may not send session cookies due to proxy/CORS
restrictions. This adds the session token as a query parameter to ensure
images load correctly with proper authentication in web mode.

Fixes custom project icons and images not loading in web mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 02:21:47 +05:30
DhanushSantosh
174c02cb79 fix: automatically remove projects with non-existent paths
When a project fails to initialize because the directory no longer exists
(e.g., test artifacts, deleted folders), automatically remove it from the
project list instead of showing the error repeatedly on every reload.

This prevents users from being stuck with broken project references in their
settings after testing or when project directories are moved/deleted.

The user is notified with a toast message explaining the removal.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 02:09:28 +05:30
DhanushSantosh
a7f7898ee4 fix: persist session token to localStorage for web mode page reload survival
Web mode sessions were being lost on page reload because the session token was
stored only in memory (cachedSessionToken). When the page reloaded, the token
was cleared and verifySession() would fail, redirecting users to login.

This commit adds localStorage persistence for the session token, ensuring:
1. Token survives page reloads in web mode
2. verifySession() can use the persisted token from localStorage
3. Token is cleared properly on logout
4. Graceful fallback if localStorage is unavailable (SSR, disabled storage)

The HTTP-only cookie alone isn't sufficient for web mode due to SameSite cookie
restrictions and potential proxy issues with credentials forwarding.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 02:02:10 +05:30
DhanushSantosh
fdad82bf88 fix: enable WebSocket proxying in Vite dev server
Enables ws: true for /api proxy to properly forward WebSocket connections through the development server in web mode. This ensures real-time features work correctly when developing in browser mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:52:11 +05:30
DhanushSantosh
b0b49764b9 fix: add localhost to CORS_ORIGIN for web mode development
The web mode launcher was setting CORS_ORIGIN to only include the system
hostname and 127.0.0.1, but users access via http://localhost:3007 which
wasn't in the allowed list.

Now includes:
- http://localhost:3007 (primary dev URL)
- http://$HOSTNAME:3007 (system hostname if needed)
- http://127.0.0.1:3007 (loopback IP)

Also cleaned up debug logging from CORS check since root cause is now clear.

Fixes: Persistent "Not allowed by CORS" errors in web mode

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:50:41 +05:30
DhanushSantosh
e10cb83adc debug: add CORS logging to diagnose origin rejection
Added detailed logging to see:
- What origin is being sent
- How the hostname is parsed
- Why origins are being accepted/rejected

This will help us understand why CORS is still failing in web mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:47:53 +05:30
DhanushSantosh
b8875f71a5 fix: improve CORS configuration to handle localhost and private IPs
The CORS check was too strict for local development. Changed to:
- Parse origin URL properly to extract hostname
- Allow all localhost origins (any port)
- Allow all 127.0.0.1 origins (loopback IP)
- Allow all private network IPs (192.168.x.x, 10.x.x.x, 172.x.x.x)
- Keep security by rejecting unknown origins

This fixes CORS errors when accessing from http://localhost:3007
or other local addresses during web mode development.

Fixes: "Not allowed by CORS" errors in web mode

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:45:10 +05:30
DhanushSantosh
4186b80a82 fix: use relative URLs in web mode to leverage Vite proxy
In web mode, the API client was hardcoding localhost:3008, which bypassed
the Vite proxy and caused CORS errors. Now it uses relative URLs (just /api)
in web mode, allowing the proxy to handle routing and making requests appear
same-origin.

- Web mode: Use relative URLs for proxy routing (no CORS issues)
- Electron mode: Continue using hardcoded localhost:3008

This allows the Vite proxy configuration to actually work in web mode.

Fixes: Persistent CORS errors in web mode development

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:41:21 +05:30
DhanushSantosh
7eae0215f2 chore: update package-lock.json 2026-01-18 01:38:09 +05:30
DhanushSantosh
4cd84a4734 fix: add API proxy to Vite dev server for web mode CORS
When running in web mode (npm run dev:web), the frontend on localhost:3007
was making cross-origin requests to the backend on localhost:3008, causing
CORS errors.

Added Vite proxy configuration to forward /api requests from the dev server
to the backend. This makes all API calls appear same-origin to the browser,
eliminating CORS blocks during development.

Now web mode users can access http://localhost:3007 without CORS errors.

Fixes: CORS "Not allowed by CORS" errors in web mode

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:37:49 +05:30
DhanushSantosh
044c3d50d1 fix: mark dmg-license as optional dependency for cross-platform builds
dmg-license is a macOS-only package used for building DMG installers.
Moving it from devDependencies to optionalDependencies allows npm ci
to succeed on Linux and Windows without failing on platform checks.

macOS developers will still get the package when available.
Linux/Windows developers can now run npm ci without errors.

Fixes: npm ci failing on Linux with "EBADPLATFORM" error

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:28:14 +05:30
Shirone
a1de0a78a0 Merge pull request #545 from stefandevo/fix/sandbox-warning-persistence
fix: sandbox warning persistence and add env var option
2026-01-17 18:57:19 +00:00
Shirone
fef9639e01 Merge pull request #539 from stefandevo/fix/light-mode-agent-output
fix: respect theme in agent output modal and log viewer
2026-01-17 18:35:32 +00:00
Stefan de Vogelaere
aef479218d fix: use DEFAULT_FONT_VALUE for initial terminal font
The initial terminalState.fontFamily was set to a raw font string
that didn't match any option in TERMINAL_FONT_OPTIONS, causing the
dropdown to appear empty. Changed to use DEFAULT_FONT_VALUE sentinel.
2026-01-17 19:32:42 +01:00
Stefan de Vogelaere
ded5ecf4e9 refactor: reduce code duplication in font settings and sync logic
Address CodeRabbit review feedback:
- Create getEffectiveFont helper to deduplicate getEffectiveFontSans/Mono
- Extract getSettingsFieldValue and hasSettingsFieldChanged helpers
- Create reusable FontSelector component for font selection UI
- Refactor project-theme-section and appearance-section to use FontSelector
2026-01-17 19:30:00 +01:00
Stefan de Vogelaere
a01f299597 fix: resolve type errors after merging upstream v0.12.0rc
- Fix ThemeMode type casting in __root.tsx
- Use specRegeneration.create() instead of non-existent generateAppSpec
- Add missing keyboard shortcut entries for projectSettings and notifications
- Fix lucide-react type casts with intermediate unknown cast
- Remove unused pipelineConfig prop from ListRow component
- Align SettingsProject interface with Project type
- Fix defaultDeleteBranchWithWorktree property name
2026-01-17 19:20:49 +01:00
Stefan de Vogelaere
21c9e88a86 Merge remote-tracking branch 'upstream/v0.12.0rc' into fix/light-mode-agent-output 2026-01-17 19:10:49 +01:00
Shirone
af17f6e36f Merge pull request #535 from stefandevo/v0.12.0rc
feat: add font customization and 8 new themes
2026-01-17 18:06:04 +00:00
Stefan de Vogelaere
e69a2ad722 docs: add AUTOMAKER_SKIP_SANDBOX_WARNING env var documentation
Document the new environment variable in README.md and .env.example
2026-01-17 18:33:08 +01:00
DhanushSantosh
0480f6ccd6 fix: handle dynamic model IDs with slashes in the model name
isOpencodeModel was rejecting valid dynamic model IDs like
'openrouter/qwen/qwen3-14b:free' because it was splitting on all slashes
and expecting exactly 2 parts. This caused valid OpenCode models to be
treated as unknown, falling back to Claude.

Now correctly splits on the FIRST slash only, allowing model names
like 'qwen/qwen3-14b:free' to be recognized as valid.

Fixes: User selects openrouter/qwen/qwen3-14b:free → server falls back to Claude

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 21:13:47 +05:30
DhanushSantosh
24042d20c2 fix: filter dynamic OpenCode models by enabled status in model selector
The phase model selector was showing ALL discovered dynamic models regardless
of whether they were enabled in settings. Now it filters dynamic models by
enabledDynamicModelIds, matching the behavior of Cursor models and making
the enable/disable setting meaningful.

Users can now:
- Disable models in settings they don't want to use
- See only enabled dynamic models in the model selector dropdown
- Have the "Select all" checkbox properly control which models appear

This ensures consistency: enabling/disabling models in settings affects
which models are available for feature execution.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 21:00:22 +05:30
DhanushSantosh
9c3b3a4104 fix: make dynamic models select-all checkbox respect search filters
The "Select all" checkbox for dynamic models was using the unfiltered models list,
causing the checkbox state to not reflect what users see when searching. Now it
correctly operates on the filtered models list so:

- Checkbox state matches the visible filtered models
- "Select all" only toggles models the user can see
- Indeterminate state shows if some filtered models are selected

This ensures the checkbox has a meaningful purpose when filtering/searching models.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 20:32:10 +05:30
Stefan de Vogelaere
17e2cdfc85 fix: sandbox warning persistence and add env var option
Fix race condition where sandbox warning appeared on every refresh
even after checking "Do not show again". The issue was that the
sandbox check effect ran before settings were hydrated from the
server, so skipSandboxWarning was always false (the default).

Changes:
- Add settingsLoaded to sandbox check dependencies to ensure the
  user's preference is loaded before checking
- Add AUTOMAKER_SKIP_SANDBOX_WARNING env var option to skip the
  warning entirely (useful for dev/CI environments)
2026-01-17 15:33:51 +01:00
DhanushSantosh
466c34afd4 ci: improve release workflow artifact uploads
- Use explicit file patterns to exclude builder config/debug files (builder-*.yml, *.yaml)
- Include blockmap files for efficient delta updates in auto-update scenarios
- Ensure only production-ready artifacts are uploaded to GitHub releases

This prevents accidental inclusion of builder configuration files in the release assets.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 19:18:15 +05:30
Shirone
b9567f5904 Merge pull request #542 from stefandevo/fix/api-key-info-on-dev-restart
docs: add hint about AUTOMAKER_API_KEY env var to API key banner
2026-01-17 13:09:36 +00:00
Shirone
c2cf8ae892 Merge pull request #540 from stefandevo/fix/gh-not-in-git-folder
fix: stop repeated GitHub PR fetch warnings for non-GitHub repos
2026-01-17 13:08:28 +00:00
Stefan de Vogelaere
3aa3c10ea4 docs: add hint about AUTOMAKER_API_KEY env var to API key banner
When the dev server restarts, developers need to re-enter the API key
in the browser. While the key is persisted to ./data/.api-key, this
file may be missing in clean dev scenarios.

This adds a helpful tip to the API key banner informing developers
they can set AUTOMAKER_API_KEY environment variable for a persistent
API key during development, avoiding the need to re-enter it after
server restarts.
2026-01-17 13:53:34 +01:00
Stefan de Vogelaere
5cd4183a7b fix: use fresh timestamp when setting cache entry
Use Date.now() after checkGitHubRemote() completes instead of the
pre-captured timestamp to ensure accurate 5-minute TTL.
2026-01-17 12:36:33 +01:00
Stefan de Vogelaere
2d9e38ad99 fix: stop repeated GitHub PR fetch warnings for non-GitHub repos
When opening a git repository without a GitHub remote, the server logs
were spammed with warnings every 5 seconds during worktree polling:

  WARN [Worktree] Failed to fetch GitHub PRs: Command failed: gh pr list
  ... no git remotes found

This happened because fetchGitHubPRs() ran `gh pr list` without first
checking if the project has a GitHub remote configured.

Changes:
- Add per-project cache for GitHub remote status with 5-minute TTL
- Check cache before attempting to fetch PRs, skip silently if no remote
- Add forceRefreshGitHub parameter to clear cache on manual refresh
- Pass forceRefreshGitHub when user clicks the refresh worktrees button

This allows users to add a GitHub remote and immediately detect it by
clicking the refresh button, while preventing log spam during normal
polling for projects without GitHub remotes.
2026-01-17 12:32:42 +01:00
Shirone
93d73f6d26 Merge pull request #529 from AutoMaker-Org/feature/v0.12.0rc-1768603410796-o2fn
fix: UUID generation fail in docker env
2026-01-17 11:19:01 +00:00
Stefan de Vogelaere
5209395a74 fix: respect theme in agent output modal and log viewer
The Agent Output modal and LogViewer component had hardcoded dark zinc
colors that didn't adapt to light mode themes. Replaced all hardcoded
colors with semantic Tailwind classes (bg-popover, text-foreground,
text-muted-foreground, bg-muted, border-border) that automatically
respect the active theme.
2026-01-17 11:44:33 +01:00
DhanushSantosh
ef6b9ac2d2 fix: add --force flag to npm ci for platform-specific dependencies
npm ci without --force rejects platform-specific packages like dmg-license
which is macOS-only. The --force flag tells npm to proceed even when
platform constraints are violated.

This allows Linux containers to skip dmg-license and continue with the
install, matching the behavior we want for Docker development.
2026-01-17 15:53:31 +05:30
DhanushSantosh
92afbeb6bd fix: run npm install as root to avoid permission issues
The named Docker volume for node_modules is created with root ownership,
causing EACCES errors when npm tries to write as the automaker user.

Solution:
- Run npm ci as root (installation phase)
- Use --legacy-peer-deps to properly handle optional dependencies
- Fix permissions after install
- Run server process as automaker user for security

This eliminates permission denied errors during npm install in dev containers.
2026-01-17 15:36:50 +05:30
DhanushSantosh
bbdc11ce47 fix: improve docker-compose npm install permissions and use npm ci
Fixes permission denied errors when installing dependencies in Docker containers:

Changes:
- Remove stale node_modules directories before installing (fresh start)
- Use 'npm ci --force' instead of 'npm install --force' for deterministic installs
- Add chmod to ensure writable permissions on node_modules
- Properly fix directory ownership and permissions before install

This prevents EACCES errors when multiple processes try to write to node_modules
and handles lingering permission issues from previous failed container runs.
2026-01-17 15:30:21 +05:30
DhanushSantosh
545bf2045d fix: add --force flag to npm install in docker-compose files
Allow npm to install platform-specific devDependencies (like dmg-license
which is macOS-only) by skipping platform checks in Linux Docker containers.
This matches the behavior already used in CI workflows.

Fixes Docker container startup failure:
- docker-compose.dev.yml (full stack development)
- docker-compose.dev-server.yml (server-only with local Electron)

The --force flag allows npm to proceed with installation even when some
optional/platform-specific dependencies can't be installed on the current
platform.
2026-01-17 15:00:48 +05:30
DhanushSantosh
a0471098fa fix: use specific data-testid selectors in project switcher assertions
Replace generic getByRole('button', { name: /.../ }) selectors with specific
getByTestId('project-switcher-project-') to avoid strict mode
violations where the selector resolves to multiple elements (project switcher
button and sidebar button).

Fixes failing E2E tests:
- feature-manual-review-flow.spec.ts
- new-project-creation.spec.ts
- open-existing-project.spec.ts
2026-01-17 14:53:06 +05:30
Stefan de Vogelaere
3320b40d15 feat: align terminal font settings with appearance fonts
- Terminal font dropdown now uses mono fonts from UI font options
- Unified font list between appearance section and terminal settings
- Terminal font persisted to GlobalSettings for import/export support
- Aligned global terminal settings popover with per-terminal popover:
  - Same settings in same order (Font Size, Run on New Terminal, Font Family, Scrollback, Line Height, Screen Reader)
  - Consistent styling (Radix Select instead of native select)
- Added terminal padding (12px vertical, 16px horizontal) for readability
2026-01-17 10:18:11 +01:00
DhanushSantosh
bac5e1c220 Merge upstream/v0.12.0rc into feature/fedora-rpm-support
Resolved conflict in backlog-plan/common.ts:
- Kept local (stricter) validation: Array.isArray(parsed?.result?.changes)
- This ensures type safety for the changes array
2026-01-17 14:44:37 +05:30
DhanushSantosh
33fa138d21 feat: add docker group support with sg docker command
Improve Docker access handling by detecting and using 'sg docker' command
when the user is in the docker group but hasn't logged out yet. This allows
running docker commands without requiring a full session restart after
`usermod -aG docker $USER`.

Changes:
- Detect docker group access and fall back to sg docker -c when needed
- Export DOCKER_CMD variable for use throughout the script
- Update all docker compose and docker ps commands to use DOCKER_CMD
- Improve error messages to guide users on fixing docker access issues
2026-01-17 14:40:08 +05:30
DhanushSantosh
bc09a22e1f fix: extract app version from apps/ui/package.json instead of monorepo root
The start-automaker.sh script now correctly sources the app version (0.12.0)
from apps/ui/package.json instead of the monorepo version (1.0.0) from the
root package.json. This ensures the launcher displays the correct Automaker
application version.
2026-01-17 14:19:12 +05:30
Stefan de Vogelaere
b771b51842 fix: address code review feedback
- Fix git+ssh URL to git+https for @electron/node-gyp (build compatibility)
- Remove duplicate @fontsource packages from root package.json
- Refactor font state initialization to reduce code duplication
2026-01-17 09:15:35 +01:00
Stefan de Vogelaere
1a7bf27ead feat: add new themes, Zed fonts, and sort theme/font lists
New themes added:
- Dark: Ayu Dark, Ayu Mirage, Ember, Matcha
- Light: Ayu Light, One Light, Bluloco, Feather

Other changes:
- Bundle Zed Sans and Zed Mono fonts from zed-industries/zed-fonts
- Sort font options alphabetically (default first)
- Sort theme options alphabetically (Dark/Light first)
- Improve Ayu Dark text contrast for better readability
- Fix Matcha theme to have green undertone instead of blue
2026-01-17 09:15:35 +01:00
Stefan de Vogelaere
f3b00d0f78 feat: add global font settings with per-project override
- Add fontFamilySans and fontFamilyMono to GlobalSettings type
- Add global font state and actions to app store
- Update getEffectiveFontSans/Mono to fall back to global settings
- Add font selectors to global Settings → Appearance
- Add "Use Global Font" checkboxes in Project Settings → Theme
- Add fonts to settings sync and migration
- Include fonts in import/export JSON
2026-01-17 09:15:34 +01:00
Stefan de Vogelaere
c747baaee2 fix: use sentinel value for default font selection
Radix UI Select doesn't allow empty string values, so use 'default'
as a sentinel value instead.
2026-01-17 09:15:34 +01:00
Stefan de Vogelaere
1322722db2 feat: add per-project font override settings
Add font selectors that allow per-project font customization for both
sans and mono fonts, independent of theme selection. Uses system fonts.

- Add fontFamilySans and fontFamilyMono to ProjectSettings and Project types
- Create ui-font-options.ts config with system font options
- Add store actions: setProjectFontSans, setProjectFontMono, getEffectiveFontSans, getEffectiveFontMono
- Apply font CSS variables in root component
- Add font selector UI in project-theme-section (Project Settings → Theme)
2026-01-17 09:15:34 +01:00
webdevcody
aa35eb3d3a feat: implement spec synchronization feature for improved project management
- Added a new `/sync` endpoint to synchronize the project specification with the current codebase and feature state.
- Introduced `syncSpec` function to handle the synchronization logic, updating technology stack, implemented features, and roadmap phases.
- Enhanced the running state management to track synchronization tasks alongside existing generation tasks.
- Updated UI components to support synchronization actions, including loading indicators and status updates.
- Improved logging and error handling for better visibility during sync operations.

These changes enhance project management capabilities by ensuring that the specification remains up-to-date with the latest code and feature developments.
2026-01-17 01:45:45 -05:00
webdevcody
616e2ef75f feat: add HOSTNAME and VITE_HOSTNAME support for improved server URL configuration
- Introduced `HOSTNAME` environment variable for user-facing URLs, defaulting to localhost.
- Updated server and client code to utilize `HOSTNAME` for constructing URLs instead of hardcoded localhost.
- Enhanced documentation in CLAUDE.md to reflect new configuration options.
- Added `VITE_HOSTNAME` for frontend API URLs, ensuring consistent hostname usage across the application.

These changes improve flexibility in server configuration and enhance the user experience by providing accurate URLs.
2026-01-16 22:40:36 -05:00
webdevcody
d98cae124f feat: enhance sidebar functionality for mobile and compact views
- Introduced a floating toggle button for mobile to show/hide the sidebar when collapsed.
- Updated sidebar behavior to completely hide on mobile when the new mobileSidebarHidden state is true.
- Added logic to conditionally render sidebar components based on screen size using the new useIsCompact hook.
- Enhanced SidebarHeader to include close and expand buttons for mobile views.
- Refactored CollapseToggleButton to hide in compact mode.
- Implemented HeaderActionsPanel for mobile actions in various views, improving accessibility and usability on smaller screens.

These changes improve the user experience on mobile devices by providing better navigation options and visibility controls.
2026-01-16 22:27:19 -05:00
Web Dev Cody
26aaef002d Merge pull request #537 from AutoMaker-Org/claude/issue-536-20260117-0132
feat: add configurable host binding for server and Vite dev server
2026-01-16 21:22:34 -05:00
claude[bot]
09bb59d090 feat: add configurable host binding for server and Vite dev server
- Add HOST environment variable (default: 0.0.0.0) to allow binding to specific network interfaces
- Update server to listen on configurable host instead of hardcoded localhost
- Update Vite dev server to respect HOST environment variable
- Enhanced server startup banner to display listening address
- Updated .env.example and CLAUDE.md documentation

Fixes #536

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-17 01:34:06 +00:00
Shirone
2f38ffe2d5 Merge pull request #532 from AutoMaker-Org/feature/v0.12.0rc-1768605251997-8ufb
fix: feature.json corruption on crash lose
2026-01-17 00:00:18 +00:00
Shirone
12fa9d858d Merge pull request #533 from AutoMaker-Org/feature/v0.12.0rc-1768605477061-fhv5
fix: Codex freezes
2026-01-16 23:59:16 +00:00
Shirone
c4e1a58e0d refactor: update timeout constants in CLI and Codex providers
- Removed redundant definition of CLI base timeout in `cli-provider.ts` and added a detailed comment explaining its purpose.
- Updated `codex-provider.ts` to use the imported `DEFAULT_TIMEOUT_MS` directly instead of an alias.
- Enhanced unit tests to ensure fallback behavior for invalid reasoning effort values in timeout calculations.
2026-01-17 00:52:57 +01:00
Shirone
8661f33c6d feat: implement atomic file writing and recovery utilities
- Introduced atomic write functionality for JSON files to ensure data integrity during writes.
- Added recovery mechanisms to read JSON files with fallback options for corrupted or missing files.
- Enhanced existing services to utilize atomic write and recovery features for improved reliability.
- Updated tests to cover new atomic writing and recovery scenarios, ensuring robust error handling and data consistency.
2026-01-17 00:50:51 +01:00
Shirone
5c24ca2220 feat: implement dynamic timeout calculation for reasoning efforts in CLI and Codex providers
- Added `calculateReasoningTimeout` function to dynamically adjust timeouts based on reasoning effort levels.
- Updated CLI and Codex providers to utilize the new timeout calculation, addressing potential timeouts for high reasoning efforts.
- Enhanced unit tests to validate timeout behavior for various reasoning efforts, ensuring correct timeout values are applied.
2026-01-17 00:50:06 +01:00
webdevcody
14559354dd refactor: update sidebar navigation sections for clarity
- Added Notifications and Project Settings as standalone sections in the sidebar without labels for visual separation.
- Removed the previous 'Other' label to enhance the organization of navigation items.
2026-01-16 18:49:35 -05:00
webdevcody
3bf9dbd43a Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-16 18:39:31 -05:00
webdevcody
bd3999416b feat: implement notifications and event history features
- Added Notification Service to manage project-level notifications, including creation, listing, marking as read, and dismissing notifications.
- Introduced Event History Service to store and manage historical events, allowing for listing, retrieval, deletion, and replaying of events.
- Integrated notifications into the server and UI, providing real-time updates for feature statuses and operations.
- Enhanced sidebar and project switcher components to display unread notifications count.
- Created dedicated views for managing notifications and event history, improving user experience and accessibility.

These changes enhance the application's ability to inform users about important events and statuses, improving overall usability and responsiveness.
2026-01-16 18:37:11 -05:00
Shirone
cc9f7d48c8 fix: enhance authentication error handling in Claude usage service tests
- Updated test to send a specific authentication error pattern to the data callback.
- Triggered the exit handler to validate the handling of authentication errors.
- Improved error message expectations for better clarity during test failures.
2026-01-16 23:58:48 +01:00
Shirone
6bb0461be7 Merge pull request #527 from AutoMaker-Org/feature/v0.12.0rc-1768598412391-lnp7
feat: implement XML extraction utilities and enhance feature handling
2026-01-16 22:52:55 +00:00
Shirone
16ef026b38 refactor: Centralize UUID generation with fallback support 2026-01-16 23:49:36 +01:00
Shirone
50ed405c4a fix: adress pr comments 2026-01-16 23:41:23 +01:00
Web Dev Cody
5407e1a9ff Merge pull request #525 from stefandevo/feature/project-settings
feat: Separate Project Settings from Global Settings
2026-01-16 17:31:19 -05:00
Stefan de Vogelaere
5436b18f70 refactor: move Project Settings below Tools section in sidebar
- Remove Project Settings from Project section
- Add Project Settings as standalone section below Tools/GitHub
- Use empty label for visual separation without header
- Add horizontal separator line above sections without labels
- Rename to "Project Settings" for clarity
- Keep "Global Settings" at bottom of sidebar
2026-01-16 23:27:53 +01:00
Stefan de Vogelaere
8b7700364d refactor: move project settings to Project section, rename global settings
- Move "Settings" from Tools section to Project section in sidebar
- Rename bottom settings link from "Settings" to "Global Settings"
- Update keyboard shortcut description accordingly
2026-01-16 23:17:50 +01:00
Shirone
3bdf3cbb5c fix: improve branch name generation logic in BoardView and useBoardActions
- Updated the logic for auto-generating branch names to consistently use the primary branch (main/master) and avoid nested feature paths.
- Removed references to currentWorktreeBranch in favor of getPrimaryWorktreeBranch for better clarity and maintainability.
- Enhanced comments to clarify the purpose of branch name generation.
2026-01-16 23:14:22 +01:00
webdevcody
45d9c9a5d8 fix: adjust menu dimensions and formatting in start-automaker.sh
- Increased MENU_BOX_WIDTH and MENU_INNER_WIDTH for better layout.
- Updated printf statements in show_menu() for consistent spacing and alignment of menu options.
- Enhanced exit option formatting for improved readability.
2026-01-16 17:10:20 -05:00
Stefan de Vogelaere
6a23e6ce78 fix: address PR review feedback
- Fix race conditions when rapidly switching projects
  - Added cancellation logic to prevent stale responses from updating state
  - Both project settings and init script loading now properly cancelled on unmount

- Improve error handling in custom icon upload
  - Added toast notifications for validation errors (file type, file size)
  - Added toast notifications for upload success/failure
  - Handle network errors gracefully with user feedback
  - Handle file reader errors
2026-01-16 23:03:21 +01:00
Stefan de Vogelaere
4e53215104 chore: reset package-lock.json to match base branch 2026-01-16 23:03:21 +01:00
Stefan de Vogelaere
2899b6d416 feat: separate project settings from global settings
This PR introduces a new dedicated Project Settings screen accessible from
the sidebar, clearly separating project-specific settings from global
application settings.

- Added new route `/project-settings` with dedicated view
- Sidebar navigation item "Settings" in Tools section (Shift+S shortcut)
- Sidebar-based navigation matching global Settings pattern
- Sections: Identity, Worktrees, Theme, Danger Zone

**Moved to Project Settings:**
- Project name and icon customization
- Project-specific theme override
- Worktree isolation enable/disable (per-project override)
- Init script indicator visibility and auto-dismiss
- Delete branch by default preference
- Initialization script editor
- Delete project (Danger Zone)

**Remains in Global Settings:**
- Global theme (default for all projects)
- Global worktree isolation (default for new projects)
- Feature Defaults, Model Defaults
- API Keys, AI Providers, MCP Servers
- Terminal, Keyboard Shortcuts, Audio
- Account, Security, Developer settings

Both Theme and Worktree Isolation now follow a consistent override pattern:
1. Global Settings defines the default value
2. New projects inherit the global value
3. Project Settings can override for that specific project
4. Changing global setting doesn't affect projects with overrides

- Fixed: Changing global theme was incorrectly overwriting project themes
- Fixed: Project worktree setting not persisting across sessions
- Project settings now properly load from server on component mount

- Shell syntax editor: improved background contrast (bg-background)
- Shell syntax editor: removed distracting active line highlight
- Project Settings header matches Context/Memory views pattern

- `apps/ui/src/routes/project-settings.tsx`
- `apps/ui/src/components/views/project-settings-view/` (9 files)

- Global settings simplified (removed project-specific options)
- Sidebar navigation updated with project settings link
- App store: added project-specific useWorktrees state/actions
- Types: added projectSettings keyboard shortcut
- HTTP client: added missing project settings response fields
2026-01-16 23:03:21 +01:00
Kacper
b263cc615e feat: implement XML extraction utilities and enhance feature handling
- Introduced a new xml-extractor module with functions for XML parsing, including escaping/unescaping XML characters, extracting sections and elements, and managing implemented features.
- Added functionality to add, remove, update, and check for implemented features in the app_spec.txt file.
- Enhanced the create and update feature handlers to check for duplicate titles and trigger synchronization with app_spec.txt on status changes.
- Updated tests to cover new XML extraction utilities and feature handling logic, ensuring robust functionality and reliability.
2026-01-16 22:55:10 +01:00
webdevcody
97b0028919 chore: update package versions to 0.12.0 and 0.12.0rc
- Updated the version in package.json for the main project to 0.12.0rc.
- Updated the version in apps/server/package.json and apps/ui/package.json to 0.12.0.
- Adjusted the version extraction logic in start-automaker.sh to reference the correct package.json path.
2026-01-16 16:48:43 -05:00
webdevcody
fd1727a443 Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-16 16:11:56 -05:00
webdevcody
597cb9bfae refactor: remove dev.mjs and integrate start-automaker.sh for development mode
- Deleted the dev.mjs script, consolidating development mode functionality into start-automaker.sh.
- Updated package.json to use start-automaker.sh for the "dev" script and added a "start" script for production mode.
- Enhanced start-automaker.sh with production build capabilities and improved argument parsing for better user experience.
- Removed launcher-utils.mjs as its functionality has been integrated into start-automaker.sh.
2026-01-16 16:11:53 -05:00
Kacper
c2430e5bd3 feat: enhance PTY handling for Windows in ClaudeUsageService and TerminalService
- Added detection for Electron environment to improve compatibility with Windows PTY processes.
- Implemented winpty fallback for ConPTY failures, ensuring robust terminal session creation in Electron and other contexts.
- Updated error handling to provide clearer messages for authentication and terminal access issues.
- Refined usage data detection logic to avoid false positives, improving the accuracy of usage reporting.

These changes aim to enhance the reliability and user experience of terminal interactions on Windows, particularly in Electron applications.
2026-01-16 21:53:53 +01:00
Shirone
68df8efd10 Merge pull request #522 from AutoMaker-Org/feature/v0.12.0rc-1768590871767-bl1c
feat: add filters to github issues view
2026-01-16 20:08:05 +00:00
Kacper
c0d64bc994 fix: adress pr comments 2026-01-16 21:05:58 +01:00
Kacper
6237f1a0fe feat: add filtering capabilities to GitHub issues view
- Implemented a comprehensive filtering system for GitHub issues, allowing users to filter by state, labels, assignees, and validation status.
- Introduced a new IssuesFilterControls component for managing filter options.
- Updated the GitHubIssuesView to utilize the new filtering logic, enhancing the user experience by providing clearer visibility into matching issues.
- Added hooks for filtering logic and state management, ensuring efficient updates and rendering of filtered issues.

These changes aim to improve the usability of the issues view by enabling users to easily navigate and manage their issues based on specific criteria.
2026-01-16 20:56:23 +01:00
Web Dev Cody
30c50d9b78 Merge pull request #513 from JZilla808/feature/tui-launcher
feat: add TUI launcher script for easy app startup
2026-01-16 14:51:36 -05:00
Web Dev Cody
03516ac09e Merge pull request #519 from WikiRik/WikiRik/audit-fix
chore: run npm audit fix
2026-01-16 14:43:27 -05:00
Shirone
5e5a136f1f Merge pull request #521 from AutoMaker-Org/feature/v0.12.0rc-1768591325146-pye6
fix: Signals not supported on windows.
2026-01-16 19:39:00 +00:00
Kacper
98c50d44a4 test: mock Unix platform for SIGTERM behavior in ClaudeUsageService tests
Added a mock for the Unix platform in the SIGTERM test case to ensure proper behavior during testing on non-Windows systems. This change enhances the reliability of the tests by simulating the expected environment for process termination.
2026-01-16 20:38:29 +01:00
Kacper
0e9369816f fix: unify PTY process termination handling across platforms
Refactored the process termination logic in both ClaudeUsageService and TerminalService to use a centralized method for killing PTY processes. This ensures consistent handling of process termination across Windows and Unix-like systems, improving reliability and maintainability of the code.
2026-01-16 20:34:12 +01:00
Kacper
be63a59e9c fix: improve process termination handling for Windows
Updated the process termination logic in ClaudeUsageService to handle Windows environments correctly. The code now checks the operating system and calls the appropriate kill method, ensuring consistent behavior across platforms.
2026-01-16 20:27:53 +01:00
Kacper
dbb84aba23 fix: ensure proper type handling for JSON parsing in loadBacklogPlan function
Updated the JSON parsing in the loadBacklogPlan function to explicitly cast the raw input as a string, improving type safety and preventing potential runtime errors when handling backlog plan data.
2026-01-16 20:09:01 +01:00
Shirone
9819d2e91c Merge pull request #514 from Seonfx/fix/510-spec-generation-json-fallback
fix: add JSON fallback for spec generation with custom API endpoints
2026-01-16 19:01:47 +00:00
Kacper
4c24ba5a8b feat: enhance TUI launcher with Docker/Electron process detection
- Add 4 launch options matching dev.mjs (Web, Electron, Docker Dev, Electron+Docker)
- Add arrow key navigation in menu with visual selection indicator
- Add cross-platform port conflict detection and resolution (Windows/Unix)
- Add Docker container detection with Stop/Restart/Attach/Cancel options
- Add Electron process detection when switching between modes
- Add centered, styled output for Docker build progress
- Add HUSKY=0 to docker-compose files to prevent permission errors
- Fix Windows/Git Bash compatibility (platform detection, netstat/taskkill)
- Fix bash arithmetic issue with set -e causing script to hang

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 19:58:32 +01:00
Rik Smale
e67cab1e07 chore: fix lockfile 2026-01-16 19:23:18 +01:00
Rik Smale
132b8f7529 chore: run npm audit fix 2026-01-16 19:18:16 +01:00
Seonfx
d651e9d8d6 fix: address PR review feedback for JSON fallback
- Simplify escapeXml() using 'str == null' check (type narrowing)
- Add validation for extracted JSON before passing to specToXml()
- Prevents runtime errors when JSON doesn't match SpecOutput schema

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-16 13:43:56 -04:00
webdevcody
92f14508aa chore: update environment variable documentation for Anthropic API key
- Changed comments in docker-compose files to clarify that the ANTHROPIC_API_KEY is optional.
- Updated README to reflect changes in authentication setup, emphasizing integration with Claude Code CLI and removing outdated API key instructions.
- Improved clarity on authentication methods and streamlined the setup process for users.
2026-01-16 11:23:45 -05:00
DhanushSantosh
842b059fac fix: remove invalid local keyword in main script body
The 'local' keyword can only be used inside functions. Line 423 had
'local timeout_count=0' in the main script body which caused a bash error.
Removed the unused variable declaration.

Fixes: bash error 'local: can only be used in a function'

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 20:44:17 +05:30
DhanushSantosh
49f9ecc168 feat: enhance TUI launcher with production-ready features and documentation
Major improvements to start-automaker.sh launcher script:

**Architecture & Code Quality:**
- Organized into logical sections with clear separators (8 sections)
- Extracted all magic numbers into named constants at top
- Added comprehensive comments throughout

**Functionality:**
- Dynamic version extraction from package.json (no manual updates)
- Pre-flight checks: validates Node.js, npm, tput installed
- Platform detection: warns on Windows/unsupported systems
- Terminal size validation: checks min 70x20, displays warning if too small
- Input timeout: 30-second auto-timeout for hands-free operation
- History tracking: remembers last selected mode in ~/.automaker_launcher_history

**User Experience:**
- Added --help flag with comprehensive usage documentation
- Added --version flag showing version, Node.js, Bash info
- Added --check-deps flag to verify project dependencies
- Added --no-colors flag for terminals without color support
- Added --no-history flag to disable history tracking
- Enhanced cleanup function: restores cursor + echo, better signal handling
- Better error messages with actionable remediation steps
- Improved exit experience: "Goodbye! See you soon." message

**Robustness:**
- Real initialization checks (validates node_modules, build artifacts)
- Spinner uses frame counting instead of infinite loop (max 1.6s)
- Proper signal trap handling (EXIT, INT, TERM)
- Error recovery: respects --no-colors in pre-flight checks

**File Management:**
- Renamed from "start automaker.sh" to "start-automaker.sh" for consistency
- Made script more portable with SCRIPT_DIR detection

**Documentation:**
- Added section to README.md: "Interactive TUI Launcher"
- Documented all launch modes and options with examples
- Added feature list, history file location, usage tips
- Updated table of contents with TUI launcher section

Fixes: #511 (CI test failures resolved)
Improvements: Better UX for new users, production-ready error handling

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 20:27:53 +05:30
DhanushSantosh
e02fd889c2 fix: add --force flag to npm install in format-check workflow
Ensures dmg-license can be installed on Linux CI runners even though it's
a darwin-only package. The --force flag allows npm to skip platform mismatches.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:51:23 +05:30
DhanushSantosh
52a821d3bb fix: add --force flag to npm install in CI to allow platform-specific devDependencies
dmg-license is a darwin-only package required for macOS DMG building. The CI runs on
Linux, so npm install fails when trying to install a platform-specific devDependency.

Using --force allows npm to skip platform mismatches instead of erroring out, allowing
the build to proceed on non-darwin platforms where the darwin-only dependency will simply
be skipped.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:43:09 +05:30
DhanushSantosh
becd79f1e3 fix: add missing dmg-license dependency to fix release builds
The release workflow was failing for all platforms because macOS DMG
builder requires dmg-license. This single dependency was preventing
AppImage, DEB, RPM, DMG, and EXE artifacts from being built and
uploaded to any release since v0.7.3.

Includes lockfile updates and conversion of git+ssh:// URLs to https://
to prevent SSH key requirement issues in CI/CD and across environments.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:38:10 +05:30
DhanushSantosh
883ad2a04b fix(backlog-plan): clear running details in generate-plan finally block
Ensure running details are cleared when generation completes or fails, preventing state leaks.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
bf93cdf0c4 fix(backlog-plan): clear running details when stopping generation
Add setRunningDetails(null) to stop handler to prevent state leaks when aborting operation.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
c0ea1c736a fix(backlog-plan): clear running details and handle plan cleanup safely
- Add setRunningDetails(null) in finally block of generate handler to prevent state leaks
- Move clearBacklogPlan before response in apply handler and wrap in try-catch to prevent errors after headers sent

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
8b448b9481 fix: address CodeRabbit security and validation issues in Fedora docs and backlog plan
Documentation improvements:
- Fix GitHub URL placeholder issues in install-fedora.md - GitHub /latest/download/ endpoint
  doesn't support version substitution, use explicit download URL pattern instead
- Improve security in network troubleshooting section:
  - Change ping target from claude.ai (marketing site) to api.anthropic.com (actual API)
  - Remove unsafe 'echo \$ANTHROPIC_API_KEY' command that exposes secrets in shell history
  - Use safe API key check with conditional output instead

Code improvements:
- apps/server/src/routes/backlog-plan/common.ts: Add Array.isArray() validation
  for stored plan shape before returning it. Ensures changes is actually an array,
  not just truthy, preventing downstream runtime errors.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
12f2b9f2b3 fix: remove invalid license property from RPM configuration
The 'license' property is not supported by electron-builder's RPM schema.
Valid RPM properties are: afterInstall, afterRemove, appArmorProfile,
artifactName, category, compression, depends, description, desktop,
executableArgs, fpm, icon, maintainer, mimeTypes, packageCategory,
packageName, publish, synopsis, vendor.

This fix allows electron-builder to proceed to the RPM build stage.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
017ff3ca0a fix: resolve TypeScript error in backlog plan loading
Fix type mismatch in loadBacklogPlan where secureFs.readFile with 'utf-8'
encoding returns union type string | Buffer, causing JSON.parse to fail type checking.
Cast raw to string to satisfy TypeScript strict mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
Seonfx
bcec178bbe fix: add JSON fallback for spec generation with custom API endpoints
Fixes spec generation failure when using custom API endpoints (e.g., GLM proxy)
that don't support structured output. The AI returns JSON instead of XML, but
the fallback parser only looked for XML tags.

Changes:
- escapeXml: Handle undefined/null values gracefully (converts to empty string)
- generate-spec: Add JSON extraction fallback when XML tags aren't found
  - Reuses existing extractJson() utility (already used for Cursor models)
  - Converts extracted JSON to XML using specToXml()

Closes #510

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-16 08:37:53 -04:00
Jay Zhou
e3347c7b9c feat: add TUI launcher script for easy app startup
Add a beautiful terminal user interface (TUI) script that provides an
interactive menu for launching Automaker in different modes:

- [1] Web Browser mode (localhost:3007)
- [2] Desktop App (Electron)
- [3] Desktop + Debug (Electron with DevTools)
- [Q] Exit

Features:
- ASCII art logo with gradient colors
- Centered, responsive layout that adapts to terminal size
- Animated spinner during launch sequence
- Cross-shell compatibility (bash/zsh)
- Clean exit handling with cursor restoration

This provides a more user-friendly alternative to remembering
npm commands, especially for new users getting started with
the project.
2026-01-16 03:34:47 -08:00
DhanushSantosh
6529446281 feat: add Fedora/RHEL RPM package support with comprehensive documentation
Add native RPM package building for Fedora-based distributions:
- Extend electron-builder configuration to include RPM target
- Add rpm-build installation to GitHub Actions CI/CD workflow
- Update artifact upload patterns to include .rpm files
- Declare proper RPM dependencies (gtk3, libnotify, nss, etc.)
- Use xz compression for optimal package size

Documentation:
- Update README.md with Fedora/RHEL installation instructions
- Create comprehensive docs/install-fedora.md guide covering:
  - Installation methods (dnf/yum, direct URL)
  - System requirements and capabilities
  - Configuration and troubleshooting
  - SELinux handling and firewall rules
  - Performance tips and security considerations
  - Building from source
- Support for Fedora 39+, RHEL 9+, Rocky Linux, AlmaLinux

End-to-end support enables Fedora users to install Automaker via:
  sudo dnf install ./Automaker-<version>-x86_64.rpm

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 12:31:30 +05:30
webdevcody
379551c40e feat: add JSON import/export functionality in settings view
- Introduced a new ImportExportDialog component for managing settings import and export via JSON.
- Integrated JsonSyntaxEditor for editing JSON settings with syntax highlighting.
- Updated SettingsView to include the import/export dialog and associated state management.
- Enhanced SettingsHeader with an import/export button for easy access.

These changes aim to improve user experience by allowing seamless transfer of settings between installations.
2026-01-16 00:34:59 -05:00
webdevcody
7465017600 feat: implement server logging and event hook features
- Introduced server log level configuration and HTTP request logging settings, allowing users to control the verbosity of server logs and enable or disable request logging at runtime.
- Added an Event Hook Service to execute custom actions based on system events, supporting shell commands and HTTP webhooks.
- Enhanced the UI with new sections for managing server logging preferences and event hooks, including a dialog for creating and editing hooks.
- Updated global settings to include server log level and request logging options, ensuring persistence across sessions.

These changes aim to improve debugging capabilities and provide users with customizable event-driven actions within the application.
2026-01-16 00:21:49 -05:00
webdevcody
874c5a36de feat: enable auto loading of ClaudeMd by default
- Updated the default setting for autoLoadClaudeMd from false to true in the global settings. This change aims to enhance user experience by automatically loading ClaudeMd, streamlining the workflow for users.
2026-01-15 23:54:08 -05:00
webdevcody
03436103d1 feat: implement backlog plan management and UI enhancements
- Added functionality to save, clear, and load backlog plans within the application.
- Introduced a new API endpoint for clearing saved backlog plans.
- Enhanced the backlog plan dialog to allow users to review and apply changes to their features.
- Integrated dependency management features in the UI, allowing users to select parent and child dependencies for features.
- Improved the graph view with options to manage plans and visualize dependencies effectively.
- Updated the sidebar and settings to include provider visibility toggles for better user control over model selection.

These changes aim to enhance the user experience by providing robust backlog management capabilities and improving the overall UI for feature planning.
2026-01-15 22:21:46 -05:00
Shirone
cb544e0011 Merge pull request #505 from AutoMaker-Org/feature/v0.12.0rc-1768509532254-tt6z
fix: "Remove Project" button not working on right click of the project
2026-01-15 22:05:24 +00:00
Shirone
df23c9e6ab Merge pull request #507 from AutoMaker-Org/feat/improve-ideation-view
feat: improve ideation view
2026-01-15 22:03:50 +00:00
Shirone
52cc82fb3f feat: enhance ideation dashboard and prompt components
- Added a helper function to map priority levels to badge variants in the IdeationDashboard.
- Improved UI elements in SuggestionCard for better spacing and visual hierarchy.
- Updated PromptCategoryGrid and PromptList components with enhanced hover effects and layout adjustments for a more responsive design.
- Refined button styles and interactions for better user experience across components.

These changes aim to improve the overall usability and aesthetics of the ideation view.
2026-01-15 23:01:12 +01:00
Shirone
d9571bfb8d Merge pull request #506 from AutoMaker-Org/feature/v0.12.0rc-1768509904121-pjft
feat: add discard all functionality to ideation view
2026-01-15 21:39:54 +00:00
Shirone
07d800b589 feat: add discard all functionality to ideation view
- Introduced a new button in the IdeationHeader for discarding all ideas when in dashboard mode.
- Implemented state management for discard readiness and count in IdeationView.
- Added confirmation dialog for discarding ideas in IdeationDashboard.
- Enhanced bulk action readiness checks to include discard operations.

This update improves user experience by allowing bulk discarding of ideas with confirmation, ensuring actions are intentional.
2026-01-15 22:37:26 +01:00
Shirone
ec042de69c fix: streamline context menu behavior for project removal dialog
- Ensure the context menu closes consistently after the confirmation dialog, regardless of user action.
- Reset confirmation state upon dialog closure to prevent unintended interactions.
2026-01-15 22:20:30 +01:00
Shirone
585ae32c32 fix: Prevent race condition in project removal dialog cleanup 2026-01-15 22:15:16 +01:00
Shirone
a89ba04109 fix: project removal not being executed
- Prevent context menu from closing when a confirmation dialog is open.
- Add success toast notification upon project removal.
- Refactor event handlers to account for dialog state, improving user experience.
2026-01-15 22:06:35 +01:00
Shirone
05a3b95d75 Merge pull request #501 from AutoMaker-Org/feature/v0.11.0rc-1768426435282-1ogl
feat: centralize prompts and add customization UI for App Spec, Context, Suggestions, Tasks
2026-01-15 20:20:56 +00:00
Shirone
0e269ca15d fix: update outdated server unit tests
- auto-mode-service-planning.test.ts: Add taskExecutionPrompts argument
  to buildFeaturePrompt calls, update test for implementation instructions
- claude-usage-service.test.ts: Skip deprecated Mac tests (service now
  uses PTY for all platforms), rename Windows tests to PTY tests, update
  to use process.cwd() instead of home directory
- claude-provider.test.ts: Add missing model parameter to environment
  variable passthrough tests

All tests now pass (1093 passed, 23 skipped).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:16:46 +01:00
Shirone
fd03cb4afa refactor: split prompt customization into multiple files
Split prompt-customization-section.tsx into focused modules:
- types.ts (51 lines) - Type definitions
- tab-configs.ts (448 lines) - Configuration data for all tabs
- components.tsx (159 lines) - Reusable Banner, PromptField, PromptFieldList
- prompt-customization-section.tsx (176 lines) - Main component

Benefits:
- Main component reduced from ~810 to 176 lines
- Clear separation of concerns
- Easier to find and modify specific parts
- Configuration data isolated for easy updates

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:07:38 +01:00
Shirone
d6c5c93fe5 refactor: use data-driven configuration for prompt customization UI
- Replace repetitive JSX with TAB_CONFIGS array defining all tabs and fields
- Create reusable Banner component for info/warning banners
- Create PromptFieldList component for rendering fields from config
- Support nested sections (like Auto Mode's Template Prompts section)
- Reduce file from ~950 lines to ~810 lines (-15% code)

Benefits:
- Adding new prompt tabs/fields is now declarative (just add to config)
- Consistent structure enforced by TypeScript interfaces
- Much easier to maintain and extend

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:03:51 +01:00
Shirone
1abf219230 refactor: create reusable PromptTabContent component and add {{count}} placeholder
- Create PromptTabContent reusable component in prompt-customization-section.tsx
- Update all tabs (Agent, Commit Message, Title Generation, Ideation, App Spec,
  Context Description, Suggestions, Task Execution) to use the new component
- Add {{count}} placeholder to DEFAULT_SUGGESTIONS_SYSTEM_PROMPT for dynamic
  suggestion count

Addresses PR review comments from Gemini.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:00:32 +01:00
Shirone
3a2ba6dbfe feat: connect Task Execution prompts to auto-mode-service
Update auto-mode-service.ts to use centralized Task Execution prompts
from settings, making all 9 task execution prompts customizable via UI:

- buildFeaturePrompt: uses implementationInstructions and
  playwrightVerificationInstructions from settings
- buildTaskPrompt: uses taskPromptTemplate with variable substitution
- buildPipelineStepPrompt: updated to pass prompts through
- executeFeatureWithContext: uses resumeFeatureTemplate
- resolvePlanApproval recovery: uses continuationAfterApprovalTemplate
- Multi-agent continuation: uses continuationAfterApprovalTemplate
- recordLearningsFromFeature: uses learningExtractionSystemPrompt
  and learningExtractionUserPromptTemplate

All 12 prompt categories are now fully customizable from the UI.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:54:26 +01:00
Shirone
8fa8ba0a16 fix: address PR comments and complete prompt centralization
- Fix inline type imports in defaults.ts (move to top-level imports)
- Update ideation-service.ts to use centralized prompts from settings
- Update generate-title.ts to use centralized prompts
- Update validate-issue.ts to use centralized prompts
- Clean up validation-schema.ts (prompts already centralized)
- Minor server index cleanup

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:31:19 +01:00
Shirone
285f526e0c feat: centralize prompts and add customization UI for App Spec, Context, Suggestions, Tasks
- Add 4 new prompt type interfaces (AppSpecPrompts, ContextDescriptionPrompts,
  SuggestionsPrompts, TaskExecutionPrompts) with resolved types
- Add default prompts for all new categories to @automaker/prompts/defaults.ts
- Add merge functions for new prompt categories in merge.ts
- Update settings-helpers.ts getPromptCustomization() to return all 12 categories
- Update server routes (generate-spec, generate-features-from-spec, describe-file,
  describe-image, generate-suggestions) to use centralized prompts
- Add 4 new tabs in prompt customization UI (App Spec, Context, Suggestions, Tasks)
- Fix Ideation tab layout using grid-cols-4 for even distribution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:13:14 +01:00
webdevcody
bd68b497ac Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-15 13:14:19 -05:00
webdevcody
06b047cfcb feat: implement bulk feature verification and enhance selection mode
- Added functionality for bulk verifying features in the BoardView, allowing users to mark multiple features as verified at once.
- Introduced a selection target mechanism to differentiate between 'backlog' and 'waiting_approval' features during selection mode.
- Updated the KanbanCard and SelectionActionBar components to support the new selection target logic, improving user experience for bulk actions.
- Enhanced the UI to provide appropriate actions based on the current selection target, including verification options for waiting approval features.
2026-01-15 13:14:15 -05:00
Shirone
361cb06bf0 fix(ui): improve React Query hooks and fix edge cases
- Update query keys to include all relevant parameters (branches, agents)
- Fix use-branches to pass includeRemote parameter to query key
- Fix use-settings to include sources in agents query key
- Update running-agents-view to use correct query key structure
- Update use-spec-loading to properly use spec query hooks
- Add missing queryClient invalidation in auto-mode mutations
- Add missing cache invalidation in spec mutations after creation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 19:11:25 +01:00
Shirone
3170e22383 fix(ui): add missing cache invalidation for React Query
- Add cache invalidation to useBoardPersistence after create/update/delete
- Add useAutoModeQueryInvalidation to board-view for WebSocket events
- Add cache invalidation to github-issues-view after converting issue to task
- Add cache invalidation to analysis-view after generating features
- Fix UI not updating when features are added, updated, or completed

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 19:10:35 +01:00
DhanushSantosh
c585cee12f feat: add dynamic usage status icon and tab-aware updates to usage button
- Add provider icon (Anthropic/OpenAI) that displays based on active tab
- Icon color reflects usage status (green/orange/red)
- Progress bar and stale indicator update dynamically when switching tabs
- Shows Claude metrics when Claude tab is active, Codex metrics when Codex tab is active

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-15 21:05:35 +05:30
Shirone
9dbec7281a fix: package lock file 2026-01-15 16:29:12 +01:00
Shirone
c2fed78733 refactor(ui): migrate remaining components to React Query
- Migrate workspace-picker-modal to useWorkspaceDirectories query
- Migrate session-manager to useSessions query
- Migrate git-diff-panel to useGitDiffs query
- Migrate prompt-list to useIdeationPrompts query
- Migrate spec-view hooks to useSpecFile query and spec mutations
- Migrate use-board-background-settings to useProjectSettings query
- Migrate use-guided-prompts to useIdeationPrompts query
- Migrate use-project-settings-loader to React Query
- Complete React Query migration across all components

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:22:39 +01:00
Shirone
5fe7bcd378 refactor(ui): migrate usage popovers and running agents to React Query
- Migrate claude-usage-popover to useClaudeUsage query with polling
- Migrate codex-usage-popover to useCodexUsage query with polling
- Migrate usage-popover to React Query hooks
- Migrate running-agents-view to useRunningAgents query
- Replace manual polling intervals with refetchInterval
- Remove manual loading/error state management

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:22:17 +01:00
Shirone
20caa424fc refactor(ui): migrate settings view to React Query
- Migrate use-cursor-permissions to query and mutation hooks
- Migrate use-cursor-status to React Query
- Migrate use-skills-settings to useUpdateGlobalSettings mutation
- Migrate use-subagents-settings to mutation hooks
- Migrate use-subagents to useDiscoveredAgents query
- Migrate opencode-settings-tab to React Query hooks
- Migrate worktrees-section to query hooks
- Migrate codex/claude usage sections to query hooks
- Remove manual useState for loading/error states

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:22:04 +01:00
Shirone
c4e0a7cc96 refactor(ui): migrate GitHub views to React Query
- Migrate use-github-issues to useGitHubIssues query
- Migrate use-issue-comments to useGitHubIssueComments infinite query
- Migrate use-issue-validation to useGitHubValidations with mutations
- Migrate github-prs-view to useGitHubPRs query
- Support pagination for comments with useInfiniteQuery
- Remove manual loading state management

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:21:49 +01:00
Shirone
d1219a225c refactor(ui): migrate worktree panel to React Query
- Migrate use-worktrees to useWorktrees query hook
- Migrate use-branches to useWorktreeBranches query hook
- Migrate use-available-editors to useAvailableEditors query hook
- Migrate use-worktree-actions to use mutation hooks
- Update worktree-panel component to use query data
- Remove manual state management for loading/errors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:21:36 +01:00
Shirone
3411256366 refactor(ui): migrate board view to React Query
- Replace manual fetching in use-board-features with useFeatures query
- Migrate use-board-actions to use mutation hooks
- Update kanban-card and agent-info-panel to use query hooks
- Migrate agent-output-modal to useAgentOutput query
- Migrate create-pr-dialog to useCreatePR mutation
- Remove manual loading/error state management
- Add proper cache invalidation on mutations

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:21:23 +01:00
Shirone
d08ef472a3 feat(ui): add shared skeleton component and update CLI status
- Add reusable SkeletonPulse component to replace 4 duplicate definitions
- Update CLI status components to use shared skeleton
- Simplify CLI status components by using React Query hooks

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:21:08 +01:00
Shirone
d81997d24b feat(ui): add WebSocket event to React Query cache bridge
- Add useAutoModeQueryInvalidation for feature/agent events
- Add useSpecRegenerationQueryInvalidation for spec updates
- Add useGitHubValidationQueryInvalidation for PR validation events
- Bridge WebSocket events to cache invalidation for real-time updates

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:20:53 +01:00
Shirone
845674128e feat(ui): add React Query mutation hooks
- Add feature mutations (create, update, delete with optimistic updates)
- Add auto-mode mutations (start, stop, approve plan)
- Add worktree mutations (create, delete, checkout, switch branch)
- Add settings mutations (update global/project, validate API keys)
- Add GitHub mutations (create PR, validate PR)
- Add cursor permissions mutations (apply profile, copy config)
- Add spec mutations (generate, update, save)
- Add pipeline mutations (toggle, update config)
- Add session mutations with cache invalidation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:20:38 +01:00
Shirone
2bc931a8b0 feat(ui): add React Query hooks for data fetching
- Add useFeatures, useFeature, useAgentOutput for feature data
- Add useGitHubIssues, useGitHubPRs, useGitHubValidations, useGitHubIssueComments
- Add useClaudeUsage, useCodexUsage with polling intervals
- Add useRunningAgents, useRunningAgentsCount
- Add useWorktrees, useWorktreeInfo, useWorktreeStatus, useWorktreeDiffs
- Add useGlobalSettings, useProjectSettings, useCredentials
- Add useAvailableModels, useCodexModels, useOpencodeModels
- Add useSessions, useSessionHistory, useSessionQueue
- Add useIdeationPrompts, useIdeas
- Add CLI status queries (claude, cursor, codex, opencode, github)
- Add useCursorPermissionsQuery, useWorkspaceDirectories
- Add usePipelineConfig, useSpecFile, useSpecRegenerationStatus

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:20:24 +01:00
Shirone
e57549c06e feat(ui): add React Query foundation and provider setup
- Install @tanstack/react-query and @tanstack/react-query-devtools
- Add QueryClient with default stale times and retry config
- Create query-keys.ts factory for consistent cache key management
- Wrap app root with QueryClientProvider and DevTools

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 16:20:08 +01:00
DhanushSantosh
241fd0b252 Merge remote-tracking branch 'upstream/v0.12.0rc' into patchcraft 2026-01-15 19:38:37 +05:30
DhanushSantosh
164acc1b4e chore: update package-lock.json 2026-01-15 19:38:18 +05:30
webdevcody
78e5ddb4a8 chore: release v0.11.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 20:34:37 -05:00
Web Dev Cody
43904cdb02 Merge pull request #475 from AutoMaker-Org/v0.11.0rc
V0.11.0rc
2026-01-14 20:31:11 -05:00
Shirone
7ea1383e10 Merge pull request #492 from AutoMaker-Org/feature/v0.11.0rc-1768413895104-31pa
feat: merge worktree to main in dropdown menu
2026-01-14 22:10:02 +00:00
Shirone
425e38811f Merge pull request #493 from AutoMaker-Org/feature/v0.11.0rc-1768413909856-a0al
fix: agent output modal ui/ux and task list with spec/full plan mode
2026-01-14 21:49:47 +00:00
Shirone
f6bda66ed4 feat: enhance agent info panel with real-time task status updates and fresh planSpec integration
- Added support for real-time task status updates using WebSocket events, allowing the Kanban card to reflect current task progress accurately.
- Introduced a new state for fresh planSpec data fetched from the API to ensure the agent info panel displays up-to-date task information.
- Updated the effectiveTodos calculation to prioritize fresh planSpec data and incorporate real-time status, improving task display accuracy.
- Enhanced the logic to listen for relevant WebSocket events and update task statuses accordingly, ensuring synchronization with the agent output modal.
2026-01-14 22:39:47 +01:00
Web Dev Cody
0df7e4a33d Merge pull request #490 from thesobercoder/fix/openrouter-models-kanban
fix: load OpenCode models on Kanban
2026-01-14 16:12:45 -05:00
Web Dev Cody
41ad717b8e Merge pull request #494 from mcbodge/feature/add-copilot-support
Fix OpenCode GitHub Copilot authentication detection
2026-01-14 16:11:38 -05:00
Manuel Grillo
fec5f88d91 feat: add GitHub Copilot support for OAuth token validation 2026-01-14 21:44:33 +01:00
Shirone
724858d215 fix: adjust more 2026-01-14 21:12:48 +01:00
Shirone
2b93afbd43 Changes from feature/v0.11.0rc-1768413909856-a0al 2026-01-14 21:03:54 +01:00
Shirone
ca0f3ecedf fix: adjust task progress panel height and improve effective todos handling in agent info panel
- Reduced the maximum height of the task progress panel from 300px to 200px for better UI consistency.
- Introduced a new `effectiveTodos` calculation in the agent info panel to correctly display tasks from `planSpec` when available, ensuring accurate task counts and statuses.
- Updated references to use `effectiveTodos` instead of the original `agentInfo.todos` for task display logic in the agent info panel.
- Adjusted the height of various modal components to align with the new task progress panel height.
2026-01-14 21:02:24 +01:00
Shirone
ee0d0c6c59 fix: merge worktree handler now uses correct branch name and path
The merge handler previously hardcoded branch names as `feature/${featureId}`
and worktree paths as `.worktrees/${featureId}`, which failed for auto-generated
branches (e.g., `feature/v0.11.0rc-1768413895104-31pa`) and custom worktrees.

Changes:
- Server handler now accepts branchName and worktreePath directly from the UI
- Added branch existence validation before attempting merge
- Updated merge dialog with 2-step confirmation (type "merge" to confirm)
- Removed feature branch naming restriction - any branch can now be merged
- Updated API types and client to pass correct parameters

Closes #408

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 20:49:17 +01:00
Soham Dasgupta
ac38e85f3c Merge branch 'v0.11.0rc' into fix/openrouter-models-kanban 2026-01-15 00:01:00 +05:30
Shirone
ca3286a374 Merge pull request #489 from AutoMaker-Org/feature/v0.11.0rc-1768410827235-36uf
feat: enhance pr dialog base branch selection
2026-01-14 17:39:27 +00:00
Shirone
0898578c11 fix: Include remote branches in PR base selection even when local branch exists
The branch listing logic now correctly shows remote branches (e.g., "origin/main") even if a local branch with the same base name exists, since users need remote branches as PR base targets. Also extracted duplicate state reset logic in create-pr-dialog into a reusable function.
2026-01-14 18:36:14 +01:00
Shirone
07593f8704 feat: enhance list-branches endpoint to support fetching remote branches
- Updated the list-branches endpoint to accept an optional parameter for including remote branches.
- Implemented logic to fetch and deduplicate remote branches alongside local branches.
- Modified the CreatePRDialog component to utilize the updated API for branch selection, allowing users to select from both local and remote branches.
2026-01-14 18:25:31 +01:00
Shirone
3f8a8db7a5 Merge pull request #486 from AutoMaker-Org/fix/claude-usage-parsing
fix: Claude usage parsing for CLI v2.x and trust prompt handling
2026-01-14 16:56:26 +00:00
Shirone
13eead3855 fix: use process.cwd() consistently across all platforms
Address PR review comment - use process.cwd() for Windows too instead of
USERPROFILE/homedir fallback for consistency.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 17:53:13 +01:00
Shirone
cb910feae9 fix: Claude usage parsing for CLI v2.x and trust prompt handling
- Use node-pty on all platforms instead of expect on macOS for more reliable PTY handling
- Use process.cwd() as working directory (project dir is likely already trusted)
- Add detection for new trust prompt text variants ("Ready to code here", "permission to work")
- Add specific error handling for trust prompt pending state
- Show helpful UI message when trust prompt needs manual approval

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 17:48:44 +01:00
Shirone
c75f9a29cb Merge pull request #485 from AutoMaker-Org/feature/v0.11.0rc-1768406316676-kza4
feat: add feature request github template
2026-01-14 16:18:44 +00:00
Shirone
3c5e453b01 fix: adress pr comments 2026-01-14 17:13:41 +01:00
Shirone
63e0ffac42 Merge pull request #484 from AutoMaker-Org/feature/v0.11.0rc-1768405788678-28bn
fix: adress pr comments task not respecting worktrees
2026-01-14 16:10:15 +00:00
Shirone
d0155f28c8 feat: add feature request github template 2026-01-14 17:07:01 +01:00
Shirone
27ca08d98a fix: Set workMode to custom for PR and conflict flows 2026-01-14 17:02:52 +01:00
Shirone
df99950475 Merge pull request #481 from AutoMaker-Org/feature/v0.11.0rc-1768383713091-hnir
feat(ui): Add project theme selection to context menu
2026-01-14 15:28:18 +00:00
Web Dev Cody
6a85073d94 Merge pull request #339 from ramarivera/feat/custom-anthropic-endpoint
feat: support ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN for custom endpoints
2026-01-14 10:09:56 -05:00
Shirone
7b73ff34f1 fix: adress pr comments 2026-01-14 15:43:42 +01:00
Shirone
8419b12f3f feat(ui): Add project theme selection to context menu with clean code refactoring
Implement per-project theme override capability in the Discord-like layout:
- Add theme submenu to project context menu with live preview
- Reuse existing theme constants and useThemePreview hook from sidebar
- Extract reusable ThemeButton and ThemeColumn components (DRY principle)
- Replace magic z-index values with named constants

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:33:51 +01:00
Soham Dasgupta
f1a5bcd17a fix: load OpenRouter models on kanban board without visiting settings
Problem:
- OpenRouter dynamic models only appeared after visiting settings page
- PhaseModelSelector (used in Add/Edit Feature dialogs) only fetched Codex models
- dynamicOpencodeModels remained empty until OpencodeSettingsTab mounted

Solution:
- Add fetchOpencodeModels() action to app-store mirroring fetchCodexModels pattern
- Add state tracking: opencodeModelsLoading, opencodeModelsError, timestamps
- Call fetchOpencodeModels() in PhaseModelSelector useEffect on mount
- Use same caching strategy: 5min success cache, 30sec failure cooldown

Files changed:
- apps/ui/src/store/app-store.ts
  - Add OpenCode model loading state properties
  - Add fetchOpencodeModels action with error handling & caching
- apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx
  - Add opencodeModelsLoading, fetchOpencodeModels to store hook
  - Add useEffect to fetch OpenCode models on mount

Result:
- OpenRouter models now appear in Add/Edit Feature dialogs immediately
- No need to visit settings page first
- Consistent with Codex model loading behavior
2026-01-14 19:46:43 +05:30
Web Dev Cody
28d8a4cc9e Merge pull request #473 from thesobercoder/fix/npm-cache-permissions
fix: ensure npm cache directory has correct permissions
2026-01-14 09:16:30 -05:00
Web Dev Cody
7108cdd2ca Merge pull request #480 from thesobercoder/fix/cli-provider-system-prompt
fix: embed systemPrompt into prompt for CLI-based providers
2026-01-14 09:16:01 -05:00
Soham Dasgupta
e7bfb19203 fix: embed systemPrompt into prompt for CLI-based providers
CLI-based providers (OpenCode, etc.) only accept a single prompt via
stdin/args and don't support separate system/user message channels like
Claude SDK. When systemPrompt is passed to these providers, it was
silently dropped, causing:

- BacklogPlan JSON parsing failures with OpenCode/GPT-5.2 (missing
  "output ONLY JSON" formatting instruction)
- Loss of critical formatting/schema instructions for structured outputs

This fix adds embedSystemPromptIntoPrompt() method to CliProvider base
class that:
- Prepends systemPrompt to the user prompt before CLI execution
- Handles both string and array prompts (vision support)
- Handles both string systemPrompt and SystemPromptPreset objects
- Uses standard \n\n---\n\n separator (consistent with codebase)
- Sets systemPrompt to undefined to prevent double-injection

Benefits OpencodeProvider immediately (uses base executeQuery).
CursorProvider still uses manual workarounds (overrides executeQuery).

Fixes the immediate BacklogPlan + OpenCode bug while maintaining
backward compatibility with existing Cursor workarounds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 19:20:47 +05:30
Shirone
beac823472 Merge pull request #479 from AutoMaker-Org/fix/followup-run
fix: follow up prompts
2026-01-14 08:43:39 +00:00
Shirone
c7fac3d9e6 refactor: Replace worktreePath param with useWorktrees flag 2026-01-14 09:29:38 +01:00
Shirone
3689eb969d Merge pull request #478 from thesobercoder/chore/playwright-docker-prereqs
chore(docker): add Playwright Chromium prerequisites
2026-01-14 07:30:24 +00:00
Soham Dasgupta
5e330b7691 chore(docker): add Playwright Chromium deps
- Add missing system libraries required by Playwright/Chromium in server and dev images\n- Document optional Playwright browser cache volume in docker-compose.override.yml.example
2026-01-14 12:47:46 +05:30
webdevcody
5ec5fe82e6 refactor: Enhance project management features and UI components
- Updated create-pr.ts to improve commit error handling and logging.
- Enhanced project-switcher.tsx with new folder opening functionality and state management for project setup.
- Expanded icon-picker.tsx to include a comprehensive list of icons organized by category.
- Replaced dialog components with popover components for auto mode and plan settings, improving UI responsiveness.
- Refactored board-view components to streamline feature management and enhance user experience.
- Removed outdated dialog components and replaced them with popover alternatives for better accessibility.

These changes aim to improve the overall usability and functionality of the project management interface.
2026-01-13 22:35:45 -05:00
Shirone
ee13bf9a8f Merge pull request #476 from DenyCZ/fix/windows-npx
fix: Resolve windows npx spawn errors
2026-01-14 00:00:32 +00:00
Shirone
219af28afc Merge pull request #477 from AutoMaker-Org/fix/dynamic-branch-references
fix: use dynamic branch references instead of hardcoded origin/main
2026-01-13 23:59:49 +00:00
Shirone
b64025b134 fix: adress pr comments 2026-01-14 00:57:30 +01:00
Shirone
51e4e8489a fix: use dynamic branch references instead of hardcoded origin/main
- Fix handleResolveConflicts to use origin/${worktree.branch} instead of
  hardcoded origin/main for pull and resolve conflicts
- Add defaultBaseBranch prop to CreatePRDialog to use selected branch
- Fix branchCardCounts to use primary worktree branch as default
- Enable PR status and Address PR Comments for main branch tab
- Add automatic PR detection from GitHub for branches without stored metadata

This allows users working on release branches (like v0.11.0rc) to properly
pull from their branch's remote and see PR status for any branch.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 00:48:09 +01:00
DenyCZ
bb70d04b88 fix: Resolve windows npx spawn errors 2026-01-14 00:31:04 +01:00
Shirone
32f6c6d6eb Merge pull request #474 from AutoMaker-Org/feat/dev-server-log-panel
feat: add dev server log panel with real-time streaming
2026-01-13 21:04:58 +00:00
Shirone
b6688e630e Merge branch 'v0.11.0rc' into feat/dev-server-log-panel
Resolved conflict in worktree-panel.tsx by combining imports:
- DevServerLogsPanel from this branch
- WorktreeMobileDropdown, WorktreeActionsDropdown, BranchSwitchDropdown from v0.11.0rc

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 22:01:44 +01:00
Shirone
073f6d5793 feat: add dev server log panel with real-time streaming
Add the ability to view dev server logs in a dedicated panel with:
- Real-time log streaming via WebSocket events
- ANSI color support using xterm.js
- Scrollback buffer (50KB) for log history on reconnect
- Output throttling to prevent UI flooding
- "View Logs" option in worktree dropdown menu

Server changes:
- Add scrollback buffer and event emission to DevServerService
- Add GET /api/worktree/dev-server-logs endpoint
- Add dev-server:started, dev-server:output, dev-server:stopped events

UI changes:
- Add reusable XtermLogViewer component
- Add DevServerLogsPanel dialog component
- Add useDevServerLogs hook for WebSocket subscription

Closes #462

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:56:35 +01:00
Shirone
9153b06f09 Merge pull request #449 from AutoMaker-Org/feat/mobile-improvements-contributor
feat: Mobile responsiveness improvements from community contributor
2026-01-13 20:48:47 +00:00
Shirone
6cb2af8757 test: update claude-usage-service tests for improved error handling and timeout management
- Modified command arguments in tests to include '--add-dir' for better context.
- Updated error messages for authentication and timeout scenarios to provide clearer guidance.
- Adjusted timer values in tests to align with implementation delays, ensuring accurate simulation of usage data retrieval.
2026-01-13 21:37:16 +01:00
Shirone
ca3b013a7b Merge v0.11.0rc into feat/mobile-improvements-contributor
Resolves merge conflicts by keeping both features:
- enableAiCommitMessages (from our branch)
- defaultFeatureModel (from v0.11.0rc)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:31:44 +01:00
Web Dev Cody
abde1ba40a Merge pull request #472 from AutoMaker-Org/claude/issue-469-20260113-1744
feat: Add Discord-like project switcher sidebar with icon support
2026-01-13 14:59:32 -05:00
webdevcody
b04659fb56 Merge branch 'v0.11.0rc' into claude/issue-469-20260113-1744 2026-01-13 14:59:14 -05:00
webdevcody
74ee30d5db refactor: improve code readability and maintainability in SDK options and icon picker
- Reformatted the fullAccess and chat tool presets in sdk-options.ts for better readability.
- Simplified the return statement in icon-picker.tsx for cleaner code.
- Removed the board-background-persistence.spec.ts test file as it is no longer needed.
2026-01-13 14:56:44 -05:00
webdevcody
a300466ca9 feat: enhance project management with custom icon support and UI improvements
- Introduced custom icon functionality for projects, allowing users to upload and manage their own icons.
- Updated Project and ProjectRef types to include customIconPath.
- Enhanced the ProjectSwitcher component to display custom icons alongside preset icons.
- Added EditProjectDialog for inline editing of project details, including icon uploads.
- Improved AppearanceSection to support custom icon uploads and display.
- Updated sidebar and project switcher UI for better user experience and accessibility.

Implements #469
2026-01-13 14:39:19 -05:00
DhanushSantosh
9311f2e62a Merge remote-tracking branch 'upstream/v0.11.0rc' into patchcraft 2026-01-14 00:55:46 +05:30
DhanushSantosh
67245158ea chore: ignore .codex directory 2026-01-14 00:55:22 +05:30
DhanushSantosh
520d9a945c chore: add workflow files to git tracking 2026-01-14 00:54:46 +05:30
DhanushSantosh
fa3ead0e8d feat(auto-mode): skip memory extraction when Claude not configured and add reasoning effort support
- Skip learning extraction when ANTHROPIC_API_KEY is not available
- Add reasoningEffort parameter to simpleQuery for Codex model configuration
- Add stdinData support to spawnProcess for CLI stdin input
- Update UI API types for model override with reasoning support
2026-01-14 00:50:33 +05:30
DhanushSantosh
253ab94646 feat(github): add Codex/OpenCode model support for issue validation
- Support Codex and OpenCode models in issue validation
- Add reasoningEffort parameter for Codex model configuration
- Update validation logic to use structured output for Claude/Codex
- Update UI hooks and types for multi-provider model selection
2026-01-14 00:50:02 +05:30
DhanushSantosh
fbb3f697e1 feat(settings): add OpenAI/Google API key support and unified ModelId type
- Add OpenAI API key storage to store-api-key handler
- Include Google/OpenAI key status in credentials API responses
- Add unified ModelId type for Claude, Codex, Cursor, OpenCode, and dynamic providers
- Update PhaseModelEntry to support all provider model types
2026-01-14 00:49:35 +05:30
Soham Dasgupta
1a1517dffb fix: ensure npm cache directory has correct permissions
Fix EACCES permission error when running npx commands (e.g., MCP servers)
inside the Docker container.

Error that was occurring:
  npm error code EACCES
  npm error syscall mkdir
  npm error path /home/automaker/.npm/_cacache/index-v5/1f/fc
  npm error errno EACCES
  npm error Your cache folder contains root-owned files, due to a bug in
  npm error previous versions of npm which has since been addressed.

The fix ensures the /home/automaker/.npm directory exists and has correct
ownership before switching to the automaker user in the entrypoint script.
2026-01-14 00:49:28 +05:30
DhanushSantosh
690cf1f281 fix(codex-provider): use SDK mode when API key is present to avoid OAuth failures
When an OpenAI API key is stored in settings or environment, use SDK mode
instead of CLI mode. This bypasses the MCP transport layer which was
failing with 'TokenRefreshFailed' errors due to OAuth token issues.

The SDK uses the API key directly via @openai/codex-sdk, avoiding the
OAuth token refresh mechanism that was causing mid-execution failures.
2026-01-14 00:45:01 +05:30
Shirone
6f55da46ac Merge pull request #471 from AutoMaker-Org/fix/dev-server-url
fix: use browser hostname for dev server URLs instead of localhost
2026-01-13 18:46:17 +00:00
webdevcody
57453966ac Merge branch 'v0.11.0rc' of github.com:AutoMaker-Org/automaker into v0.11.0rc 2026-01-13 13:45:35 -05:00
webdevcody
298acc9f89 style: add overflow-y-auto class to dialog content containers
- Updated the BacklogPlanDialog, AddEditServerDialog, CreateSpecDialog, and RegenerateSpecDialog components to include the overflow-y-auto class for improved scrolling behavior in dialog content.
2026-01-13 13:45:29 -05:00
Shirone
f4390bc82f security: add noopener,noreferrer to window.open calls
Add 'noopener,noreferrer' parameter to all window.open() calls with
target='_blank' to prevent tabnabbing attacks. This prevents the newly
opened page from accessing window.opener, protecting against potential
security vulnerabilities.

Affected files:
- use-dev-servers.ts: Dev server URL links
- worktree-actions-dropdown.tsx: PR URL links
- create-pr-dialog.tsx: PR creation and browser fallback links

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 19:43:20 +01:00
Shirone
62af2031f6 feat: enhance dev server URL handling and improve accessibility
- Added URL and URLSearchParams as readonly globals in ESLint configuration.
- Updated WorktreeActionsDropdown and WorktreeTab components to include aria-labels for better accessibility.
- Implemented error handling for dev server URL opening, ensuring only valid HTTP/HTTPS protocols are used and providing user feedback for errors.

These changes improve user experience and accessibility when interacting with the dev server functionality.
2026-01-13 19:33:09 +01:00
claude[bot]
0ddd672e0e feat: Add Discord-like project switcher sidebar with icon support
- Add project icon field to ProjectRef and Project types
- Create vertical project switcher sidebar component
  - Project icons with hover tooltips
  - Active project highlighting
  - Plus button to create new projects
  - Right-click context menu for edit/delete
- Add IconPicker component with 35+ Lucide icons
- Add EditProjectDialog for inline project editing
- Update settings appearance section with project details editor
- Add setProjectIcon and setProjectName actions to app store
- Integrate ProjectSwitcher in root layout (shows on app pages only)

Implements #469

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-13 17:53:15 +00:00
Shirone
7ef525effa fix: clarify comments on branch name handling in BoardView
Updated comments in BoardView to better explain the behavior of the 'current' work mode. The changes specify that an empty string clears the branch assignment, allowing work to proceed on the main/current branch. This enhances code readability and understanding of branch management logic.
2026-01-13 18:51:20 +01:00
Shirone
2303dcd133 Merge pull request #468 from AutoMaker-Org/feat/add-branch-name-to-mass-edit
feat: add branch/worktree support to mass edit dialog
2026-01-13 17:43:25 +00:00
Shirone
cc4f39a6ab chore: fix formatting issues for CI
Fix Prettier formatting in two files:
- apps/server/src/lib/sdk-options.ts: Split long arrays to one item per line
- docs/docker-isolation.md: Align markdown table columns

Resolves CI format check failures.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 18:38:09 +01:00
Shirone
d4076ad0ce refactor: address CodeRabbit PR feedback
Improvements based on CodeRabbit review comments:

1. Use getPrimaryWorktreeBranch for consistent branch detection
   - Replace hardcoded 'main' fallback with getPrimaryWorktreeBranch()
   - Ensures auto-generated branch names respect the repo's actual primary branch
   - Handles repos using 'master' or other primary branch names

2. Extract worktree auto-selection logic to helper function
   - Create addAndSelectWorktree helper to eliminate code duplication
   - Use helper in both onWorktreeAutoSelect and handleBulkUpdate
   - Reduces maintenance burden and ensures consistent behavior

These changes improve code consistency and maintainability without affecting functionality.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 18:37:26 +01:00
Shirone
3bd8626d48 feat: add branch/worktree support to mass edit dialog
Implement worktree creation and branch assignment in the mass edit dialog to match the functionality of the add-feature and edit-feature dialogs.

Changes:
- Add WorkModeSelector to mass-edit-dialog.tsx with three modes:
  - 'Current Branch': Work on current branch (no worktree)
  - 'Auto Worktree': Auto-generate branch name and create worktree
  - 'Custom Branch': Use specified branch name and create worktree
- Update handleBulkUpdate in board-view.tsx to:
  - Accept workMode parameter
  - Create worktrees for 'auto' and 'custom' modes
  - Auto-select created worktrees in the board header
  - Handle branch name generation for 'auto' mode
- Add necessary props to MassEditDialog (branchSuggestions, branchCardCounts, currentBranch)

Users can now bulk-assign features to a branch and automatically create/select worktrees, enabling efficient project setup with many features.

Fixes #459

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 18:27:22 +01:00
webdevcody
ff5915dd20 refactor: update terminology in board view components
- Renamed "Worktrees" to "Worktree Bar" in the BoardHeader component for clarity.
- Updated comments and labels in AddFeatureDialog, PlanSettingsDialog, and WorktreeSettingsDialog to reflect the new terminology and improve user understanding of worktree mode functionality.
2026-01-13 12:19:24 -05:00
webdevcody
8500f71565 Merge branches 'v0.11.0rc' and 'v0.11.0rc' of github.com:AutoMaker-Org/automaker into v0.11.0rc 2026-01-13 12:05:57 -05:00
Shirone
81bab1d8ab Merge pull request #465 from thesobercoder/fix/opencode-cache-volume
fix: add OpenCode cache volume for version file persistence
2026-01-13 15:22:52 +00:00
Soham Dasgupta
24a6633322 fix: add OpenCode cache volume for version file persistence
OpenCode stores a version file in ~/.cache/opencode/ which was causing
EACCES permission errors. This adds:

- Volume mount for ~/.cache/opencode
- Entrypoint script to set correct ownership/permissions on the cache directory
2026-01-13 20:45:33 +05:30
DhanushSantosh
f073f6ecc3 Merge remote-tracking branch 'upstream/v0.11.0rc' into patchcraft 2026-01-13 20:28:49 +05:30
Web Dev Cody
2870ddb223 Merge pull request #460 from thesobercoder/feature/opencode-docker-support
feat: add OpenCode CLI support in Docker
2026-01-13 09:35:28 -05:00
Web Dev Cody
1578d02e70 Merge branch 'v0.11.0rc' into feature/opencode-docker-support 2026-01-13 09:35:18 -05:00
webdevcody
bb710ada1a feat: enhance settings view and feature defaults management
- Introduced default feature model settings in the settings view, allowing users to specify the default AI model for new feature cards.
- Updated navigation to include a direct link to model defaults in the settings menu.
- Enhanced the Add Feature dialog to utilize the default feature model from the app store.
- Implemented synchronization of the default feature model in settings migration and sync hooks.
- Improved UI components to reflect changes in default settings, ensuring a cohesive user experience.
2026-01-13 20:04:36 +05:30
Soham Dasgupta
33ae860059 feat: update Docker volumes for OpenCode CLI data and user configuration 2026-01-13 20:01:22 +05:30
webdevcody
3de6d58af3 Merge branch 'v0.11.0rc' of github.com:AutoMaker-Org/automaker into v0.11.0rc 2026-01-13 09:30:20 -05:00
webdevcody
c8e66a866e feat: enhance settings view and feature defaults management
- Introduced default feature model settings in the settings view, allowing users to specify the default AI model for new feature cards.
- Updated navigation to include a direct link to model defaults in the settings menu.
- Enhanced the Add Feature dialog to utilize the default feature model from the app store.
- Implemented synchronization of the default feature model in settings migration and sync hooks.
- Improved UI components to reflect changes in default settings, ensuring a cohesive user experience.
2026-01-13 09:30:15 -05:00
DhanushSantosh
c25efdc0d8 Revert "Wire provider model enablement into selectors"
This reverts commit 8f1740c0f5.
2026-01-13 19:54:20 +05:30
DhanushSantosh
bde82492ae Revert "Sync branch state"
This reverts commit 6704293cb1.
2026-01-13 19:54:05 +05:30
Soham Dasgupta
67f18021c3 feat: add OpenCode CLI config path to Docker example 2026-01-13 19:50:54 +05:30
DhanushSantosh
6704293cb1 Sync branch state 2026-01-13 16:58:20 +05:30
DhanushSantosh
8f1740c0f5 Wire provider model enablement into selectors 2026-01-13 16:53:17 +05:30
Soham Dasgupta
62019d5916 feat: add OpenCode CLI support in Docker
- Install OpenCode CLI in Dockerfile alongside Claude and Cursor
- Add automaker-opencode-config volume for persisting auth
- Add OpenCode directory setup in docker-entrypoint.sh
- Update docker-isolation.md with OpenCode documentation
- Add OpenCode bind mount example to docker-compose.override.yml.example
2026-01-13 14:14:56 +05:30
DhanushSantosh
e66283b1d6 Merge remote-tracking branch 'upstream/main' into patchcraft 2026-01-13 13:45:17 +05:30
Web Dev Cody
a0d6d76626 Merge pull request #448 from comzine/fix/docker-uid-gid-configurable
fix: make Docker container UID/GID configurable
2026-01-13 00:00:47 -05:00
Web Dev Cody
c2f5c07038 Merge pull request #344 from casiusss/fix/pipeline-resume-edge-cases
fix: handle pipeline resume edge cases and improve robustness
2026-01-12 23:57:27 -05:00
webdevcody
419abf88dd Merge branch 'v0.11.0rc' into fix/pipeline-resume-edge-cases 2026-01-12 23:49:33 -05:00
Web Dev Cody
b7596617ed Merge pull request #455 from stefandevo/fix/codex-infinite-loop
fix(codex): prevent infinite loop when fetching models on settings screen
2026-01-12 23:46:02 -05:00
Web Dev Cody
26da99e834 Merge pull request #454 from AutoMaker-Org/abstract-anthropic-sdk
feat: implement simple query service and enhance provider abstraction
2026-01-12 23:40:42 -05:00
webdevcody
2b33a0d322 refactor: integrate simple query service into auto mode
- Replaced dynamic import of the query function with a call to the new Simple Query Service for improved clarity and maintainability.
- Streamlined the response handling by directly utilizing the result from the simple query, enhancing code readability.
- Updated the prompt and options structure to align with the new service's requirements, ensuring consistent behavior in learning extraction.
2026-01-12 21:39:39 -05:00
webdevcody
c796adbae8 test: update project view tests for dashboard integration
- Modified tests to navigate directly to the dashboard instead of the welcome view, ensuring a smoother project selection process.
- Updated project name verification to check against the sidebar button instead of multiple elements.
- Added logic to expand the sidebar if collapsed, improving visibility for project names during tests.
- Adjusted test assertions to reflect changes in the UI structure, including the introduction of the dashboard view.
2026-01-12 21:23:33 -05:00
Stefan de Vogelaere
18d82b1bb1 refactor: improve time constant readability
- Rename FAILURE_COOLDOWN to FAILURE_COOLDOWN_MS with explicit calculation
- Add SUCCESS_CACHE_MS constant to replace magic number 300000
- Use multiplication (30 * 1000, 5 * 60 * 1000) to make units explicit

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 00:53:39 +01:00
webdevcody
0c68fcc8c8 chore: update node-gyp dependency URL in package-lock.json
- Changed the resolved URL for the @electron/node-gyp module from SSH to HTTPS for improved accessibility and compatibility.
2026-01-12 18:51:03 -05:00
Stefan de Vogelaere
e4458b8222 fix(codex): prevent infinite loop when fetching models on settings screen
When Codex is not connected/authenticated, the /api/codex/models endpoint
returns 503. The fetchCodexModels function had no cooldown after failures,
causing infinite retries when navigating to the Settings screen.

Added codexModelsLastFailedAt state to track failed fetch attempts and
skip retries for 30 seconds after a failure. This prevents the infinite
loop while still allowing periodic retry attempts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 00:47:51 +01:00
webdevcody
eb8ebe3ce0 feat: implement simple query service and enhance provider abstraction
- Introduced a new Simple Query Service to streamline basic AI queries, allowing for structured JSON outputs.
- Updated existing routes to utilize the new service, replacing direct SDK calls with a unified interface for querying.
- Enhanced provider handling in various routes, including generate-spec, generate-features-from-spec, and validate-issue, to support both Claude and Cursor models seamlessly.
- Added structured output support for improved response handling and error management across the application.
2026-01-12 17:33:54 -05:00
Web Dev Cody
0dc70addb6 Merge pull request #424 from comzine/fix/add-todowrite-to-allowed-tools
fix: add TodoWrite to allowed tools in SDK presets
2026-01-12 16:30:56 -05:00
webdevcody
f3f5d05349 chore: release v0.10.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 16:16:44 -05:00
Web Dev Cody
0c4b833b07 Merge pull request #405 from AutoMaker-Org/v0.10.0rc
V0.10.0rc
2026-01-12 16:11:56 -05:00
Shirone
029c5ca855 fix: adress pr comments 2026-01-12 22:03:14 +01:00
Shirone
1f270edbe1 refactor(list-row): remove priority display logic and related components
- Eliminated the getPriorityDisplay function and the PriorityBadge component from the ListRow implementation.
- Removed the pipelineConfig prop from ListRowProps interface.
- Cleaned up the code to streamline the ListRow component, focusing on essential features.
2026-01-12 21:46:04 +01:00
Shirone
47c188d8f9 fix: adress pr comments
- Added validation to check if the specified worktree path exists before generating commit messages.
- Implemented a check to ensure the worktree path is a valid git repository by verifying the presence of the .git directory.
- Improved error handling by returning appropriate responses for invalid paths and non-git repositories.
2026-01-12 21:41:55 +01:00
Shirone
cca4638b71 fix: adjust pr commnets 2026-01-12 21:21:24 +01:00
Shirone
19c12b7813 refactor(settings): remove deprecated notification settings from GlobalSettings interface
- Eliminated unused properties related to notification commands and ntfy.sh integration from the GlobalSettings interface.
- Updated default global settings to reflect the removal of these properties.
2026-01-12 20:58:58 +01:00
Shirone
0261ec2892 feat(prompts): implement customizable commit message prompts
- Added a new section in the UI for customizing commit message prompts.
- Integrated a system prompt for AI-generated commit messages, allowing users to define their own instructions.
- Updated the backend to merge custom prompts with default settings for commit message generation.
- Enhanced the commit message generation logic to utilize the effective system prompt based on user settings.
2026-01-12 20:55:01 +01:00
Shirone
5e4f5f86cd feat(worktree): add AI commit message generation feature
- Implemented a new endpoint to generate commit messages based on git diffs.
- Updated worktree routes to include the AI commit message generation functionality.
- Enhanced the UI to support automatic generation of commit messages when the commit dialog opens, based on user settings.
- Added settings for enabling/disabling AI-generated commit messages and configuring the model used for generation.
2026-01-12 20:55:01 +01:00
DhanushSantosh
fbab1d323f test: align app-spec and enhancement mode tests 2026-01-13 00:11:11 +05:30
eclipxe
8b19266c9a feat: Add secondary inline actions for waiting_approval status 2026-01-12 19:39:12 +01:00
anonymous
1b9d194dd1 feat: Improve mobile scrolling experience in autocomplete and dropdown components 2026-01-12 19:39:12 +01:00
anonymous
74c793b6c6 Add branch switch to mobile worktree panel 2026-01-12 19:39:12 +01:00
anonymous
d1222268c3 feat: Improve Claude CLI usage detection, mobile usage view, and add provider auth initialization 2026-01-12 19:39:12 +01:00
anonymous
df7a0f8687 feat: Make input controls and settings responsive for mobile devices 2026-01-12 19:37:36 +01:00
anonymous
c7def000df feat: Handle backlog feature editing on row click in board view 2026-01-12 19:37:35 +01:00
anonymous
e2394244f6 feat: Implement responsive mobile header layout with menu consolidation 2026-01-12 19:37:35 +01:00
anonymous
007830ec74 feat: Add responsive session manager with mobile backdrop overlay 2026-01-12 19:33:33 +01:00
anonymous
f721eb7152 List View Features 2026-01-12 19:33:33 +01:00
anonymous
e56db2362c feat: Add AI-generated commit messages
Integrate Claude Haiku to automatically generate commit messages when
committing worktree changes. Shows a sparkle animation while generating
and auto-populates the commit message field.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 19:32:20 +01:00
anonymous
d2c7a9e05d Fix model selector on mobile 2026-01-12 19:32:20 +01:00
anonymous
acce06b304 Fix sidebar lables not showign up 2026-01-12 19:32:20 +01:00
anonymous
4ab54270db fix: enable sidebar expand and project switching on mobile
- Sidebar now uses overlay pattern on mobile (fixed position when open)
- Added backdrop overlay that dismisses sidebar on tap
- Made collapse toggle button visible on all screen sizes
- Made project options menu visible on all screen sizes

Previously the sidebar was forced to collapsed width (w-16) on mobile
even when sidebarOpen was true, and the toggle/options buttons were
hidden with `hidden lg:flex`.
2026-01-12 19:31:38 +01:00
Shirone
f50520c93f feat(delete): enhance branch deletion handling and validation
- Introduced a flag to track if a branch was successfully deleted, improving response clarity.
- Updated the response structure to include the new branchDeleted flag.
- Enhanced projectPath validation in init-script to ensure it is a non-empty string before processing.
2026-01-12 19:21:37 +01:00
Dhanush Santosh
cebf57ffd3 Merge pull request #426 from stefandevo/opencode-dynamic-providers
feat: add dynamic model discovery and routing for OpenCode provider
2026-01-12 23:51:06 +05:30
DhanushSantosh
6020219fda fix(opencode): address review feedback 2026-01-12 23:44:21 +05:30
DhanushSantosh
8094941385 feat(opencode): persist dynamic model selection 2026-01-12 23:44:21 +05:30
DhanushSantosh
9ce3cfee7d feat(opencode): drop bedrock defaults 2026-01-12 23:44:05 +05:30
DhanushSantosh
6184440441 fix(ui): tie dynamic models to connected providers 2026-01-12 23:42:38 +05:30
DhanushSantosh
0cff4cf510 feat(ui): add OpenRouter icon 2026-01-12 23:42:28 +05:30
DhanushSantosh
b152f119c5 fix(ui): refresh OpenCode models on new providers 2026-01-12 23:42:27 +05:30
DhanushSantosh
9f936c6968 fix(opencode): parse api-key provider models 2026-01-12 23:42:12 +05:30
Stefan de Vogelaere
b8531cf7e8 fix: add OpenCode settings to migration for persistence
Add enabledOpencodeModels and opencodeDefaultModel to the settings
migration to ensure they are properly persisted like Cursor settings.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:41:40 +05:30
Stefan de Vogelaere
edcc4e789b fix: address CodeRabbitAI review feedback
- Replace busy-wait loop in refreshModels with Promise-based approach
- Remove duplicate error logging in opencode-models.ts handlers
- Fix multi-slash parsing in provider-icon.tsx (only handle exactly one slash)
- Use dynamic icon resolution for selected OpenCode model in trigger
- Fix misleading comment about merge precedence (static takes precedence)
- Add enabledOpencodeModels and opencodeDefaultModel to settings sync
- Add clarifying comments about session-only dynamic model settings

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:41:26 +05:30
Stefan de Vogelaere
20cc401238 fix: update enhancement test to include ux-reviewer mode
Test expected 4 enhancement modes but there are now 5 after adding
the ux-reviewer mode.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:41:14 +05:30
Stefan de Vogelaere
70204a2d36 fix: address code review feedback from gemini-code-assist
- Convert execFileSync to async execFile in fetchModelsFromCli and
  fetchAuthenticatedProviders to avoid blocking the event loop
- Remove unused opencode-dynamic-providers.tsx component
- Use regex for more robust model ID validation in parseModelsOutput

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:40:57 +05:30
Stefan de Vogelaere
e38325c27f fix: improve dynamic model icons and fix React reference
- Add icon detection for dynamic OpenCode provider models (provider/model format)
- Support zai-coding-plan, github-copilot, google, xai, and other providers
- Detect model type from name (glm, claude, gpt, gemini, grok, etc.)
- Fix React.useMemo → useMemo to resolve "React is not defined" error

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:40:28 +05:30
Stefan de Vogelaere
5e4b422315 fix: improve OpenCode error handling and message extraction
- Update error event interface to handle nested error objects with
  name/data/message structure from OpenCode CLI
- Extract meaningful error messages from provider errors in normalizeEvent
- Add error type handling in executeWithProvider to throw errors with
  actual provider messages instead of returning empty response

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:40:13 +05:30
Stefan de Vogelaere
6c5206daf4 feat: add dynamic model discovery and routing for OpenCode provider
- Update isOpencodeModel() to detect dynamic models with provider/model format
  (e.g., github-copilot/gpt-4o, google/gemini-2.5-pro, zai-coding-plan/glm-4.7)
- Update resolveModelString() to recognize and pass through OpenCode models
- Update enhance route to route OpenCode models to OpenCode provider
- Fix OpenCode CLI command format: use --format json (not stream-json)
- Remove unsupported -q and - flags from CLI arguments
- Update normalizeEvent() to handle actual OpenCode JSON event format
- Add dynamic model configuration UI with provider grouping
- Cache providers and models in app store for snappier navigation
- Show authenticated providers in OpenCode CLI status card

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:39:38 +05:30
Shirone
ed65f70315 Merge pull request #409 from AutoMaker-Org/feat/worktrees-init-script
feat: worktrees init script
2026-01-12 17:52:31 +00:00
Shirone
f41a42010c fix: address pr comments 2026-01-12 18:41:56 +01:00
Tobias Weber
aa8caeaeb0 fix: make Docker container UID/GID configurable
Add UID and GID build arguments to Dockerfiles to allow matching the
container user to the host user. This fixes file permission issues when
mounting host directories as volumes.

Default remains 1001 for backward compatibility. To match host user:
  UID=$(id -u) GID=$(id -g) docker-compose build

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 16:14:56 +01:00
Kacper
a0669d4262 feat(board-view): enhance feature and plan dialogs with worktree branch settings
- Added WorktreeSettingsDialog and PlanSettingsDialog components to manage worktree branch settings.
- Integrated new settings into BoardHeader for toggling worktree branch usage in feature creation.
- Updated AddFeatureDialog to utilize selected worktree branch for custom mode.
- Introduced new state management in app-store for handling worktree branch preferences.

These changes improve user control over feature creation workflows by allowing branch selection based on the current worktree context.
2026-01-11 23:05:32 +01:00
Shirone
a4a792c6b1 Merge pull request #416 from AutoMaker-Org/feat/emtpy-columns-enhancments
feat: add empty state card component and integrate AI suggestion func…
2026-01-11 21:37:17 +00:00
Shirone
6842e4c7f7 refactor: simplify EmptyStateCard and update empty state configurations
- Removed unused properties and state management from the EmptyStateCard component for cleaner code.
- Updated the EMPTY_STATE_CONFIGS to remove exampleCard entries, streamlining the empty state configuration.
- Enhanced the primary action handling in the EmptyStateCard for improved functionality.
2026-01-11 22:35:25 +01:00
webdevcody
6638c35945 refactor(sidebar): enhance sidebar responsiveness and improve layout
- Updated sidebar component to include a mobile overlay backdrop when open.
- Adjusted visibility of logo and footer elements based on sidebar state.
- Improved layout and spacing for various components within the sidebar for better usability on different screen sizes.
- Refined styles for buttons and project selectors to enhance visual consistency and responsiveness.
2026-01-11 16:02:25 -05:00
Kacper
53f5c2b2bb feat(backlog): add branchName support to apply handler and UI components
- Updated apply handler to accept an optional branchName from the request body.
- Modified BoardView and BacklogPlanDialog components to pass currentBranch to the apply API.
- Enhanced ElectronAPI and HttpApiClient to include branchName in the apply method.

This change allows users to specify a branch when applying backlog plans, improving flexibility in feature management.
2026-01-11 20:52:07 +01:00
Kacper
6e13cdd516 Merge branch: resolve conflict in worktree-actions-dropdown.tsx 2026-01-11 20:08:19 +01:00
Kacper
a48c67d271 refactor: update EmptyStateCard component for improved layout and functionality
- Removed unused props and adjusted styles for a more compact and centered design.
- Enhanced the display of the icon, title, and description for better visibility.
- Updated keyboard shortcut hint and AI suggestion action for improved user interaction.
- Refined dismiss/minimize controls to appear on hover, enhancing the user experience.
2026-01-11 19:59:01 +01:00
Shirone
43fc3de2e1 Merge pull request #423 from stefandevo/main
feat: add default IDE setting and multi-editor support with icons
2026-01-11 18:36:12 +00:00
Kacper
80081b60bf fix(platform): remove logger import to avoid circular dependency
Replace createLogger with console.warn to prevent circular import
between @automaker/platform and @automaker/utils.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 19:34:29 +01:00
Kacper
cbca9b68e6 fix: correct Kiro CLI command typo (kido -> kiro)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 19:25:26 +01:00
Shirone
b9b3695497 feat(platform): add VS Code Insiders and Kiro editor support
Added support for two new editors:
- VS Code Insiders (code-insiders command)
- Kiro (kido command) - VS Code fork

Changes:
- Added editor definitions to SUPPORTED_EDITORS list
- Added VSCodeInsidersIcon (reuses VS Code icon)
- Added KiroIcon with custom SVG logo
- Updated getEditorIcon() to handle both new commands
- Fixed logger initialization to be lazy-loaded, preventing circular
  dependency error with isBrowser variable during module initialization

Both editors were tested and successfully open directories on macOS.
2026-01-11 19:14:44 +01:00
Shirone
1b9acb1395 fix(platform): verify full Xcode installation for xed command
The xed command requires full Xcode.app, not just Command Line Tools.
This fix adds validation to ensure Xcode is properly configured before
offering it as an editor option.

Changes:
- Added isXcodeFullyInstalled() to check xcode-select points to Xcode.app
- Added helpful warning when Xcode is installed but xcode-select points to CLT
- Users see clear instructions on how to fix the configuration

Fixes issue where xed would fail with "tool 'xed' requires Xcode" error
when only Command Line Tools are configured via xcode-select.
2026-01-11 19:04:39 +01:00
DhanushSantosh
01cf81a105 fix(platform): detect Antigravity CLI aliases 2026-01-11 23:22:13 +05:30
Tobias Weber
6381ecaa37 fix: add TodoWrite to allowed tools in SDK presets
The TodoWrite tool was missing from the fullAccess and chat tool
presets, causing the Claude Agent SDK to crash with exit code 1
when the agent attempted to use it for task tracking.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 18:33:11 +01:00
Kacper
6d267ce0fa feat(platform): add cross-platform editor utilities and refresh functionality
- Add libs/platform/src/editor.ts with cross-platform editor detection and launching
  - Handles Windows .cmd batch scripts (cursor.cmd, code.cmd, etc.)
  - Supports macOS app bundles in /Applications and ~/Applications
  - Includes caching with 5-minute TTL for performance
- Refactor open-in-editor.ts to use @automaker/platform utilities
- Add POST /api/worktree/refresh-editors endpoint to clear cache
- Add refresh button to Settings > Account for IDE selection
- Update useAvailableEditors hook with refresh() and isRefreshing

Fixes Windows issue where "Open in Editor" was falling back to Explorer
because execFile cannot run .cmd scripts without shell:true.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 18:08:09 +01:00
Kacper
8b0b565282 Merge remote-tracking branch 'origin/v0.10.0rc' into stefandevo/main 2026-01-11 17:34:19 +01:00
DhanushSantosh
a046d1232e Merge remote-tracking branch 'upstream/v0.10.0rc' into feature/codex-cli 2026-01-11 21:59:04 +05:30
DhanushSantosh
d724e782dd fix(ui): restore startup project context 2026-01-11 21:58:36 +05:30
Shirone
a266d85ecd Merge pull request #421 from AutoMaker-Org/refactor/extract-enhance-with-ai-shared-components
refactor: extract Enhance with AI into shared components
2026-01-11 16:22:18 +00:00
Kacper
a4a111fad0 feat: add pre-enhancement description tracking for feature updates
- Introduced a new parameter `preEnhancementDescription` to capture the original description before enhancements.
- Updated the `update` method in `FeatureLoader` to handle the new parameter and maintain a history of original descriptions.
- Enhanced UI components to support tracking and restoring pre-enhancement descriptions across various dialogs.
- Improved history management in `AddFeatureDialog`, `EditFeatureDialog`, and `FollowUpDialog` to include original text for better user experience.

This change enhances the ability to revert to previous descriptions, improving the overall functionality of the feature enhancement process.
2026-01-11 17:19:39 +01:00
Stefan de Vogelaere
2a98de85a8 fix: improve cache management and editor fallback handling
Cache management improvements:
- Remove separate cachedEditor variable; derive default from cachedEditors
- Update isCacheValid() to check cachedEditors existence
- detectDefaultEditor() now always goes through detectAllEditors()
  to ensure cache TTL is respected consistently

Editor fallback improvements:
- Log warning when requested editorCommand is not found in available editors
- Include list of available editor commands in warning message
- Make fallback to default editor explicit rather than silent

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 17:08:10 +01:00
Stefan de Vogelaere
fb3a8499f3 fix: address CodeRabbitAI security and UX review comments
Security improvements in open-in-editor.ts:
- Use execFile with argument arrays instead of shell interpolation
  in commandExists() to prevent command injection
- Replace shell `test -d` commands with Node.js fs/promises access()
  in findMacApp() for safer file system checks
- Add cache TTL (5 minutes) for editor detection to prevent stale data

UX improvements in worktree-actions-dropdown.tsx:
- Add error handling for clipboard copy operation
- Show success toast when path is copied
- Show error toast if clipboard write fails

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:55:25 +01:00
Stefan de Vogelaere
33dd9ae347 fix: address nitpick feedback from PR #423
## Security Fix (Command Injection)
- Use `execFile` with argument arrays instead of string interpolation
- Add `safeOpenInEditor` helper that properly handles `open -a` commands
- Validate that worktreePath is an absolute path before execution
- Prevents shell metacharacter injection attacks

## Shared Type Definition
- Move `EditorInfo` interface to `@automaker/types` package
- Server and UI now import from shared package to prevent drift
- Re-export from use-available-editors.ts for convenience

## Remove Unused Code
- Remove unused `defaultEditorName` prop from WorktreeActionsDropdown
- Remove prop from WorktreeTab component interface
- Remove useDefaultEditor hook usage from WorktreePanel
- Export new hooks from hooks/index.ts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:37:05 +01:00
Stefan de Vogelaere
ac87594b5d fix: address code review feedback from PR #423
Addresses feedback from gemini-code-assist and coderabbitai reviewers:

## Duplicate Code (High Priority)
- Extract `getEffectiveDefaultEditor` logic into shared `useEffectiveDefaultEditor` hook
- Both account-section.tsx and worktree-actions-dropdown.tsx now use the shared hook

## Performance (Medium Priority)
- Refactor `detectAllEditors` to use `Promise.all` for parallel editor detection
- Replace sequential `await tryAddEditor()` calls with parallel `findEditor()` checks

## Code Quality (Medium Priority)
- Remove verbose IIFE pattern for editor icon rendering
- Pre-compute icon components before JSX return statement

## Bug Fixes
- Use `os.homedir()` instead of `~` fallback which doesn't expand in shell
- Normalize Select value to 'auto' when saved editor command not found in editors
- Add defensive check for empty editors array in useEffectiveDefaultEditor
- Improve mock openInEditor to correctly map all editor commands to display names

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:28:31 +01:00
Stefan de Vogelaere
32656a9662 feat: add default IDE setting and multi-editor support with icons
Add comprehensive editor detection and selection system that allows users
to configure their preferred IDE for opening branches and worktrees.

## Server-side Changes

- Add `/api/worktree/available-editors` endpoint to detect installed editors
- Support detection via CLI commands (cursor, code, zed, subl, etc.)
- Support detection via macOS app bundles in /Applications and ~/Applications
- Detect editors: Cursor, VS Code, Zed, Sublime Text, Windsurf, Trae,
  Rider, WebStorm, Xcode, Android Studio, Antigravity, and file managers

## UI Changes

### Editor Icons
- Add new `editor-icons.tsx` with SVG icons for all supported editors
- Icons: Cursor, VS Code, Zed, Sublime Text, Windsurf, Trae, Rider,
  WebStorm, Xcode, Android Studio, Antigravity, Finder
- `getEditorIcon()` helper maps editor commands to appropriate icons

### Default IDE Setting
- Add "Default IDE" selector in Settings > Account section
- Options: Auto-detect (Cursor > VS Code > first available) or explicit choice
- Setting persists via `defaultEditorCommand` in global settings

### Worktree Dropdown Improvements
- Implement split-button UX for "Open In" action
- Click main area: opens directly in default IDE (single click)
- Click chevron: shows submenu with other editors + Copy Path
- Each editor shows with its branded icon

## Type & Store Changes

- Add `defaultEditorCommand: string | null` to GlobalSettings
- Add to app-store with `setDefaultEditorCommand` action
- Add to SETTINGS_FIELDS_TO_SYNC for persistence
- Add `useAvailableEditors` hook for fetching detected editors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:17:05 +01:00
DhanushSantosh
785a4d2c3b fix: restore auth and auto-open last project 2026-01-11 20:43:55 +05:30
Shirone
41a6c7f712 fix: address second round of PR review feedback
- Add fallback for unknown enhancement modes in history button to prevent "Enhanced (undefined)" UI bug
- Move DescriptionHistoryEntry interface to top level in add-feature-dialog
- Import and use EnhancementMode type in edit-feature-dialog to eliminate hardcoded types
- Make FollowUpHistoryEntry extend BaseHistoryEntry for consistency

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 15:27:17 +01:00
Shirone
7e5d915b60 fix: address PR review feedback from Gemini Code Assist
Address three issues identified in code review:

1. Fix missing thinkingLevel parameter (Critical)
   - Added thinkingLevel parameter to enhance API call
   - Updated electron.ts type definition to match http-api-client
   - Fixes functional regression in Claude model enhancement

2. Refactor dropdown menu to use constants dynamically
   - Changed hardcoded DropdownMenuItem components to dynamic generation
   - Now iterates over ENHANCEMENT_MODE_LABELS object
   - Ensures automatic sync when new modes are added
   - Eliminates manual UI updates for new enhancement modes

3. Optimize array reversal performance
   - Added useMemo hook to memoize reversed history array
   - Prevents creating new array on every render
   - Improves performance with lengthy histories

All TypeScript errors resolved. Build verified.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 15:17:46 +01:00
Shirone
8321c06e16 refactor: extract Enhance with AI into shared components
Extract all "Enhance with AI" functionality into reusable shared components
following DRY principles and clean code guidelines.

Changes:
- Create shared/enhancement/ folder for related functionality
- Extract EnhanceWithAI component (AI enhancement with model override)
- Extract EnhancementHistoryButton component (version history UI)
- Extract enhancement constants and types
- Refactor add-feature-dialog.tsx to use shared components
- Refactor edit-feature-dialog.tsx to use shared components
- Refactor follow-up-dialog.tsx to use shared components
- Add history tracking to add-feature-dialog for consistency

Benefits:
- Eliminated ~527 lines of duplicated code
- Single source of truth for enhancement logic
- Consistent UX across all dialogs
- Easier maintenance and extensibility
- Better code organization

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 15:10:54 +01:00
Shirone
f60c18d31a Update apps/ui/src/components/views/board-view/constants.ts
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-11 12:08:33 +01:00
Shirone
e171b6a049 feat: add empty state card component and integrate AI suggestion functionality
- Introduced the EmptyStateCard component to display contextual guidance when columns are empty.
- Enhanced the KanbanBoard and BoardView components to utilize the new EmptyStateCard for improved user experience.
- Added AI suggestion functionality to the empty state configuration, allowing users to generate ideas directly from the backlog column.
- Updated constants to define empty state configurations for various column types.
2026-01-11 12:03:52 +01:00
Shirone
6e4b611662 refactor: optimize bulk delete handler and UI feedback
- Refactored the bulk delete handler to utilize Promise.all for concurrent deletion of features, improving performance and error handling.
- Updated the BoardView component to handle deletion results more effectively, providing user feedback for both successful and failed deletions.
- Enhanced local state management to avoid redundant API calls during feature deletion.
2026-01-11 11:17:28 +01:00
DhanushSantosh
7522e58fee Merge remote-tracking branch 'upstream/v0.10.0rc' into feature/codex-cli 2026-01-11 14:51:06 +05:30
Shirone
317c21ffc0 Merge pull request #413 from AutoMaker-Org/feat/bulk-delete-features-in-backlog
feat: bulk delete in backlog mass select
2026-01-11 09:19:15 +00:00
Shirone
9c5fe44617 feat: add bulk delete functionality for features
- Introduced a new endpoint `/bulk-delete` to allow deletion of multiple features at once.
- Implemented `createBulkDeleteHandler` to process bulk delete requests and handle success/failure responses.
- Updated the UI to include a bulk delete option in the BoardView component, with confirmation dialog for user actions.
- Enhanced the HTTP API client to support bulk delete requests.
- Improved the selection action bar to trigger bulk delete functionality and provide user feedback.
2026-01-11 10:17:35 +01:00
DhanushSantosh
7f79d9692c feat: Add official icons for MiniMax, GLM (Z.ai), and BigPickle models
- Add official MiniMax logo SVG from LobeHub icons library
- Add official Z.ai logo SVG for GLM models from LobeHub icons library
- Add BigPickle icon with custom green color (#4ADE80)
- Fix icon detection logic to properly handle amazon-bedrock/ and opencode/ prefixes
- Update phase-model-selector and opencode-model-configuration to use
  getProviderIconForModel() for consistent icon display across the app

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 14:44:32 +05:30
Shirone
2d4ffc7514 feat: add accept all functionality in ideation view
- Introduced a new "Accept All" button in the IdeationHeader component, allowing users to accept all filtered suggestions at once.
- Implemented state management for accept all readiness and count in the IdeationView component.
- Enhanced the IdeationDashboard to notify the parent component about the readiness of the accept all feature.
- Added logic to handle the acceptance of all suggestions, including success and failure notifications.
- Updated UI components to reflect the new functionality and improve user experience.
2026-01-11 10:01:01 +01:00
webdevcody
5f3db1f25e feat: enhance spec regeneration management by project
- Refactored spec regeneration status tracking to support multiple projects using a Map for running states and abort controllers.
- Updated `getSpecRegenerationStatus` to accept a project path, allowing retrieval of status specific to a project.
- Modified `setRunningState` to manage running states and abort controllers per project.
- Adjusted related route handlers to utilize project-specific status checks and updates.
- Introduced a new Graph View page and integrated it into the routing structure.
- Enhanced UI components to reflect the current project’s spec generation state.
2026-01-11 01:37:26 -05:00
webdevcody
7115460804 feat: add resume interrupted features endpoint and handler
- Introduced a new endpoint `/resume-interrupted` to handle resuming features that were interrupted during server restarts.
- Implemented the `createResumeInterruptedHandler` to check for and resume interrupted features based on the project path.
- Enhanced the `AutoModeService` to track and manage the execution state of features, ensuring they can be resumed correctly.
- Updated relevant types and prompts to include the new 'ux-reviewer' enhancement mode for better user experience handling.
- Added new templates for UX review and other enhancement modes to improve task descriptions from a user experience perspective.
2026-01-11 01:37:13 -05:00
webdevcody
0db8808b2a Merge branch 'memory-ui' into v0.10.0rc 2026-01-10 20:13:07 -05:00
webdevcody
cf3ed1dd8f Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 20:08:02 -05:00
webdevcody
da682e3993 feat: add memory management feature with UI components
- Introduced a new MemoryView component for viewing and editing AI memory files.
- Updated navigation hooks and keyboard shortcuts to include memory functionality.
- Added memory file creation, deletion, and renaming capabilities.
- Enhanced the sidebar navigation to support memory as a new section.
- Implemented loading and saving of memory files with a markdown editor.
- Integrated dialogs for creating, deleting, and renaming memory files.
2026-01-10 20:07:50 -05:00
Shirone
4a59e901e6 chore: format 2026-01-11 01:15:27 +01:00
Shirone
8ed2fa07a0 security: Fix critical vulnerabilities in worktree init script feature
Fix multiple command injection and security vulnerabilities in the worktree
initialization script system:

**Critical Fixes:**
- Add branch name validation to prevent command injection in create/delete endpoints
- Replace string interpolation with array-based command execution using spawnProcess
- Implement safe environment variable allowlist to prevent credential exposure
- Add script content validation with 1MB size limit and dangerous pattern detection

**Code Quality:**
- Centralize execGitCommand helper in common.ts using @automaker/platform's spawnProcess
- Remove duplicate isGitRepo implementation, standardize imports to @automaker/git-utils
- Follow DRY principle by reusing existing platform utilities
- Add comprehensive JSDoc documentation with security examples

This addresses 6 critical/high severity vulnerabilities identified in security audit:
1. Command injection via unsanitized branch names (delete.ts)
2. Command injection via unsanitized branch names (create.ts)
3. Missing branch validation in init script execution
4. Environment variable exposure (ANTHROPIC_API_KEY and other secrets)
5. Path injection via command substitution
6. Arbitrary script execution without content limits

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 01:14:07 +01:00
Kacper
385e7f5c1e fix: address pr comments 2026-01-11 00:01:23 +01:00
Kacper
861fff1aae fix: broken lock file 2026-01-10 23:48:33 +01:00
Kacper
09527b3b67 feat: Add auto-dismiss functionality for Init Script Indicator
This commit introduces an auto-dismiss feature for the Init Script Indicator, enhancing user experience by automatically hiding the indicator 5 seconds after the script completes. Key changes include:

1. **State Management**: Added `autoDismissInitScriptIndicatorByProject` to manage the auto-dismiss setting per project.
2. **UI Components**: Updated the WorktreesSection to include a toggle for enabling or disabling the auto-dismiss feature, allowing users to customize their experience.
3. **Indicator Logic**: Implemented logic in the SingleIndicator component to handle auto-dismiss based on the new setting.

These enhancements provide users with more control over the visibility of the Init Script Indicator, streamlining project management workflows.
2026-01-10 23:43:52 +01:00
Kacper
d98ff16c8f feat: Enhance CreateWorktreeDialog with user-friendly error handling
This commit introduces a new error parsing function to provide clearer, user-friendly error messages in the CreateWorktreeDialog component. Key changes include:

1. **Error Parsing**: Added `parseWorktreeError` function to interpret various git-related error messages and return structured titles and descriptions for better user feedback.
2. **State Management**: Updated the error state to store structured error objects instead of strings, allowing for more detailed error display.
3. **UI Updates**: Enhanced the error display in the dialog to show both title and description, improving clarity for users encountering issues during worktree creation.

These improvements enhance the user experience by providing more informative error messages, helping users troubleshoot issues effectively.
2026-01-10 23:37:39 +01:00
Kacper
e902e8ea4c feat: Introduce default delete branch option for worktrees
This commit adds a new feature allowing users to set a default value for the "delete branch" checkbox when deleting a worktree. Key changes include:

1. **State Management**: Introduced `defaultDeleteBranchByProject` to manage the default delete branch setting per project.
2. **UI Components**: Updated the WorktreesSection to include a toggle for the default delete branch option, enhancing user control during worktree deletion.
3. **Dialog Updates**: Modified the DeleteWorktreeDialog to respect the default delete branch setting, improving the user experience by streamlining the deletion process.

These enhancements provide users with more flexibility and control over worktree management, improving overall project workflows.
2026-01-10 23:18:39 +01:00
Kacper
aeb5bd829f feat: Add Init Script Indicator visibility feature for worktrees
This commit introduces a new feature allowing users to toggle the visibility of the Init Script Indicator for each project. Key changes include:

1. **State Management**: Added `showInitScriptIndicatorByProject` to manage the visibility state per project.
2. **UI Components**: Implemented a checkbox in the WorktreesSection to enable or disable the Init Script Indicator, enhancing user control over the UI.
3. **BoardView Updates**: Modified the BoardView to conditionally render the Init Script Indicator based on the new visibility state.

These enhancements improve the user experience by providing customizable visibility options for the Init Script Indicator, streamlining project management workflows.
2026-01-10 23:03:29 +01:00
DhanushSantosh
a92457b871 fix: Handle Claude CLI unavailability gracefully in CI
- Add try-catch around pty.spawn() to prevent crashes when PTY unavailable
- Add unhandledRejection/uncaughtException handlers for graceful degradation
- Add checkBackendHealth/waitForBackendHealth utilities for tests
- Add data/.api-key and data/credentials.json to .gitignore
2026-01-11 03:22:43 +05:30
Kacper
c24e6207d0 feat: Enhance ShellSyntaxEditor and WorktreesSection with new features
This commit introduces several improvements to the ShellSyntaxEditor and WorktreesSection components:

1. **ShellSyntaxEditor**: Added a `maxHeight` prop to allow for customizable maximum height, enhancing layout flexibility.
2. **WorktreesSection**:
   - Introduced state management for original script content and existence checks for scripts.
   - Implemented save, reset, and delete functionalities for initialization scripts, providing users with better control over their scripts.
   - Added action buttons for saving, resetting, and deleting scripts, along with loading indicators for improved user feedback.
   - Enhanced UI to indicate unsaved changes, improving user awareness of script modifications.

These changes improve the user experience by providing more robust script management capabilities and a more responsive UI.
2026-01-10 22:46:06 +01:00
Kacper
6c412cd367 feat: Add run init script functionality for worktrees
This commit introduces the ability to run initialization scripts for worktrees, enhancing the setup process. Key changes include:

1. **New API Endpoint**: Added a POST endpoint to run the init script for a specified worktree.
2. **Worktree Routes**: Updated worktree routes to include the new run init script handler.
3. **Init Script Service**: Enhanced the Init Script Service to support running scripts asynchronously and handling errors.
4. **UI Updates**: Added UI components to check for the existence of init scripts and trigger their execution, providing user feedback through toast notifications.
5. **Event Handling**: Implemented event handling for init script execution status, allowing real-time updates in the UI.

This feature streamlines the workflow for users by automating the execution of setup scripts, improving overall project management.
2026-01-10 22:36:50 +01:00
DhanushSantosh
89a960629a fix: Improve E2E test workflow for better backend debugging
Enhanced backend server startup in CI:
- Track server PID and process status
- Save logs to backend.log for debugging
- Better error detection with process monitoring
- Added cleanup step to kill server process
- Print backend logs on test failure

Improves reliability of E2E tests by providing better diagnostics when backend fails to start
2026-01-11 02:58:56 +05:30
Kacper
05d96a7d6e feat: Implement worktree initialization script functionality
This commit introduces a new feature for managing worktree initialization scripts, allowing users to configure and execute scripts upon worktree creation. Key changes include:

1. **New API Endpoints**: Added endpoints for getting, setting, and deleting init scripts.
2. **Worktree Routes**: Updated worktree routes to include init script handling.
3. **Init Script Service**: Created a service to execute the init scripts asynchronously, with support for cross-platform compatibility.
4. **UI Components**: Added UI components for displaying and editing init scripts, including a dedicated section in the settings view.
5. **Event Handling**: Implemented event handling for init script execution status, providing real-time feedback in the UI.

This enhancement improves the user experience by allowing automated setup processes for new worktrees, streamlining project workflows.
2026-01-10 22:19:34 +01:00
DhanushSantosh
41144ff1fa Merge recovered upstream commits including worktree enhancement
This brings back commits that were accidentally overwritten during a force push:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Plus any other changes from upstream/v0.10.0rc at that time

The merge ensures all changes are preserved while keeping the history intact.
2026-01-11 02:27:21 +05:30
DhanushSantosh
360cddcb91 Merge upstream commits including worktree enhancement
This recovers the commits that were accidentally overwritten during force push.

Included:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Any other commits from upstream/v0.10.0rc at that point
2026-01-11 02:27:08 +05:30
DhanushSantosh
427832e72e fix: Display correct provider icons for all OpenCode/Bedrock models
The issue was that ALL OpenCode models were showing the OpenCode icon, regardless
of their actual underlying provider. This fix ensures each model shows its
authentic brand icon.

Changes:

1. **model-constants.ts** - Fixed provider field assignment
   - Changed provider from hardcoded 'opencode' to actual config.provider
   - Now correctly maps: opencode/big-pickle, amazon-bedrock/anthropic.*, etc.

2. **phase-model-selector.tsx** - Added provider-specific icon logic
   - Added imports for DeepSeekIcon, NovaIcon, QwenIcon, MistralIcon, MetaIcon
   - Added ProviderIcon selector based on model.provider field
   - Each model type now displays its correct provider icon

3. **provider-icon.tsx** - Updated icon detection and mapping
   - Enhanced getUnderlyingModelIcon() to detect specific Bedrock providers:
     * amazon-bedrock/anthropic.* → anthropic icon
     * amazon-bedrock/deepseek.* → deepseek icon
     * amazon-bedrock/nova.* → nova icon
     * amazon-bedrock/meta.* or llama → meta icon
     * amazon-bedrock/mistral.* → mistral icon
     * amazon-bedrock/qwen.* → qwen icon
     * opencode/* → opencode icon
   - Added meta and mistral to PROVIDER_ICON_KEYS
   - Added placeholder definitions for meta/mistral in PROVIDER_ICON_DEFINITIONS
   - Updated iconMap to include all provider icons
   - Set OpenCode icon to official brand color (#6366F1 indigo)

Result: All model selectors and kanban cards now show correct brand icons
for each OpenCode model (DeepSeek whale, Amazon Nova sparkle, Qwen star, etc.)
2026-01-11 02:18:55 +05:30
webdevcody
27c60658f7 Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 15:41:50 -05:00
webdevcody
fa8ae149d3 feat: enhance worktree listing by scanning external directories
- Implemented a new function to scan the .worktrees directory for worktrees that may exist outside of git's management, allowing for better detection of externally created or corrupted worktrees.
- Updated the /list endpoint to include discovered worktrees in the response, improving the accuracy of the worktree listing.
- Added logging for discovered worktrees to aid in debugging and tracking.
- Cleaned up and organized imports in the list.ts file for better maintainability.
2026-01-10 15:41:35 -05:00
DhanushSantosh
0c19beb11c fix: Set OpenCode icon to official brand color (#6366F1 indigo)
The OpenCode icon now uses the official indigo brand color (#6366F1)
from opencode.ai instead of white, making it visible in both light
and dark themes.
2026-01-11 01:54:17 +05:30
DhanushSantosh
e34e4a59e9 fix: Resolve TypeScript error assigning part.result to string field
Fix TS2322 error where finishEvent.part?.result (typed as {}) was being
assigned to result.result (typed as string).

Solution: Safely handle arbitrary result payloads by:
1. Reading raw value as unknown from Record<string, unknown>
2. Checking if it's a string, otherwise JSON.stringify()

This ensures type safety while supporting both string and object results
from the OpenCode CLI.
2026-01-11 01:35:32 +05:30
DhanushSantosh
7cc092cd59 test: Fix remaining OpenCode provider test failures
Fix all 8 remaining test failures:

1. Update executeQuery integration tests to use new OpenCode event format:
   - text events use type='text' with part.text
   - tool_call events use type='tool_call' with part containing call_id, name, args
   - tool_result events use type='tool_result' with part
   - step_finish events use type='step_finish' with part
   - Use sessionID field instead of session_id

2. Fix step_finish event handling:
   - Include result field in successful completion response
   - Check for reason === 'error' to detect failed steps
   - Provide default error message when error field is missing

3. Update model test expectations:
   - Model 'opencode/big-pickle' stays as-is (not stripped to 'big-pickle')
   - PROVIDER_PREFIXES only strips 'opencode-' prefix, not 'opencode/'

All 84 tests now pass successfully!
2026-01-11 01:23:42 +05:30
DhanushSantosh
51cd7156d2 test: Update OpenCode provider tests to use new event format
Update normalizeEvent tests to match new OpenCode API:
- text events use type='text' with part.text instead of text-delta
- tool_call events use type='tool_call' with part containing call_id, name, args
- tool_result events use type='tool_result' with part
- tool_error events use type='tool_error' with part
- step_finish events use type='step_finish' with part

Update buildCliArgs tests:
- Remove expectations for -q flag (no longer used)
- Remove expectations for -c flag (cwd set at subprocess level)
- Remove expectations for - final arg (prompt via stdin)
- Update format to 'json' instead of 'stream-json'

Remaining 8 test failures are in integration tests that use executeQuery
and require more extensive mock data updates.
2026-01-11 01:07:55 +05:30
DhanushSantosh
1dc843d2d0 Merge upstream/v0.10.0rc into feature/codex-cli
Sync with latest upstream changes:
- feat: enhance feature dialogs with planning mode tooltips
- refactor: remove kanbanCardDetailLevel from settings and UI components
- refactor: streamline feature addition in BoardView and KanbanBoard
- feat: implement dashboard view and enhance sidebar navigation
2026-01-11 00:59:36 +05:30
DhanushSantosh
4040bef4b8 feat: Add OpenCode provider integration with official brand icons
This commit integrates OpenCode as a new AI provider and updates all provider
icons with their official brand colors for better visual recognition.

**OpenCode Provider Integration:**
- Add OpencodeProvider class with CLI-based execution
- Support for OpenCode native models (opencode/) and Bedrock models
- Proper event normalization for OpenCode streaming format
- Correct CLI arguments: --format json (not stream-json)
- Event structure: type, part.text, sessionID fields

**Provider Icons:**
- Add official OpenCode icon (white square frame from opencode.ai)
- Add DeepSeek icon (blue whale #4D6BFE)
- Add Qwen icon (purple gradient #6336E7 → #6F69F7)
- Add Amazon Nova icon (AWS orange #FF9900)
- Add Mistral icon (rainbow gradient gold→red)
- Add Meta icon (blue #1877F2)
- Update existing icons with brand colors:
  * Claude: #d97757 (terra cotta)
  * OpenAI/Codex: #74aa9c (teal-green)
  * Cursor: #5E9EFF (bright blue)

**Settings UI Updates:**
- Update settings navigation to show OpenCode icon
- Update model configuration to use provider-specific icons
- Differentiate between OpenCode free models and Bedrock-hosted models
- All AI models now display their official brand logos

**Model Resolution:**
- Add isOpencodeModel() function to detect OpenCode models
- Support patterns: opencode/, opencode-*, amazon-bedrock/*
- Update getProviderFromModel to recognize opencode provider

Note: Some unit tests in opencode-provider.test.ts need updating to match
the new event structure and CLI argument format.
2026-01-11 00:56:25 +05:30
Kacper
e64a850f57 feat: enhance feature dialogs with planning mode tooltips
- Integrated Tooltip components into AddFeatureDialog, EditFeatureDialog, and MassEditDialog to provide user guidance on planning mode availability.
- Updated the rendering logic for planning mode selection to conditionally display tooltips when planning modes are not supported.
- Improved user experience by clarifying the conditions under which planning modes can be utilized.
2026-01-10 20:08:17 +01:00
webdevcody
555523df38 refactor: remove kanbanCardDetailLevel from settings and UI components
- Eliminated kanbanCardDetailLevel from the SettingsService, app state, and various UI components including BoardView and BoardControls.
- Updated related hooks and API client to reflect the removal of kanbanCardDetailLevel.
- Cleaned up imports and props associated with kanbanCardDetailLevel across the codebase for improved clarity and maintainability.
2026-01-10 13:39:45 -05:00
webdevcody
dd882139f3 refactor: streamline feature addition in BoardView and KanbanBoard
- Removed the add feature shortcut from BoardHeader and integrated the add feature functionality directly into the KanbanBoard and BoardView components.
- Added a floating action button for adding features in the KanbanBoard's backlog column.
- Updated KanbanColumn to support a footer action for enhanced UI consistency.
- Cleaned up unused imports and props related to feature addition.
2026-01-10 13:19:21 -05:00
webdevcody
a67b8c6109 feat: implement dashboard view and enhance sidebar navigation
- Added a new DashboardView component for improved project management.
- Updated sidebar navigation to redirect to the dashboard instead of the home page.
- Removed ProjectActions from the sidebar for a cleaner interface.
- Enhanced BoardView to conditionally render the WorktreePanel based on visibility settings.
- Introduced worktree panel visibility management per project in the app store.
- Updated project settings to include worktree panel visibility and favorite status.
- Adjusted navigation logic to ensure users are directed to the appropriate view based on project state.
2026-01-10 13:08:59 -05:00
webdevcody
134208dab6 Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 12:27:33 -05:00
webdevcody
887343d232 Merge branch 'main' into v0.10.0rc 2026-01-10 12:27:29 -05:00
Web Dev Cody
299b838400 Merge pull request #404 from AutoMaker-Org/fix/security-vulnerability
chore: update @modelcontextprotocol/sdk in server dep
2026-01-10 12:25:29 -05:00
Kacper
c5d0a8be7d chore: update @modelcontextprotocol/sdk in server dep 2026-01-10 17:18:08 +01:00
Shirone
fe433a84c9 Merge pull request #402 from AutoMaker-Org/fix/security-vulnerability-in-dep
fix: security vulnerability in server
2026-01-10 15:52:00 +00:00
Kacper
543aa7a27b chore: update @modelcontextprotocol/sdk in server dep 2026-01-10 16:42:26 +01:00
Shirone
36ddf0513b Merge pull request #400 from AutoMaker-Org/feat/codex-usage
feat: improve codex plan and usage detection
2026-01-10 15:29:33 +00:00
Shirone
c99883e634 fix: address PR review comments
- Add error logging to CodexProvider auth check instead of silent failure
- Fix cachedAt timestamp to return actual cache time instead of request time
- Replace misleading hardcoded rate limit values (100) with sentinel value (-1)
- Fix unused parameter warning in codex routes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-10 16:26:12 +01:00
Shirone
604f98b08f chore: ignore .codex/config.toml to prevent API key exposure
Move .codex/config.toml to .gitignore to prevent accidental commits of
API keys. The file will remain local to each user's setup.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-10 16:18:19 +01:00
Shirone
c5009a0333 refactor: remove Codex credits handling from services and UI components
- Eliminated CodexCreditsSnapshot interface and related logic from CodexUsageService and UI components.
- Updated CodexUsageSection to display only plan type, removing credits information for a cleaner interface.
- Streamlined Codex usage formatting functions by removing unused credit formatting logic.

These changes simplify the Codex usage management by focusing on plan types, enhancing clarity and maintainability.
2026-01-10 16:18:19 +01:00
Shirone
99b05d35a2 feat: update Codex services and UI components for enhanced model management
- Bumped version numbers for @automaker/server and @automaker/ui to 0.9.0 in package-lock.json.
- Introduced CodexAppServerService and CodexModelCacheService to manage communication with the Codex CLI's app-server and cache model data.
- Updated CodexUsageService to utilize app-server for fetching usage data.
- Enhanced Codex routes to support fetching available models and integrated model caching.
- Improved UI components to dynamically load and display Codex models, including error handling and loading states.
- Added new API methods for fetching Codex models and integrated them into the app store for state management.

These changes improve the overall functionality and user experience of the Codex integration, ensuring efficient model management and data retrieval.
2026-01-10 16:18:08 +01:00
Web Dev Cody
a3ecc6fe02 Merge pull request #394 from AutoMaker-Org/remove-profiles
refactor: remove AI profile functionality and related components
2026-01-09 19:25:15 -05:00
webdevcody
fc20dd5ad4 refactor: remove AI profile functionality and related components
- Deleted the AI profile management feature, including all associated views, hooks, and types.
- Updated settings and navigation components to remove references to AI profiles.
- Adjusted local storage and settings synchronization logic to reflect the removal of AI profiles.
- Cleaned up tests and utility functions that were dependent on the AI profile feature.

These changes streamline the application by eliminating unused functionality, improving maintainability and reducing complexity.
2026-01-09 19:21:30 -05:00
Kacper
eb94e4de72 feat: enhance CodexUsageService to fetch usage data from app-server JSON-RPC API
- Implemented a new method to retrieve usage data from the Codex app-server, providing real-time data and improving reliability.
- Updated the fetchUsageData method to prioritize app-server data over fallback methods.
- Added detailed logging for better traceability and debugging.
- Removed unused methods related to OpenAI API usage and Codex CLI requests, streamlining the service.

These changes enhance the functionality and robustness of the CodexUsageService, ensuring accurate usage statistics retrieval.
2026-01-10 00:11:42 +01:00
Stephan Rieche
0fa5fdd478 docs: add comprehensive JSDoc docstrings for pipeline resume methods
Add detailed JSDoc documentation to meet 80% docstring coverage requirement:

- PipelineStatusInfo interface: Document all properties with types and descriptions
- resumePipelineFeature(): Document edge case handling and parameters
- resumeFromPipelineStep(): Document complete pipeline resume workflow
- detectPipelineStatus(): Document pipeline status detection scenarios

Each docstring includes:
- Clear method purpose and behavior
- All parameters with types and descriptions
- Return value documentation
- Error conditions and exceptions
- @private tags for internal methods

This improves code maintainability and helps developers understand the
complex pipeline resume logic.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 15:00:25 +01:00
Stephan Rieche
472342c246 chore: run prettier to fix formatting
Auto-format all files to fix format-check CI failure.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:56:39 +01:00
Stephan Rieche
71e03c2a13 fix: restore Verify button fallback with honest labeling
Re-add the onVerify fallback for in_progress/pipeline features without context,
but fix the misleading UX issue where the button said 'Resume' but executed
verification (tests/build).

Changes:
- Restore onVerify fallback as 3rd option after skipTests Verify and Resume
- Change button label from 'Resume' to 'Verify' (honest!)
- Change icon from PlayCircle to CheckCircle2 (matches action)
- Keep same green styling for consistency

This makes sense because if a feature is in_progress but has no context,
it likely completed execution but the context was deleted. User should be
able to verify it (run tests/build) rather than having no action available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:55:05 +01:00
Stephan Rieche
c3403c033c Merge upstream/main into fix/pipeline-resume-edge-cases
Resolved conflict in card-actions.tsx by:
- Keeping pipeline_status check from our branch (supports pipeline_step_* statuses)
- Adopting simplified Resume button logic from main (removed hasContext check and onVerify fallback)

The Resume button now appears for features with:
- status === 'in_progress'
- status.startsWith('pipeline_')

This combines our pipeline support fix with main's simplified button rendering logic.
2026-01-02 13:45:12 +01:00
Stephan Rieche
2a87d55519 fix: show Resume button for features stuck in pipeline steps
Enable the Resume button to appear for features with pipeline status
(e.g., 'pipeline_step_xyz') in addition to 'in_progress' status.

Previously, features that crashed during pipeline execution would show
a 'testing' status badge but no Resume button, making it impossible to
resume them from the UI.

Changes:
- Update card-actions.tsx condition to include pipeline_ status check
- Resume button now shows for both in_progress and pipeline_step_* statuses
- Maintains all existing behavior for other feature states

This fixes the UX issue where users could see a feature was stuck in a
pipeline step but had no way to resume it.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:22:11 +01:00
Stephan Rieche
2d309f6833 perf: eliminate redundant file read in detectPipelineStatus
Remove unnecessary call to pipelineService.getStep() which was causing
a redundant file read of pipeline.json. The config is already loaded at
line 2807, so we can find the step directly from the in-memory config.

Changes:
- Sort config.steps first
- Find stepIndex using findIndex()
- Get step directly from sortedSteps[stepIndex] instead of calling getStep()
- Simplify null check to only check !step instead of stepIndex === -1 || !step

This optimization reduces file I/O operations and improves performance when
resuming pipeline features.

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:04:01 +01:00
Stephan Rieche
7a2a3ef500 refactor: create PipelineStatusInfo interface to eliminate type duplication
Define a dedicated PipelineStatusInfo interface and use it consistently in both
resumePipelineFeature() parameter and detectPipelineStatus() return type.

This eliminates duplicate inline type definitions and improves maintainability
by ensuring both locations always stay in sync. Any future changes to the
pipeline status structure only need to be made in one place.

Changes:
- Add PipelineStatusInfo interface definition
- Replace inline type in resumePipelineFeature() with PipelineStatusInfo
- Replace inline return type in detectPipelineStatus() with PipelineStatusInfo

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:02:43 +01:00
Stephan Rieche
3ff9658723 refactor: remove unnecessary runningFeatures.delete() calls
Remove confusing and unnecessary delete calls from resumeFeature() and
resumePipelineFeature() methods. These were leftovers from a previous
implementation where temporary entries were added to runningFeatures.

The resumeFeature() method already ensures the feature is not running
at the start (via has() check that throws if already running), so these
delete calls serve no purpose and only add confusion about state management.

Removed delete calls from:
- resumeFeature() non-pipeline flow (line 748)
- resumePipelineFeature() no-context case (line 798)
- resumePipelineFeature() step-not-found case (line 822)

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 12:44:19 +01:00
Stephan Rieche
c587947de6 refactor: optimize feature loading in pipeline resume
Reduce redundant file reads by loading the feature object once and passing
it down the call chain instead of reloading it multiple times.

Changes:
- Pass feature object to resumePipelineFeature() instead of featureId
- Pass feature object to resumeFromPipelineStep() instead of featureId
- Remove redundant loadFeature() calls from these methods
- Add FeatureStatusWithPipeline import for type safety

This improves performance by eliminating unnecessary file I/O operations
and makes the data flow clearer.

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 12:42:04 +01:00
Stephan Rieche
a9403651d4 fix: handle pipeline resume edge cases and improve robustness
This commit fixes several edge cases when resuming features stuck in pipeline steps:

- Detect if feature is stuck in a pipeline step during resume
- Handle case where context file is missing (restart from beginning)
- Handle case where pipeline step no longer exists in config
- Add dedicated resumePipelineFeature() method for pipeline-specific resume logic
- Add detectPipelineStatus() to extract and validate pipeline step information
- Add resumeFromPipelineStep() to resume from a specific pipeline step index
- Update board view to check context availability for features with pipeline status

Edge cases handled:
1. No context file → restart entire pipeline from beginning
2. Step no longer exists in config → complete feature without pipeline
3. Valid step exists → resume from the crashed step

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 00:58:32 +01:00
Ramiro Rivera
d2f64f10ff test: add tests for ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN passthrough
- apps/server/tests/unit/providers/claude-provider.test.ts

Verify custom endpoint environment variables are passed to the SDK.
2026-01-01 13:43:12 +01:00
Ramiro Rivera
9fe5b485f8 feat: support ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN for custom endpoints
- apps/server/src/providers/claude-provider.ts

Add ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN to the environment variable
allowlist, enabling use of LLM gateways (LiteLLM, Helicone) and Anthropic-
compatible providers (GLM 4.7, Minimax M2.1, etc.).

Closes #338
2026-01-01 13:35:10 +01:00
706 changed files with 87785 additions and 18481 deletions

View File

@@ -0,0 +1,108 @@
name: Feature Request
description: Suggest a new feature or enhancement for Automaker
title: '[Feature]: '
labels: ['enhancement']
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to suggest a feature! Please fill out the form below to help us understand your request.
- type: dropdown
id: feature-area
attributes:
label: Feature Area
description: Which area of Automaker does this feature relate to?
options:
- UI/UX (User Interface)
- Agent/AI
- Kanban Board
- Git/Worktree Management
- Project Management
- Settings/Configuration
- Documentation
- Performance
- Other
default: 0
validations:
required: true
- type: dropdown
id: priority
attributes:
label: Priority
description: How important is this feature to your workflow?
options:
- Nice to have
- Would improve my workflow
- Critical for my use case
default: 0
validations:
required: true
- type: textarea
id: problem-statement
attributes:
label: Problem Statement
description: Is your feature request related to a problem? Please describe the problem you're trying to solve.
placeholder: A clear and concise description of what the problem is. Ex. I'm always frustrated when...
validations:
required: true
- type: textarea
id: proposed-solution
attributes:
label: Proposed Solution
description: Describe the solution you'd like to see implemented.
placeholder: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives-considered
attributes:
label: Alternatives Considered
description: Describe any alternative solutions or workarounds you've considered.
placeholder: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: use-cases
attributes:
label: Use Cases
description: Describe specific scenarios where this feature would be useful.
placeholder: |
1. When working on...
2. As a user who needs to...
3. In situations where...
validations:
required: false
- type: textarea
id: mockups
attributes:
label: Mockups/Screenshots
description: If applicable, add mockups, wireframes, or screenshots to help illustrate your feature request.
placeholder: Drag and drop images here or paste image URLs
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Add any other context, references, or examples about the feature request here.
placeholder: Any additional information that might be helpful...
validations:
required: false
- type: checkboxes
id: terms
attributes:
label: Checklist
options:
- label: I have searched existing issues to ensure this feature hasn't been requested already
required: true
- label: I have provided a clear description of the problem and proposed solution
required: true

View File

@@ -41,7 +41,8 @@ runs:
# Use npm install instead of npm ci to correctly resolve platform-specific
# optional dependencies (e.g., @tailwindcss/oxide, lightningcss binaries)
# Skip scripts to avoid electron-builder install-app-deps which uses too much memory
run: npm install --ignore-scripts
# Use --force to allow platform-specific dev dependencies like dmg-license on non-darwin platforms
run: npm install --ignore-scripts --force
- name: Install Linux native bindings
shell: bash

View File

@@ -37,7 +37,14 @@ jobs:
git config --global user.email "ci@example.com"
- name: Start backend server
run: npm run start --workspace=apps/server &
run: |
echo "Starting backend server..."
# Start server in background and save PID
npm run start --workspace=apps/server > backend.log 2>&1 &
SERVER_PID=$!
echo "Server started with PID: $SERVER_PID"
echo "SERVER_PID=$SERVER_PID" >> $GITHUB_ENV
env:
PORT: 3008
NODE_ENV: test
@@ -53,21 +60,70 @@ jobs:
- name: Wait for backend server
run: |
echo "Waiting for backend server to be ready..."
# Check if server process is running
if [ -z "$SERVER_PID" ]; then
echo "ERROR: Server PID not found in environment"
cat backend.log 2>/dev/null || echo "No backend log found"
exit 1
fi
# Check if process is actually running
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "ERROR: Server process $SERVER_PID is not running!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "=== Recent system logs ==="
dmesg 2>/dev/null | tail -20 || echo "No dmesg available"
exit 1
fi
# Wait for health endpoint
for i in {1..60}; do
if curl -s -f http://localhost:3008/api/health > /dev/null 2>&1; then
echo "Backend server is ready!"
curl -s http://localhost:3008/api/health | jq . 2>/dev/null || echo "Health check response: $(curl -s http://localhost:3008/api/health 2>/dev/null || echo 'No response')"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "Health check response:"
curl -s http://localhost:3008/api/health | jq . 2>/dev/null || echo "Health check: $(curl -s http://localhost:3008/api/health 2>/dev/null || echo 'No response')"
exit 0
fi
# Check if server process is still running
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "ERROR: Server process died during wait!"
echo "=== Backend logs ==="
cat backend.log
exit 1
fi
echo "Waiting... ($i/60)"
sleep 1
done
echo "Backend server failed to start!"
echo "Checking server status..."
echo "ERROR: Backend server failed to start within 60 seconds!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "=== Process status ==="
ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
echo ""
echo "=== Port status ==="
netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
echo "Testing health endpoint..."
lsof -i :3008 2>/dev/null || echo "lsof not available or port not in use"
echo ""
echo "=== Health endpoint test ==="
curl -v http://localhost:3008/api/health 2>&1 || echo "Health endpoint failed"
# Kill the server process if it's still hanging
if kill -0 $SERVER_PID 2>/dev/null; then
echo ""
echo "Killing stuck server process..."
kill -9 $SERVER_PID 2>/dev/null || true
fi
exit 1
- name: Run E2E tests
@@ -81,6 +137,18 @@ jobs:
# Keep UI-side login/defaults consistent
AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
- name: Print backend logs on failure
if: failure()
run: |
echo "=== E2E Tests Failed - Backend Logs ==="
cat backend.log 2>/dev/null || echo "No backend log found"
echo ""
echo "=== Process status at failure ==="
ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
echo ""
echo "=== Port status ==="
netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
- name: Upload Playwright report
uses: actions/upload-artifact@v4
if: always()
@@ -98,3 +166,13 @@ jobs:
apps/ui/test-results/
retention-days: 7
if-no-files-found: ignore
- name: Cleanup - Kill backend server
if: always()
run: |
if [ -n "$SERVER_PID" ]; then
echo "Cleaning up backend server (PID: $SERVER_PID)..."
kill $SERVER_PID 2>/dev/null || true
kill -9 $SERVER_PID 2>/dev/null || true
echo "Backend server cleanup complete"
fi

View File

@@ -25,7 +25,7 @@ jobs:
cache-dependency-path: package-lock.json
- name: Install dependencies
run: npm install --ignore-scripts
run: npm install --ignore-scripts --force
- name: Check formatting
run: npm run format:check

View File

@@ -35,6 +35,11 @@ jobs:
with:
check-lockfile: 'true'
- name: Install RPM build tools (Linux)
if: matrix.os == 'ubuntu-latest'
shell: bash
run: sudo apt-get update && sudo apt-get install -y rpm
- name: Build Electron app (macOS)
if: matrix.os == 'macos-latest'
shell: bash
@@ -57,7 +62,9 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: macos-builds
path: apps/ui/release/*.{dmg,zip}
path: |
apps/ui/release/*.dmg
apps/ui/release/*.zip
retention-days: 30
- name: Upload Windows artifacts
@@ -73,7 +80,10 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: linux-builds
path: apps/ui/release/*.{AppImage,deb}
path: |
apps/ui/release/*.AppImage
apps/ui/release/*.deb
apps/ui/release/*.rpm
retention-days: 30
upload:
@@ -104,8 +114,14 @@ jobs:
uses: softprops/action-gh-release@v2
with:
files: |
artifacts/macos-builds/*
artifacts/windows-builds/*
artifacts/linux-builds/*
artifacts/macos-builds/*.dmg
artifacts/macos-builds/*.zip
artifacts/macos-builds/*.blockmap
artifacts/windows-builds/*.exe
artifacts/windows-builds/*.blockmap
artifacts/linux-builds/*.AppImage
artifacts/linux-builds/*.deb
artifacts/linux-builds/*.rpm
artifacts/linux-builds/*.blockmap
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

12
.gitignore vendored
View File

@@ -73,6 +73,9 @@ blob-report/
!.env.example
!.env.local.example
# Codex config (contains API keys)
.codex/config.toml
# TypeScript
*.tsbuildinfo
@@ -84,4 +87,11 @@ docker-compose.override.yml
.claude/hans/
pnpm-lock.yaml
yarn.lock
yarn.lock
# Fork-specific workflow files (should never be committed)
# API key files
data/.api-key
data/credentials.json
data/
.codex/

View File

@@ -31,7 +31,12 @@ fi
# Ensure common system paths are in PATH (for systems without nvm)
# This helps find node/npm installed via Homebrew, system packages, etc.
export PATH="$PATH:/usr/local/bin:/opt/homebrew/bin:/usr/bin"
if [ -n "$WINDIR" ]; then
export PATH="$PATH:/c/Program Files/nodejs:/c/Program Files (x86)/nodejs"
export PATH="$PATH:$APPDATA/npm:$LOCALAPPDATA/Programs/nodejs"
else
export PATH="$PATH:/usr/local/bin:/opt/homebrew/bin:/usr/bin"
fi
# Run lint-staged - works with or without nvm
# Prefer npx, fallback to npm exec, both work with system-installed Node.js

View File

@@ -166,7 +166,11 @@ Use `resolveModelString()` from `@automaker/model-resolver` to convert model ali
## Environment Variables
- `ANTHROPIC_API_KEY` - Anthropic API key (or use Claude Code CLI auth)
- `HOST` - Host to bind server to (default: 0.0.0.0)
- `HOSTNAME` - Hostname for user-facing URLs (default: localhost)
- `PORT` - Server port (default: 3008)
- `DATA_DIR` - Data storage directory (default: ./data)
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
- `AUTOMAKER_MOCK_AGENT=true` - Enable mock agent mode for CI testing
- `AUTOMAKER_AUTO_LOGIN=true` - Skip login prompt in development (disabled when NODE_ENV=production)
- `VITE_HOSTNAME` - Hostname for frontend API URLs (default: localhost)

View File

@@ -24,6 +24,7 @@ For complete details on contribution terms and rights assignment, please review
- [Development Setup](#development-setup)
- [Project Structure](#project-structure)
- [Pull Request Process](#pull-request-process)
- [Branching Strategy (RC Branches)](#branching-strategy-rc-branches)
- [Branch Naming Convention](#branch-naming-convention)
- [Commit Message Format](#commit-message-format)
- [Submitting a Pull Request](#submitting-a-pull-request)
@@ -186,6 +187,59 @@ automaker/
This section covers everything you need to know about contributing changes through pull requests, from creating your branch to getting your code merged.
### Branching Strategy (RC Branches)
Automaker uses **Release Candidate (RC) branches** for all development work. Understanding this workflow is essential before contributing.
**How it works:**
1. **All development happens on RC branches** - We maintain version-specific RC branches (e.g., `v0.10.0rc`, `v0.11.0rc`) where all active development occurs
2. **RC branches are eventually merged to main** - Once an RC branch is stable and ready for release, it gets merged into `main`
3. **Main branch is for releases only** - The `main` branch contains only released, stable code
**Before creating a PR:**
1. **Check for the latest RC branch** - Before starting work, check the repository for the current RC branch:
```bash
git fetch upstream
git branch -r | grep rc
```
2. **Base your work on the RC branch** - Create your feature branch from the latest RC branch, not from `main`:
```bash
# Find the latest RC branch (e.g., v0.11.0rc)
git checkout upstream/v0.11.0rc
git checkout -b feature/your-feature-name
```
3. **Target the RC branch in your PR** - When opening your pull request, set the base branch to the current RC branch, not `main`
**Example workflow:**
```bash
# 1. Fetch latest changes
git fetch upstream
# 2. Check for RC branches
git branch -r | grep rc
# Output: upstream/v0.11.0rc
# 3. Create your branch from the RC
git checkout -b feature/add-dark-mode upstream/v0.11.0rc
# 4. Make your changes and commit
git commit -m "feat: Add dark mode support"
# 5. Push to your fork
git push origin feature/add-dark-mode
# 6. Open PR targeting the RC branch (v0.11.0rc), NOT main
```
**Important:** PRs opened directly against `main` will be asked to retarget to the current RC branch.
### Branch Naming Convention
We use a consistent branch naming pattern to keep our repository organized:
@@ -275,14 +329,14 @@ Follow these steps to submit your contribution:
#### 1. Prepare Your Changes
Ensure you've synced with the latest upstream changes:
Ensure you've synced with the latest upstream changes from the RC branch:
```bash
# Fetch latest changes from upstream
git fetch upstream
# Rebase your branch on main (if needed)
git rebase upstream/main
# Rebase your branch on the current RC branch (if needed)
git rebase upstream/v0.11.0rc # Use the current RC branch name
```
#### 2. Run Pre-submission Checks
@@ -314,18 +368,19 @@ git push origin feature/your-feature-name
1. Go to your fork on GitHub
2. Click "Compare & pull request" for your branch
3. Ensure the base repository is `AutoMaker-Org/automaker` and base branch is `main`
3. **Important:** Set the base repository to `AutoMaker-Org/automaker` and the base branch to the **current RC branch** (e.g., `v0.11.0rc`), not `main`
4. Fill out the PR template completely
#### PR Requirements Checklist
Your PR should include:
- [ ] **Targets the current RC branch** (not `main`) - see [Branching Strategy](#branching-strategy-rc-branches)
- [ ] **Clear title** describing the change (use conventional commit format)
- [ ] **Description** explaining what changed and why
- [ ] **Link to related issue** (if applicable): `Closes #123` or `Fixes #456`
- [ ] **All CI checks passing** (format, lint, build, tests)
- [ ] **No merge conflicts** with main branch
- [ ] **No merge conflicts** with the RC branch
- [ ] **Tests included** for new functionality
- [ ] **Documentation updated** if adding/changing public APIs

253
DEVELOPMENT_WORKFLOW.md Normal file
View File

@@ -0,0 +1,253 @@
# Development Workflow
This document defines the standard workflow for keeping a branch in sync with the upstream
release candidate (RC) and for shipping feature work. It is paired with `check-sync.sh`.
## Quick Decision Rule
1. Ask the user to select a workflow:
- **Sync Workflow** → you are maintaining the current RC branch with fixes/improvements
and will push the same fixes to both origin and upstream RC when you have local
commits to publish.
- **PR Workflow** → you are starting new feature work on a new branch; upstream updates
happen via PR only.
2. After the user selects, run:
```bash
./check-sync.sh
```
3. Use the status output to confirm alignment. If it reports **diverged**, default to
merging `upstream/<TARGET_RC>` into the current branch and preserving local commits.
For Sync Workflow, when the working tree is clean and you are behind upstream RC,
proceed with the fetch + merge without asking for additional confirmation.
## Target RC Resolution
The target RC is resolved dynamically so the workflow stays current as the RC changes.
Resolution order:
1. Latest `upstream/v*rc` branch (auto-detected)
2. `upstream/HEAD` (fallback)
3. If neither is available, you must pass `--rc <branch>`
Override for a single run:
```bash
./check-sync.sh --rc <rc-branch>
```
## Pre-Flight Checklist
1. Confirm a clean working tree:
```bash
git status
```
2. Confirm the current branch:
```bash
git branch --show-current
```
3. Ensure remotes exist (origin + upstream):
```bash
git remote -v
```
## Sync Workflow (Upstream Sync)
Use this flow when you are updating the current branch with fixes or improvements and
intend to keep origin and upstream RC in lockstep.
1. **Check sync status**
```bash
./check-sync.sh
```
2. **Update from upstream RC before editing (no pulls)**
- **Behind upstream RC** → fetch and merge RC into your branch:
```bash
git fetch upstream
git merge upstream/<TARGET_RC> --no-edit
```
When the working tree is clean and the user selected Sync Workflow, proceed without
an extra confirmation prompt.
- **Diverged** → stop and resolve manually.
3. **Resolve conflicts if needed**
- Handle conflicts intelligently: preserve upstream behavior and your local intent.
4. **Make changes and commit (if you are delivering fixes)**
```bash
git add -A
git commit -m "type: description"
```
5. **Build to verify**
```bash
npm run build:packages
npm run build
```
6. **Push after a successful merge to keep remotes aligned**
- If you only merged upstream RC changes, push **origin only** to sync your fork:
```bash
git push origin <branch>
```
- If you have local fixes to publish, push **origin + upstream**:
```bash
git push origin <branch>
git push upstream <branch>:<TARGET_RC>
```
- Always ask the user which push to perform.
- Origin (origin-only sync):
```bash
git push origin <branch>
```
- Upstream RC (publish the same fixes when you have local commits):
```bash
git push upstream <branch>:<TARGET_RC>
```
7. **Re-check sync**
```bash
./check-sync.sh
```
## PR Workflow (Feature Work)
Use this flow only for new feature work on a new branch. Do not push to upstream RC.
1. **Create or switch to a feature branch**
```bash
git checkout -b <branch>
```
2. **Make changes and commit**
```bash
git add -A
git commit -m "type: description"
```
3. **Merge upstream RC before shipping**
```bash
git merge upstream/<TARGET_RC> --no-edit
```
4. **Build and/or test**
```bash
npm run build:packages
npm run build
```
5. **Push to origin**
```bash
git push -u origin <branch>
```
6. **Create or update the PR**
- Use `gh pr create` or the GitHub UI.
7. **Review and follow-up**
- Apply feedback, commit changes, and push again.
- Re-run `./check-sync.sh` if additional upstream sync is needed.
## Conflict Resolution Checklist
1. Identify which changes are from upstream vs. local.
2. Preserve both behaviors where possible; avoid dropping either side.
3. Prefer minimal, safe integrations over refactors.
4. Re-run build commands after resolving conflicts.
5. Re-run `./check-sync.sh` to confirm status.
## Build/Test Matrix
- **Sync Workflow**: `npm run build:packages` and `npm run build`.
- **PR Workflow**: `npm run build:packages` and `npm run build` (plus relevant tests).
## Post-Sync Verification
1. `git status` should be clean.
2. `./check-sync.sh` should show expected alignment.
3. Verify recent commits with:
```bash
git log --oneline -5
```
## check-sync.sh Usage
- Uses dynamic Target RC resolution (see above).
- Override target RC:
```bash
./check-sync.sh --rc <rc-branch>
```
- Optional preview limit:
```bash
./check-sync.sh --preview 10
```
- The script prints sync status for both origin and upstream and previews recent commits
when you are behind.
## Stop Conditions
Stop and ask for guidance if any of the following are true:
- The working tree is dirty and you are about to merge or push.
- `./check-sync.sh` reports **diverged** during PR Workflow, or a merge cannot be completed.
- The script cannot resolve a target RC and requests `--rc`.
- A build fails after sync or conflict resolution.
## AI Agent Guardrails
- Always run `./check-sync.sh` before merges or pushes.
- Always ask for explicit user approval before any push command.
- Do not ask for additional confirmation before a Sync Workflow fetch + merge when the
working tree is clean and the user has already selected the Sync Workflow.
- Choose Sync vs PR workflow based on intent (RC maintenance vs new feature work), not
on the script's workflow hint.
- Only use force push when the user explicitly requests a history rewrite.
- Ask for explicit approval before dependency installs, branch deletion, or destructive operations.
- When resolving merge conflicts, preserve both upstream changes and local intent where possible.
- Do not create or switch to new branches unless the user explicitly requests it.
## AI Agent Decision Guidance
Agents should provide concrete, task-specific suggestions instead of repeatedly asking
open-ended questions. Use the user's stated goal and the `./check-sync.sh` status to
propose a default path plus one or two alternatives, and only ask for confirmation when
an action requires explicit approval.
Default behavior:
- If the intent is RC maintenance, recommend the Sync Workflow and proceed with
safe preparation steps (status checks, previews). If the branch is behind upstream RC,
fetch and merge without additional confirmation when the working tree is clean, then
push to origin to keep the fork aligned. Push upstream only when there are local fixes
to publish.
- If the intent is new feature work, recommend the PR Workflow and proceed with safe
preparation steps (status checks, identifying scope). Ask for approval before merges,
pushes, or dependency installs.
- If `./check-sync.sh` reports **diverged** during Sync Workflow, merge
`upstream/<TARGET_RC>` into the current branch and preserve local commits.
- If `./check-sync.sh` reports **diverged** during PR Workflow, stop and ask for guidance
with a short explanation of the divergence and the minimal options to resolve it.
If the user's intent is RC maintenance, prefer the Sync Workflow regardless of the
script hint. When the intent is new feature work, use the PR Workflow and avoid upstream
RC pushes.
Suggestion format (keep it short):
- **Recommended**: one sentence with the default path and why it fits the task.
- **Alternatives**: one or two options with the tradeoff or prerequisite.
- **Approval points**: mention any upcoming actions that need explicit approval (exclude sync
workflow pushes and merges).
## Failure Modes and How to Avoid Them
Sync Workflow:
- Wrong RC target: verify the auto-detected RC in `./check-sync.sh` output before merging.
- Diverged from upstream RC: stop and resolve manually before any merge or push.
- Dirty working tree: commit or stash before syncing to avoid accidental merges.
- Missing remotes: ensure both `origin` and `upstream` are configured before syncing.
- Build breaks after sync: run `npm run build:packages` and `npm run build` before pushing.
PR Workflow:
- Branch not synced to current RC: re-run `./check-sync.sh` and merge RC before shipping.
- Pushing the wrong branch: confirm `git branch --show-current` before pushing.
- Unreviewed changes: always commit and push to origin before opening or updating a PR.
- Skipped tests/builds: run the build commands before declaring the PR ready.
## Notes
- Avoid merging with uncommitted changes; commit or stash first.
- Prefer merge over rebase for PR branches; rebases rewrite history and often require a force push,
which should only be done with an explicit user request.
- Use clear, conventional commit messages and split unrelated changes into separate commits.

View File

@@ -28,6 +28,7 @@ COPY libs/platform/package*.json ./libs/platform/
COPY libs/model-resolver/package*.json ./libs/model-resolver/
COPY libs/dependency-resolver/package*.json ./libs/dependency-resolver/
COPY libs/git-utils/package*.json ./libs/git-utils/
COPY libs/spec-parser/package*.json ./libs/spec-parser/
# Copy scripts (needed by npm workspace)
COPY scripts ./scripts
@@ -59,9 +60,22 @@ FROM node:22-slim AS server
ARG GIT_COMMIT_SHA=unknown
LABEL automaker.git.commit.sha="${GIT_COMMIT_SHA}"
# Build arguments for user ID matching (allows matching host user for mounted volumes)
# Override at build time: docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) ...
ARG UID=1001
ARG GID=1001
# Install git, curl, bash (for terminal), gosu (for user switching), and GitHub CLI (pinned version, multi-arch)
# Also install Playwright/Chromium system dependencies (aligns with playwright install-deps on Debian/Ubuntu)
RUN apt-get update && apt-get install -y --no-install-recommends \
git curl bash gosu ca-certificates openssh-client \
# Playwright/Chromium dependencies
libglib2.0-0 libnss3 libnspr4 libdbus-1-3 libatk1.0-0 libatk-bridge2.0-0 \
libcups2 libdrm2 libxkbcommon0 libatspi2.0-0 libxcomposite1 libxdamage1 \
libxfixes3 libxrandr2 libgbm1 libasound2 libpango-1.0-0 libcairo2 \
libx11-6 libx11-xcb1 libxcb1 libxext6 libxrender1 libxss1 libxtst6 \
libxshmfence1 libgtk-3-0 libexpat1 libfontconfig1 fonts-liberation \
xdg-utils libpangocairo-1.0-0 libpangoft2-1.0-0 libu2f-udev libvulkan1 \
&& GH_VERSION="2.63.2" \
&& ARCH=$(uname -m) \
&& case "$ARCH" in \
@@ -79,8 +93,10 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
RUN npm install -g @anthropic-ai/claude-code
# Create non-root user with home directory BEFORE installing Cursor CLI
RUN groupadd -g 1001 automaker && \
useradd -u 1001 -g automaker -m -d /home/automaker -s /bin/bash automaker && \
# Uses UID/GID build args to match host user for mounted volume permissions
# Use -o flag to allow non-unique IDs (GID 1000 may already exist as 'node' group)
RUN groupadd -o -g ${GID} automaker && \
useradd -o -u ${UID} -g automaker -m -d /home/automaker -s /bin/bash automaker && \
mkdir -p /home/automaker/.local/bin && \
mkdir -p /home/automaker/.cursor && \
chown -R automaker:automaker /home/automaker && \
@@ -95,6 +111,12 @@ RUN curl https://cursor.com/install -fsS | bash && \
ls -la /home/automaker/.local/bin/ && \
echo "=== PATH is: $PATH ===" && \
(which cursor-agent && cursor-agent --version) || echo "cursor-agent installed (may need auth setup)"
# Install OpenCode CLI (for multi-provider AI model access)
RUN curl -fsSL https://opencode.ai/install | bash && \
echo "=== Checking OpenCode CLI installation ===" && \
ls -la /home/automaker/.local/bin/ && \
(which opencode && opencode --version) || echo "opencode installed (may need auth setup)"
USER root
# Add PATH to profile so it's available in all interactive shells (for login shells)

View File

@@ -8,9 +8,17 @@
FROM node:22-slim
# Install build dependencies for native modules (node-pty) and runtime tools
# Also install Playwright/Chromium system dependencies (aligns with playwright install-deps on Debian/Ubuntu)
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ \
git curl bash gosu ca-certificates openssh-client \
# Playwright/Chromium dependencies
libglib2.0-0 libnss3 libnspr4 libdbus-1-3 libatk1.0-0 libatk-bridge2.0-0 \
libcups2 libdrm2 libxkbcommon0 libatspi2.0-0 libxcomposite1 libxdamage1 \
libxfixes3 libxrandr2 libgbm1 libasound2 libpango-1.0-0 libcairo2 \
libx11-6 libx11-xcb1 libxcb1 libxext6 libxrender1 libxss1 libxtst6 \
libxshmfence1 libgtk-3-0 libexpat1 libfontconfig1 fonts-liberation \
xdg-utils libpangocairo-1.0-0 libpangoft2-1.0-0 libu2f-udev libvulkan1 \
&& GH_VERSION="2.63.2" \
&& ARCH=$(uname -m) \
&& case "$ARCH" in \
@@ -27,9 +35,15 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
# Install Claude CLI globally
RUN npm install -g @anthropic-ai/claude-code
# Create non-root user
RUN groupadd -g 1001 automaker && \
useradd -u 1001 -g automaker -m -d /home/automaker -s /bin/bash automaker && \
# Build arguments for user ID matching (allows matching host user for mounted volumes)
# Override at build time: docker-compose build --build-arg UID=$(id -u) --build-arg GID=$(id -g)
ARG UID=1001
ARG GID=1001
# Create non-root user with configurable UID/GID
# Use -o flag to allow non-unique IDs (GID 1000 may already exist as 'node' group)
RUN groupadd -o -g ${GID} automaker && \
useradd -o -u ${UID} -g automaker -m -d /home/automaker -s /bin/bash automaker && \
mkdir -p /home/automaker/.local/bin && \
mkdir -p /home/automaker/.cursor && \
chown -R automaker:automaker /home/automaker && \

160
README.md
View File

@@ -28,6 +28,7 @@
- [Quick Start](#quick-start)
- [How to Run](#how-to-run)
- [Development Mode](#development-mode)
- [Interactive TUI Launcher](#interactive-tui-launcher-recommended-for-new-users)
- [Building for Production](#building-for-production)
- [Testing](#testing)
- [Linting](#linting)
@@ -101,11 +102,9 @@ In the Discord, you can:
### Prerequisites
- **Node.js 18+** (tested with Node.js 22)
- **Node.js 22+** (required: >=22.0.0 <23.0.0)
- **npm** (comes with Node.js)
- **Authentication** (choose one):
- **[Claude Code CLI](https://code.claude.com/docs/en/overview)** (recommended) - Install and authenticate, credentials used automatically
- **Anthropic API Key** - Direct API key for Claude Agent SDK ([get one here](https://console.anthropic.com/))
- **[Claude Code CLI](https://code.claude.com/docs/en/overview)** - Install and authenticate with your Anthropic subscription. Automaker integrates with your authenticated Claude Code CLI to access Claude models.
### Quick Start
@@ -117,30 +116,14 @@ cd automaker
# 2. Install dependencies
npm install
# 3. Build shared packages (can be skipped - npm run dev does it automatically)
npm run build:packages
# 4. Start Automaker
# 3. Start Automaker
npm run dev
# Choose between:
# 1. Web Application (browser at localhost:3007)
# 2. Desktop Application (Electron - recommended)
```
**Authentication Setup:** On first run, Automaker will automatically show a setup wizard where you can configure authentication. You can choose to:
- Use **Claude Code CLI** (recommended) - Automaker will detect your CLI credentials automatically
- Enter an **API key** directly in the wizard
If you prefer to set up authentication before running (e.g., for headless deployments or CI/CD), you can set it manually:
```bash
# Option A: Environment variable
export ANTHROPIC_API_KEY="sk-ant-..."
# Option B: Create .env file in project root
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
```
**Authentication:** Automaker integrates with your authenticated Claude Code CLI. Make sure you have [installed and authenticated](https://code.claude.com/docs/en/quickstart) the Claude Code CLI before running Automaker. Your CLI credentials will be detected automatically.
**For Development:** `npm run dev` starts the development server with Vite live reload and hot module replacement for fast refresh and instant updates as you make changes.
@@ -179,6 +162,40 @@ npm run dev:electron:wsl:gpu
npm run dev:web
```
### Interactive TUI Launcher (Recommended for New Users)
For a user-friendly interactive menu, use the built-in TUI launcher script:
```bash
# Show interactive menu with all launch options
./start-automaker.sh
# Or launch directly without menu
./start-automaker.sh web # Web browser
./start-automaker.sh electron # Desktop app
./start-automaker.sh electron-debug # Desktop + DevTools
# Additional options
./start-automaker.sh --help # Show all available options
./start-automaker.sh --version # Show version information
./start-automaker.sh --check-deps # Verify project dependencies
./start-automaker.sh --no-colors # Disable colored output
./start-automaker.sh --no-history # Don't remember last choice
```
**Features:**
- 🎨 Beautiful terminal UI with gradient colors and ASCII art
- Interactive menu (press 1-3 to select, Q to exit)
- 💾 Remembers your last choice
- Pre-flight checks (validates Node.js, npm, dependencies)
- 📏 Responsive layout (adapts to terminal size)
- 30-second timeout for hands-free selection
- 🌐 Cross-shell compatible (bash/zsh)
**History File:**
Your last selected mode is saved in `~/.automaker_launcher_history` for quick re-runs.
### Building for Production
#### Web Application
@@ -197,11 +214,30 @@ npm run build:electron
# Platform-specific builds
npm run build:electron:mac # macOS (DMG + ZIP, x64 + arm64)
npm run build:electron:win # Windows (NSIS installer, x64)
npm run build:electron:linux # Linux (AppImage + DEB, x64)
npm run build:electron:linux # Linux (AppImage + DEB + RPM, x64)
# Output directory: apps/ui/release/
```
**Linux Distribution Packages:**
- **AppImage**: Universal format, works on any Linux distribution
- **DEB**: Ubuntu, Debian, Linux Mint, Pop!\_OS
- **RPM**: Fedora, RHEL, Rocky Linux, AlmaLinux, openSUSE
**Installing on Fedora/RHEL:**
```bash
# Download the RPM package
wget https://github.com/AutoMaker-Org/automaker/releases/latest/download/Automaker-<version>-x86_64.rpm
# Install with dnf (Fedora)
sudo dnf install ./Automaker-<version>-x86_64.rpm
# Or with yum (RHEL/CentOS)
sudo yum localinstall ./Automaker-<version>-x86_64.rpm
```
#### Docker Deployment
Docker provides the most secure way to run Automaker by isolating it from your host filesystem.
@@ -220,16 +256,9 @@ docker-compose logs -f
docker-compose down
```
##### Configuration
##### Authentication
Create a `.env` file in the project root if using API key authentication:
```bash
# Optional: Anthropic API key (not needed if using Claude CLI authentication)
ANTHROPIC_API_KEY=sk-ant-...
```
**Note:** Most users authenticate via Claude CLI instead of API keys. See [Claude CLI Authentication](#claude-cli-authentication-optional) below.
Automaker integrates with your authenticated Claude Code CLI. To use CLI authentication in Docker, mount your Claude CLI config directory (see [Claude CLI Authentication](#claude-cli-authentication) below).
##### Working with Projects (Host Directory Access)
@@ -243,9 +272,9 @@ services:
- /path/to/your/project:/projects/your-project
```
##### Claude CLI Authentication (Optional)
##### Claude CLI Authentication
To use Claude Code CLI authentication instead of an API key, mount your Claude CLI config directory:
Mount your Claude CLI config directory to use your authenticated CLI credentials:
```yaml
services:
@@ -343,10 +372,6 @@ npm run lint
### Environment Configuration
#### Authentication (if not using Claude Code CLI)
- `ANTHROPIC_API_KEY` - Your Anthropic API key for Claude Agent SDK (not needed if using Claude Code CLI)
#### Optional - Server
- `PORT` - Server port (default: 3008)
@@ -357,49 +382,23 @@ npm run lint
- `AUTOMAKER_API_KEY` - Optional API authentication for the server
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
- `CORS_ORIGIN` - CORS policy (default: \*)
- `CORS_ORIGIN` - CORS allowed origins (comma-separated list; defaults to localhost only)
#### Optional - Development
- `VITE_SKIP_ELECTRON` - Skip Electron in dev mode
- `OPEN_DEVTOOLS` - Auto-open DevTools in Electron
- `AUTOMAKER_SKIP_SANDBOX_WARNING` - Skip sandbox warning dialog (useful for dev/CI)
- `AUTOMAKER_AUTO_LOGIN=true` - Skip login prompt in development (ignored when NODE_ENV=production)
### Authentication Setup
#### Option 1: Claude Code CLI (Recommended)
Automaker integrates with your authenticated Claude Code CLI and uses your Anthropic subscription.
Install and authenticate the Claude Code CLI following the [official quickstart guide](https://code.claude.com/docs/en/quickstart).
Once authenticated, Automaker will automatically detect and use your CLI credentials. No additional configuration needed!
#### Option 2: Direct API Key
If you prefer not to use the CLI, you can provide an Anthropic API key directly using one of these methods:
##### 2a. Shell Configuration
Add to your `~/.bashrc` or `~/.zshrc`:
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```
Then restart your terminal or run `source ~/.bashrc` (or `source ~/.zshrc`).
##### 2b. .env File
Create a `.env` file in the project root (gitignored):
```bash
ANTHROPIC_API_KEY=sk-ant-...
PORT=3008
DATA_DIR=./data
```
##### 2c. In-App Storage
The application can store your API key securely in the settings UI. The key is persisted in the `DATA_DIR` directory.
## Features
### Core Workflow
@@ -508,20 +507,24 @@ Automaker provides several specialized views accessible via the sidebar or keybo
| **Agent** | `A` | Interactive chat sessions with AI agents for exploratory work and questions |
| **Spec** | `D` | Project specification editor with AI-powered generation and feature suggestions |
| **Context** | `C` | Manage context files (markdown, images) that AI agents automatically reference |
| **Profiles** | `M` | Create and manage AI agent profiles with custom prompts and configurations |
| **Settings** | `S` | Configure themes, shortcuts, defaults, authentication, and more |
| **Terminal** | `T` | Integrated terminal with tabs, splits, and persistent sessions |
| **GitHub Issues** | - | Import and validate GitHub issues, convert to tasks |
| **Graph** | `H` | Visualize feature dependencies with interactive graph visualization |
| **Ideation** | `I` | Brainstorm and generate ideas with AI assistance |
| **Memory** | `Y` | View and manage agent memory and conversation history |
| **GitHub Issues** | `G` | Import and validate GitHub issues, convert to tasks |
| **GitHub PRs** | `R` | View and manage GitHub pull requests |
| **Running Agents** | - | View all active agents across projects with status and progress |
### Keyboard Navigation
All shortcuts are customizable in Settings. Default shortcuts:
- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `M` (Profiles), `T` (Terminal)
- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `T` (Terminal), `H` (Graph), `I` (Ideation), `Y` (Memory), `G` (GitHub Issues), `R` (GitHub PRs)
- **UI:** `` ` `` (Toggle sidebar)
- **Actions:** `N` (New item in current view), `G` (Start next features), `O` (Open project), `P` (Project picker)
- **Actions:** `N` (New item in current view), `O` (Open project), `P` (Project picker)
- **Projects:** `Q`/`E` (Cycle previous/next project)
- **Terminal:** `Alt+D` (Split right), `Alt+S` (Split down), `Alt+W` (Close), `Alt+T` (New tab)
## Architecture
@@ -586,10 +589,16 @@ Stored in `{projectPath}/.automaker/`:
│ ├── agent-output.md # AI agent output log
│ └── images/ # Attached images
├── context/ # Context files for AI agents
├── worktrees/ # Git worktree metadata
├── validations/ # GitHub issue validation results
├── ideation/ # Brainstorming and analysis data
│ └── analysis.json # Project structure analysis
├── board/ # Board-related data
├── images/ # Project-level images
├── settings.json # Project-specific settings
├── spec.md # Project specification
├── analysis.json # Project structure analysis
└── feature-suggestions.json # AI-generated suggestions
├── app_spec.txt # Project specification (XML format)
├── active-branches.json # Active git branches tracking
└── execution-state.json # Auto-mode execution state
```
#### Global Data
@@ -627,7 +636,6 @@ data/
- [Contributing Guide](./CONTRIBUTING.md) - How to contribute to Automaker
- [Project Documentation](./docs/) - Architecture guides, patterns, and developer docs
- [Docker Isolation Guide](./docs/docker-isolation.md) - Security-focused Docker deployment
- [Shared Packages Guide](./docs/llm-shared-packages.md) - Using monorepo packages
### Community

300
SECURITY_TODO.md Normal file
View File

@@ -0,0 +1,300 @@
# Security Audit Findings - v0.13.0rc Branch
**Date:** $(date)
**Audit Type:** Git diff security review against v0.13.0rc branch
**Status:** ⚠️ Security vulnerabilities found - requires fixes before release
## Executive Summary
No intentionally malicious code was detected in the changes. However, several **critical security vulnerabilities** were identified that could allow command injection attacks. These must be fixed before release.
---
## 🔴 Critical Security Issues
### 1. Command Injection in Merge Handler
**File:** `apps/server/src/routes/worktree/routes/merge.ts`
**Lines:** 43, 54, 65-66, 93
**Severity:** CRITICAL
**Issue:**
User-controlled inputs (`branchName`, `mergeTo`, `options?.message`) are directly interpolated into shell commands without validation, allowing command injection attacks.
**Vulnerable Code:**
```typescript
// Line 43 - branchName not validated
await execAsync(`git rev-parse --verify ${branchName}`, { cwd: projectPath });
// Line 54 - mergeTo not validated
await execAsync(`git rev-parse --verify ${mergeTo}`, { cwd: projectPath });
// Lines 65-66 - branchName and message not validated
const mergeCmd = options?.squash
? `git merge --squash ${branchName}`
: `git merge ${branchName} -m "${options?.message || `Merge ${branchName} into ${mergeTo}`}"`;
// Line 93 - message not sanitized
await execAsync(`git commit -m "${options?.message || `Merge ${branchName} (squash)`}"`, {
cwd: projectPath,
});
```
**Attack Vector:**
An attacker could inject shell commands via branch names or commit messages:
- Branch name: `main; rm -rf /`
- Commit message: `"; malicious_command; "`
**Fix Required:**
1. Validate `branchName` and `mergeTo` using `isValidBranchName()` before use
2. Sanitize commit messages or use `execGitCommand` with proper escaping
3. Replace `execAsync` template literals with `execGitCommand` array-based calls
**Note:** `isValidBranchName` is imported but only used AFTER deletion (line 119), not before execAsync calls.
---
### 2. Command Injection in Push Handler
**File:** `apps/server/src/routes/worktree/routes/push.ts`
**Lines:** 44, 49
**Severity:** CRITICAL
**Issue:**
User-controlled `remote` parameter and `branchName` are directly interpolated into shell commands without validation.
**Vulnerable Code:**
```typescript
// Line 38 - remote defaults to 'origin' but not validated
const targetRemote = remote || 'origin';
// Lines 44, 49 - targetRemote and branchName not validated
await execAsync(`git push -u ${targetRemote} ${branchName} ${forceFlag}`, {
cwd: worktreePath,
});
await execAsync(`git push --set-upstream ${targetRemote} ${branchName} ${forceFlag}`, {
cwd: worktreePath,
});
```
**Attack Vector:**
An attacker could inject commands via the remote name:
- Remote: `origin; malicious_command; #`
**Fix Required:**
1. Validate `targetRemote` parameter (alphanumeric + `-`, `_` only)
2. Validate `branchName` before use (even though it comes from git output)
3. Use `execGitCommand` with array arguments instead of template literals
---
### 3. Unsafe Environment Variable Export in Shell Script
**File:** `start-automaker.sh`
**Lines:** 5068, 5085
**Severity:** CRITICAL
**Issue:**
Unsafe parsing and export of `.env` file contents using `xargs` without proper handling of special characters.
**Vulnerable Code:**
```bash
export $(grep -v '^#' .env | xargs)
```
**Attack Vector:**
If `.env` file contains malicious content with spaces, special characters, or code, it could be executed:
- `.env` entry: `VAR="value; malicious_command"`
- Could lead to code execution during startup
**Fix Required:**
Replace with safer parsing method:
```bash
# Safer approach
set -a
source <(grep -v '^#' .env | sed 's/^/export /')
set +a
# Or even safer - validate each line
while IFS= read -r line; do
[[ "$line" =~ ^[[:space:]]*# ]] && continue
[[ -z "$line" ]] && continue
if [[ "$line" =~ ^([A-Za-z_][A-Za-z0-9_]*)=(.*)$ ]]; then
export "${BASH_REMATCH[1]}"="${BASH_REMATCH[2]}"
fi
done < .env
```
---
## 🟡 Moderate Security Concerns
### 4. Inconsistent Use of Secure Command Execution
**Issue:**
The codebase has `execGitCommand()` function available (which uses array arguments and is safer), but it's not consistently used. Some places still use `execAsync` with template literals.
**Files Affected:**
- `apps/server/src/routes/worktree/routes/merge.ts`
- `apps/server/src/routes/worktree/routes/push.ts`
**Recommendation:**
- Audit all `execAsync` calls with template literals
- Replace with `execGitCommand` where possible
- Document when `execAsync` is acceptable (only with fully validated inputs)
---
### 5. Missing Input Validation
**Issues:**
1. `targetRemote` in `push.ts` defaults to 'origin' but isn't validated
2. Commit messages in `merge.ts` aren't sanitized before use in shell commands
3. `worktreePath` validation relies on middleware but should be double-checked
**Recommendation:**
- Add validation functions for remote names
- Sanitize commit messages (remove shell metacharacters)
- Add defensive validation even when middleware exists
---
## ✅ Positive Security Findings
1. **No Hardcoded Credentials:** No API keys, passwords, or tokens found in the diff
2. **No Data Exfiltration:** No suspicious network requests or data transmission patterns
3. **No Backdoors:** No hidden functionality or unauthorized access patterns detected
4. **Safe Command Execution:** `execGitCommand` function properly uses array arguments in some places
5. **Environment Variable Handling:** `init-script-service.ts` properly sanitizes environment variables (lines 194-220)
---
## 📋 Action Items
### Immediate (Before Release)
- [ ] **Fix command injection in `merge.ts`**
- [ ] Validate `branchName` with `isValidBranchName()` before line 43
- [ ] Validate `mergeTo` with `isValidBranchName()` before line 54
- [ ] Sanitize commit messages or use `execGitCommand` for merge commands
- [ ] Replace `execAsync` template literals with `execGitCommand` array calls
- [ ] **Fix command injection in `push.ts`**
- [ ] Add validation function for remote names
- [ ] Validate `targetRemote` before use
- [ ] Validate `branchName` before use (defensive programming)
- [ ] Replace `execAsync` template literals with `execGitCommand`
- [ ] **Fix shell script security issue**
- [ ] Replace unsafe `export $(grep ... | xargs)` with safer parsing
- [ ] Add validation for `.env` file contents
- [ ] Test with edge cases (spaces, special chars, quotes)
### Short-term (Next Sprint)
- [ ] **Audit all `execAsync` calls**
- [ ] Create inventory of all `execAsync` calls with template literals
- [ ] Replace with `execGitCommand` where possible
- [ ] Document exceptions and why they're safe
- [ ] **Add input validation utilities**
- [ ] Create `isValidRemoteName()` function
- [ ] Create `sanitizeCommitMessage()` function
- [ ] Add validation for all user-controlled inputs
- [ ] **Security testing**
- [ ] Add unit tests for command injection prevention
- [ ] Add integration tests with malicious inputs
- [ ] Test shell script with malicious `.env` files
### Long-term (Security Hardening)
- [ ] **Code review process**
- [ ] Add security checklist for PR reviews
- [ ] Require security review for shell command execution changes
- [ ] Add automated security scanning
- [ ] **Documentation**
- [ ] Document secure coding practices for shell commands
- [ ] Create security guidelines for contributors
- [ ] Add security section to CONTRIBUTING.md
---
## 🔍 Testing Recommendations
### Command Injection Tests
```typescript
// Test cases for merge.ts
describe('merge handler security', () => {
it('should reject branch names with shell metacharacters', () => {
// Test: branchName = "main; rm -rf /"
// Expected: Validation error, command not executed
});
it('should sanitize commit messages', () => {
// Test: message = '"; malicious_command; "'
// Expected: Sanitized or rejected
});
});
// Test cases for push.ts
describe('push handler security', () => {
it('should reject remote names with shell metacharacters', () => {
// Test: remote = "origin; malicious_command; #"
// Expected: Validation error, command not executed
});
});
```
### Shell Script Tests
```bash
# Test with malicious .env content
echo 'VAR="value; echo PWNED"' > test.env
# Expected: Should not execute the command
# Test with spaces in values
echo 'VAR="value with spaces"' > test.env
# Expected: Should handle correctly
# Test with special characters
echo 'VAR="value\$with\$dollars"' > test.env
# Expected: Should handle correctly
```
---
## 📚 References
- [OWASP Command Injection](https://owasp.org/www-community/attacks/Command_Injection)
- [Node.js Child Process Security](https://nodejs.org/api/child_process.html#child_process_security_concerns)
- [Shell Script Security Best Practices](https://mywiki.wooledge.org/BashGuide/Practices)
---
## Notes
- All findings are based on code diff analysis
- No runtime testing was performed
- Assumes attacker has access to API endpoints (authenticated or unauthenticated)
- Fixes should be tested thoroughly before deployment
---
**Last Updated:** $(date)
**Next Review:** After fixes are implemented

View File

@@ -2,6 +2,14 @@
- Setting the default model does not seem like it works.
# Performance (completed)
- [x] Graph performance mode for large graphs (compact nodes/edges + visible-only rendering)
- [x] Render containment on heavy scroll regions (kanban columns, chat history)
- [x] Reduce blur/shadow effects when lists get large
- [x] React Query tuning for heavy datasets (less refetch on focus/reconnect)
- [x] DnD/list rendering optimizations (virtualized kanban + memoized card sections)
# UX
- Consolidate all models to a single place in the settings instead of having AI profiles and all this other stuff

View File

@@ -44,6 +44,11 @@ CORS_ORIGIN=http://localhost:3007
# OPTIONAL - Server
# ============================================
# Host to bind the server to (default: 0.0.0.0)
# Use 0.0.0.0 to listen on all interfaces (recommended for Docker/remote access)
# Use 127.0.0.1 or localhost to restrict to local connections only
HOST=0.0.0.0
# Port to run the server on
PORT=3008
@@ -63,6 +68,14 @@ TERMINAL_PASSWORD=
ENABLE_REQUEST_LOGGING=false
# ============================================
# OPTIONAL - UI Behavior
# ============================================
# Skip the sandbox warning dialog on startup (default: false)
# Set to "true" to disable the warning entirely (useful for dev/CI environments)
AUTOMAKER_SKIP_SANDBOX_WARNING=false
# ============================================
# OPTIONAL - Debugging
# ============================================

View File

@@ -1,6 +1,6 @@
{
"name": "@automaker/server",
"version": "0.9.0",
"version": "0.13.0",
"description": "Backend server for Automaker - provides API for both web and Electron modes",
"author": "AutoMaker Team",
"license": "SEE LICENSE IN LICENSE",
@@ -32,7 +32,7 @@
"@automaker/prompts": "1.0.0",
"@automaker/types": "1.0.0",
"@automaker/utils": "1.0.0",
"@modelcontextprotocol/sdk": "1.25.1",
"@modelcontextprotocol/sdk": "1.25.2",
"@openai/codex-sdk": "^0.77.0",
"cookie-parser": "1.4.7",
"cors": "2.8.5",
@@ -40,7 +40,8 @@
"express": "5.2.1",
"morgan": "1.10.1",
"node-pty": "1.1.0-beta41",
"ws": "8.18.3"
"ws": "8.18.3",
"yaml": "2.7.0"
},
"devDependencies": {
"@types/cookie": "0.6.0",

View File

@@ -17,9 +17,19 @@ import dotenv from 'dotenv';
import { createEventEmitter, type EventEmitter } from './lib/events.js';
import { initAllowedPaths } from '@automaker/platform';
import { createLogger } from '@automaker/utils';
import { createLogger, setLogLevel, LogLevel } from '@automaker/utils';
const logger = createLogger('Server');
/**
* Map server log level string to LogLevel enum
*/
const LOG_LEVEL_MAP: Record<string, LogLevel> = {
error: LogLevel.ERROR,
warn: LogLevel.WARN,
info: LogLevel.INFO,
debug: LogLevel.DEBUG,
};
import { authMiddleware, validateWsConnectionToken, checkRawAuthentication } from './lib/auth.js';
import { requireJsonContentType } from './middleware/require-json-content-type.js';
import { createAuthRoutes } from './routes/auth/index.js';
@@ -33,7 +43,6 @@ import { createEnhancePromptRoutes } from './routes/enhance-prompt/index.js';
import { createWorktreeRoutes } from './routes/worktree/index.js';
import { createGitRoutes } from './routes/git/index.js';
import { createSetupRoutes } from './routes/setup/index.js';
import { createSuggestionsRoutes } from './routes/suggestions/index.js';
import { createModelsRoutes } from './routes/models/index.js';
import { createRunningAgentsRoutes } from './routes/running-agents/index.js';
import { createWorkspaceRoutes } from './routes/workspace/index.js';
@@ -55,6 +64,8 @@ import { createClaudeRoutes } from './routes/claude/index.js';
import { ClaudeUsageService } from './services/claude-usage-service.js';
import { createCodexRoutes } from './routes/codex/index.js';
import { CodexUsageService } from './services/codex-usage-service.js';
import { CodexAppServerService } from './services/codex-app-server-service.js';
import { CodexModelCacheService } from './services/codex-model-cache-service.js';
import { createGitHubRoutes } from './routes/github/index.js';
import { createContextRoutes } from './routes/context/index.js';
import { createBacklogPlanRoutes } from './routes/backlog-plan/index.js';
@@ -65,32 +76,76 @@ import { createPipelineRoutes } from './routes/pipeline/index.js';
import { pipelineService } from './services/pipeline-service.js';
import { createIdeationRoutes } from './routes/ideation/index.js';
import { IdeationService } from './services/ideation-service.js';
import { getDevServerService } from './services/dev-server-service.js';
import { eventHookService } from './services/event-hook-service.js';
import { createNotificationsRoutes } from './routes/notifications/index.js';
import { getNotificationService } from './services/notification-service.js';
import { createEventHistoryRoutes } from './routes/event-history/index.js';
import { getEventHistoryService } from './services/event-history-service.js';
import { getTestRunnerService } from './services/test-runner-service.js';
import { createProviderUsageRoutes } from './routes/provider-usage/index.js';
import { ProviderUsageTracker } from './services/provider-usage-tracker.js';
// Load environment variables
dotenv.config();
const PORT = parseInt(process.env.PORT || '3008', 10);
const HOST = process.env.HOST || '0.0.0.0';
const HOSTNAME = process.env.HOSTNAME || 'localhost';
const DATA_DIR = process.env.DATA_DIR || './data';
const ENABLE_REQUEST_LOGGING = process.env.ENABLE_REQUEST_LOGGING !== 'false'; // Default to true
logger.info('[SERVER_STARTUP] process.env.DATA_DIR:', process.env.DATA_DIR);
logger.info('[SERVER_STARTUP] Resolved DATA_DIR:', DATA_DIR);
logger.info('[SERVER_STARTUP] process.cwd():', process.cwd());
const ENABLE_REQUEST_LOGGING_DEFAULT = process.env.ENABLE_REQUEST_LOGGING !== 'false'; // Default to true
// Runtime-configurable request logging flag (can be changed via settings)
let requestLoggingEnabled = ENABLE_REQUEST_LOGGING_DEFAULT;
/**
* Enable or disable HTTP request logging at runtime
*/
export function setRequestLoggingEnabled(enabled: boolean): void {
requestLoggingEnabled = enabled;
}
/**
* Get current request logging state
*/
export function isRequestLoggingEnabled(): boolean {
return requestLoggingEnabled;
}
// Width for log box content (excluding borders)
const BOX_CONTENT_WIDTH = 67;
// Check for required environment variables
const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
if (!hasAnthropicKey) {
const wHeader = '⚠️ WARNING: No Claude authentication configured'.padEnd(BOX_CONTENT_WIDTH);
const w1 = 'The Claude Agent SDK requires authentication to function.'.padEnd(BOX_CONTENT_WIDTH);
const w2 = 'Set your Anthropic API key:'.padEnd(BOX_CONTENT_WIDTH);
const w3 = ' export ANTHROPIC_API_KEY="sk-ant-..."'.padEnd(BOX_CONTENT_WIDTH);
const w4 = 'Or use the setup wizard in Settings to configure authentication.'.padEnd(
BOX_CONTENT_WIDTH
);
logger.warn(`
╔═══════════════════════════════════════════════════════════════════════
⚠️ WARNING: No Claude authentication configured
║ ║
The Claude Agent SDK requires authentication to function.
Set your Anthropic API key:
export ANTHROPIC_API_KEY="sk-ant-..."
Or use the setup wizard in Settings to configure authentication.
╚═══════════════════════════════════════════════════════════════════════╝
╔═════════════════════════════════════════════════════════════════════╗
${wHeader}
╠═════════════════════════════════════════════════════════════════════╣
${w1}
${w2}
${w3}
${w4}
║ ║
╚═════════════════════════════════════════════════════════════════════╝
`);
} else {
logger.info('✓ ANTHROPIC_API_KEY detected (API key auth)');
logger.info('✓ ANTHROPIC_API_KEY detected');
}
// Initialize security
@@ -100,22 +155,21 @@ initAllowedPaths();
const app = express();
// Middleware
// Custom colored logger showing only endpoint and status code (configurable via ENABLE_REQUEST_LOGGING env var)
if (ENABLE_REQUEST_LOGGING) {
morgan.token('status-colored', (_req, res) => {
const status = res.statusCode;
if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors
if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors
if (status >= 300) return `\x1b[36m${status}\x1b[0m`; // Cyan for redirects
return `\x1b[32m${status}\x1b[0m`; // Green for success
});
// Custom colored logger showing only endpoint and status code (dynamically configurable)
morgan.token('status-colored', (_req, res) => {
const status = res.statusCode;
if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors
if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors
if (status >= 300) return `\x1b[36m${status}\x1b[0m`; // Cyan for redirects
return `\x1b[32m${status}\x1b[0m`; // Green for success
});
app.use(
morgan(':method :url :status-colored', {
skip: (req) => req.url === '/api/health', // Skip health check logs
})
);
}
app.use(
morgan(':method :url :status-colored', {
// Skip when request logging is disabled or for health check endpoints
skip: (req) => !requestLoggingEnabled || req.url === '/api/health',
})
);
// CORS configuration
// When using credentials (cookies), origin cannot be '*'
// We dynamically allow the requesting origin for local development
@@ -139,14 +193,25 @@ app.use(
return;
}
// For local development, allow localhost origins
if (
origin.startsWith('http://localhost:') ||
origin.startsWith('http://127.0.0.1:') ||
origin.startsWith('http://[::1]:')
) {
callback(null, origin);
return;
// For local development, allow all localhost/loopback origins (any port)
try {
const url = new URL(origin);
const hostname = url.hostname;
if (
hostname === 'localhost' ||
hostname === '127.0.0.1' ||
hostname === '::1' ||
hostname === '0.0.0.0' ||
hostname.startsWith('192.168.') ||
hostname.startsWith('10.') ||
hostname.startsWith('172.')
) {
callback(null, origin);
return;
}
} catch (err) {
// Ignore URL parsing errors
}
// Reject other origins by default for security
@@ -168,14 +233,72 @@ const agentService = new AgentService(DATA_DIR, events, settingsService);
const featureLoader = new FeatureLoader();
const autoModeService = new AutoModeService(events, settingsService);
const claudeUsageService = new ClaudeUsageService();
const codexUsageService = new CodexUsageService();
const codexAppServerService = new CodexAppServerService();
const codexModelCacheService = new CodexModelCacheService(DATA_DIR, codexAppServerService);
const codexUsageService = new CodexUsageService(codexAppServerService);
const mcpTestService = new MCPTestService(settingsService);
const ideationService = new IdeationService(events, settingsService, featureLoader);
const providerUsageTracker = new ProviderUsageTracker(codexUsageService);
// Initialize DevServerService with event emitter for real-time log streaming
const devServerService = getDevServerService();
devServerService.setEventEmitter(events);
// Initialize Notification Service with event emitter for real-time updates
const notificationService = getNotificationService();
notificationService.setEventEmitter(events);
// Initialize Event History Service
const eventHistoryService = getEventHistoryService();
// Initialize Test Runner Service with event emitter for real-time test output streaming
const testRunnerService = getTestRunnerService();
testRunnerService.setEventEmitter(events);
// Initialize Event Hook Service for custom event triggers (with history storage)
eventHookService.initialize(events, settingsService, eventHistoryService, featureLoader);
// Initialize services
(async () => {
// Migrate settings from legacy Electron userData location if needed
// This handles users upgrading from versions that stored settings in ~/.config/Automaker (Linux),
// ~/Library/Application Support/Automaker (macOS), or %APPDATA%\Automaker (Windows)
// to the new shared ./data directory
try {
const migrationResult = await settingsService.migrateFromLegacyElectronPath();
if (migrationResult.migrated) {
logger.info(`Settings migrated from legacy location: ${migrationResult.legacyPath}`);
logger.info(`Migrated files: ${migrationResult.migratedFiles.join(', ')}`);
}
if (migrationResult.errors.length > 0) {
logger.warn('Migration errors:', migrationResult.errors);
}
} catch (err) {
logger.warn('Failed to check for legacy settings migration:', err);
}
// Apply logging settings from saved settings
try {
const settings = await settingsService.getGlobalSettings();
if (settings.serverLogLevel && LOG_LEVEL_MAP[settings.serverLogLevel] !== undefined) {
setLogLevel(LOG_LEVEL_MAP[settings.serverLogLevel]);
logger.info(`Server log level set to: ${settings.serverLogLevel}`);
}
// Apply request logging setting (default true if not set)
const enableRequestLog = settings.enableRequestLogging ?? true;
setRequestLoggingEnabled(enableRequestLog);
logger.info(`HTTP request logging: ${enableRequestLog ? 'enabled' : 'disabled'}`);
} catch (err) {
logger.warn('Failed to load logging settings, using defaults');
}
await agentService.initialize();
logger.info('Agent service initialized');
// Bootstrap Codex model cache in background (don't block server startup)
void codexModelCacheService.getModels().catch((err) => {
logger.error('Failed to bootstrap Codex model cache:', err);
});
})();
// Run stale validation cleanup every hour to prevent memory leaks from crashed validations
@@ -205,12 +328,11 @@ app.get('/api/health/detailed', createDetailedHandler());
app.use('/api/fs', createFsRoutes(events));
app.use('/api/agent', createAgentRoutes(agentService, events));
app.use('/api/sessions', createSessionsRoutes(agentService));
app.use('/api/features', createFeaturesRoutes(featureLoader));
app.use('/api/features', createFeaturesRoutes(featureLoader, settingsService, events));
app.use('/api/auto-mode', createAutoModeRoutes(autoModeService));
app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService));
app.use('/api/worktree', createWorktreeRoutes());
app.use('/api/worktree', createWorktreeRoutes(events, settingsService));
app.use('/api/git', createGitRoutes());
app.use('/api/suggestions', createSuggestionsRoutes(events, settingsService));
app.use('/api/models', createModelsRoutes());
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events, settingsService));
app.use('/api/running-agents', createRunningAgentsRoutes(autoModeService));
@@ -219,13 +341,16 @@ app.use('/api/templates', createTemplatesRoutes());
app.use('/api/terminal', createTerminalRoutes());
app.use('/api/settings', createSettingsRoutes(settingsService));
app.use('/api/claude', createClaudeRoutes(claudeUsageService));
app.use('/api/codex', createCodexRoutes(codexUsageService));
app.use('/api/codex', createCodexRoutes(codexUsageService, codexModelCacheService));
app.use('/api/github', createGitHubRoutes(events, settingsService));
app.use('/api/context', createContextRoutes(settingsService));
app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService));
app.use('/api/mcp', createMCPRoutes(mcpTestService));
app.use('/api/pipeline', createPipelineRoutes(pipelineService));
app.use('/api/ideation', createIdeationRoutes(events, ideationService, featureLoader));
app.use('/api/notifications', createNotificationsRoutes(notificationService));
app.use('/api/event-history', createEventHistoryRoutes(eventHistoryService, settingsService));
app.use('/api/provider-usage', createProviderUsageRoutes(providerUsageTracker));
// Create HTTP server
const server = createServer(app);
@@ -537,46 +662,81 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
});
// Start server with error handling for port conflicts
const startServer = (port: number) => {
server.listen(port, () => {
const startServer = (port: number, host: string) => {
server.listen(port, host, () => {
const terminalStatus = isTerminalEnabled()
? isTerminalPasswordRequired()
? 'enabled (password protected)'
: 'enabled'
: 'disabled';
const portStr = port.toString().padEnd(4);
// Build URLs for display
const listenAddr = `${host}:${port}`;
const httpUrl = `http://${HOSTNAME}:${port}`;
const wsEventsUrl = `ws://${HOSTNAME}:${port}/api/events`;
const wsTerminalUrl = `ws://${HOSTNAME}:${port}/api/terminal/ws`;
const healthUrl = `http://${HOSTNAME}:${port}/api/health`;
const sHeader = '🚀 Automaker Backend Server'.padEnd(BOX_CONTENT_WIDTH);
const s1 = `Listening: ${listenAddr}`.padEnd(BOX_CONTENT_WIDTH);
const s2 = `HTTP API: ${httpUrl}`.padEnd(BOX_CONTENT_WIDTH);
const s3 = `WebSocket: ${wsEventsUrl}`.padEnd(BOX_CONTENT_WIDTH);
const s4 = `Terminal WS: ${wsTerminalUrl}`.padEnd(BOX_CONTENT_WIDTH);
const s5 = `Health: ${healthUrl}`.padEnd(BOX_CONTENT_WIDTH);
const s6 = `Terminal: ${terminalStatus}`.padEnd(BOX_CONTENT_WIDTH);
logger.info(`
╔═══════════════════════════════════════════════════════╗
Automaker Backend Server
╠═══════════════════════════════════════════════════════╣
HTTP API: http://localhost:${portStr}
WebSocket: ws://localhost:${portStr}/api/events
Terminal: ws://localhost:${portStr}/api/terminal/ws
Health: http://localhost:${portStr}/api/health
Terminal: ${terminalStatus.padEnd(37)}
╚═══════════════════════════════════════════════════════╝
╔═════════════════════════════════════════════════════════════════════
${sHeader}
╠═════════════════════════════════════════════════════════════════════
${s1}
${s2}
${s3}
${s4}
${s5}
${s6}
║ ║
╚═════════════════════════════════════════════════════════════════════╝
`);
});
server.on('error', (error: NodeJS.ErrnoException) => {
if (error.code === 'EADDRINUSE') {
const portStr = port.toString();
const nextPortStr = (port + 1).toString();
const killCmd = `lsof -ti:${portStr} | xargs kill -9`;
const altCmd = `PORT=${nextPortStr} npm run dev:server`;
const eHeader = `❌ ERROR: Port ${portStr} is already in use`.padEnd(BOX_CONTENT_WIDTH);
const e1 = 'Another process is using this port.'.padEnd(BOX_CONTENT_WIDTH);
const e2 = 'To fix this, try one of:'.padEnd(BOX_CONTENT_WIDTH);
const e3 = '1. Kill the process using the port:'.padEnd(BOX_CONTENT_WIDTH);
const e4 = ` ${killCmd}`.padEnd(BOX_CONTENT_WIDTH);
const e5 = '2. Use a different port:'.padEnd(BOX_CONTENT_WIDTH);
const e6 = ` ${altCmd}`.padEnd(BOX_CONTENT_WIDTH);
const e7 = '3. Use the init.sh script which handles this:'.padEnd(BOX_CONTENT_WIDTH);
const e8 = ' ./init.sh'.padEnd(BOX_CONTENT_WIDTH);
logger.error(`
╔═══════════════════════════════════════════════════════╗
❌ ERROR: Port ${port} is already in use
╠═══════════════════════════════════════════════════════╣
Another process is using this port.
To fix this, try one of:
1. Kill the process using the port:
lsof -ti:${port} | xargs kill -9
2. Use a different port:
PORT=${port + 1} npm run dev:server
3. Use the init.sh script which handles this:
./init.sh
╚═══════════════════════════════════════════════════════╝
╔═════════════════════════════════════════════════════════════════════
${eHeader}
╠═════════════════════════════════════════════════════════════════════
${e1}
${e2}
${e3}
${e4}
${e5}
${e6}
${e7}
${e8}
║ ║
╚═════════════════════════════════════════════════════════════════════╝
`);
process.exit(1);
} else {
@@ -586,7 +746,27 @@ const startServer = (port: number) => {
});
};
startServer(PORT);
startServer(PORT, HOST);
// Global error handlers to prevent crashes from uncaught errors
process.on('unhandledRejection', (reason: unknown, _promise: Promise<unknown>) => {
logger.error('Unhandled Promise Rejection:', {
reason: reason instanceof Error ? reason.message : String(reason),
stack: reason instanceof Error ? reason.stack : undefined,
});
// Don't exit - log the error and continue running
// This prevents the server from crashing due to unhandled rejections
});
process.on('uncaughtException', (error: Error) => {
logger.error('Uncaught Exception:', {
message: error.message,
stack: error.stack,
});
// Exit on uncaught exceptions to prevent undefined behavior
// The process is in an unknown state after an uncaught exception
process.exit(1);
});
// Graceful shutdown
process.on('SIGTERM', () => {

View File

@@ -11,8 +11,12 @@ export { specOutputSchema } from '@automaker/types';
/**
* Escape special XML characters
* Handles undefined/null values by converting them to empty strings
*/
function escapeXml(str: string): string {
export function escapeXml(str: string | undefined | null): string {
if (str == null) {
return '';
}
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')

View File

@@ -23,6 +23,13 @@ const SESSION_COOKIE_NAME = 'automaker_session';
const SESSION_MAX_AGE_MS = 30 * 24 * 60 * 60 * 1000; // 30 days
const WS_TOKEN_MAX_AGE_MS = 5 * 60 * 1000; // 5 minutes for WebSocket connection tokens
/**
* Check if an environment variable is set to 'true'
*/
function isEnvTrue(envVar: string | undefined): boolean {
return envVar === 'true';
}
// Session store - persisted to file for survival across server restarts
const validSessions = new Map<string, { createdAt: number; expiresAt: number }>();
@@ -130,19 +137,47 @@ function ensureApiKey(): string {
// API key - always generated/loaded on startup for CSRF protection
const API_KEY = ensureApiKey();
// Width for log box content (excluding borders)
const BOX_CONTENT_WIDTH = 67;
// Print API key to console for web mode users (unless suppressed for production logging)
if (process.env.AUTOMAKER_HIDE_API_KEY !== 'true') {
if (!isEnvTrue(process.env.AUTOMAKER_HIDE_API_KEY)) {
const autoLoginEnabled = isEnvTrue(process.env.AUTOMAKER_AUTO_LOGIN);
const autoLoginStatus = autoLoginEnabled ? 'enabled (auto-login active)' : 'disabled';
// Build box lines with exact padding
const header = '🔐 API Key for Web Mode Authentication'.padEnd(BOX_CONTENT_WIDTH);
const line1 = "When accessing via browser, you'll be prompted to enter this key:".padEnd(
BOX_CONTENT_WIDTH
);
const line2 = API_KEY.padEnd(BOX_CONTENT_WIDTH);
const line3 = 'In Electron mode, authentication is handled automatically.'.padEnd(
BOX_CONTENT_WIDTH
);
const line4 = `Auto-login (AUTOMAKER_AUTO_LOGIN): ${autoLoginStatus}`.padEnd(BOX_CONTENT_WIDTH);
const tipHeader = '💡 Tips'.padEnd(BOX_CONTENT_WIDTH);
const line5 = 'Set AUTOMAKER_API_KEY env var to use a fixed key'.padEnd(BOX_CONTENT_WIDTH);
const line6 = 'Set AUTOMAKER_AUTO_LOGIN=true to skip the login prompt'.padEnd(BOX_CONTENT_WIDTH);
logger.info(`
╔═══════════════════════════════════════════════════════════════════════
🔐 API Key for Web Mode Authentication
╠═══════════════════════════════════════════════════════════════════════
When accessing via browser, you'll be prompted to enter this key:
${API_KEY}
In Electron mode, authentication is handled automatically.
╚═══════════════════════════════════════════════════════════════════════╝
╔═════════════════════════════════════════════════════════════════════╗
${header}
╠═════════════════════════════════════════════════════════════════════╣
║ ║
${line1}
║ ║
${line2}
║ ║
${line3}
║ ║
${line4}
║ ║
╠═════════════════════════════════════════════════════════════════════╣
${tipHeader}
╠═════════════════════════════════════════════════════════════════════╣
${line5}
${line6}
╚═════════════════════════════════════════════════════════════════════╝
`);
} else {
logger.info('API key banner hidden (AUTOMAKER_HIDE_API_KEY=true)');
@@ -318,6 +353,15 @@ function checkAuthentication(
return { authenticated: false, errorType: 'invalid_api_key' };
}
// Check for session token in query parameter (web mode - needed for image loads)
const queryToken = query.token;
if (queryToken) {
if (validateSession(queryToken)) {
return { authenticated: true };
}
return { authenticated: false, errorType: 'invalid_session' };
}
// Check for session cookie (web mode)
const sessionToken = cookies[SESSION_COOKIE_NAME];
if (sessionToken && validateSession(sessionToken)) {
@@ -333,10 +377,17 @@ function checkAuthentication(
* Accepts either:
* 1. X-API-Key header (for Electron mode)
* 2. X-Session-Token header (for web mode with explicit token)
* 3. apiKey query parameter (fallback for cases where headers can't be set)
* 4. Session cookie (for web mode)
* 3. apiKey query parameter (fallback for Electron, cases where headers can't be set)
* 4. token query parameter (fallback for web mode, needed for image loads via CSS/img tags)
* 5. Session cookie (for web mode)
*/
export function authMiddleware(req: Request, res: Response, next: NextFunction): void {
// Allow disabling auth for local/trusted networks
if (isEnvTrue(process.env.AUTOMAKER_DISABLE_AUTH)) {
next();
return;
}
const result = checkAuthentication(
req.headers as Record<string, string | string[] | undefined>,
req.query as Record<string, string | undefined>,
@@ -382,9 +433,10 @@ export function isAuthEnabled(): boolean {
* Get authentication status for health endpoint
*/
export function getAuthStatus(): { enabled: boolean; method: string } {
const disabled = isEnvTrue(process.env.AUTOMAKER_DISABLE_AUTH);
return {
enabled: true,
method: 'api_key_or_session',
enabled: !disabled,
method: disabled ? 'disabled' : 'api_key_or_session',
};
}
@@ -392,6 +444,7 @@ export function getAuthStatus(): { enabled: boolean; method: string } {
* Check if a request is authenticated (for status endpoint)
*/
export function isRequestAuthenticated(req: Request): boolean {
if (isEnvTrue(process.env.AUTOMAKER_DISABLE_AUTH)) return true;
const result = checkAuthentication(
req.headers as Record<string, string | string[] | undefined>,
req.query as Record<string, string | undefined>,
@@ -409,5 +462,6 @@ export function checkRawAuthentication(
query: Record<string, string | undefined>,
cookies: Record<string, string | undefined>
): boolean {
if (isEnvTrue(process.env.AUTOMAKER_DISABLE_AUTH)) return true;
return checkAuthentication(headers, query, cookies).authenticated;
}

View File

@@ -5,9 +5,11 @@
* Never assumes authenticated - only returns true if CLI confirms.
*/
import { spawnProcess, getCodexAuthPath } from '@automaker/platform';
import { spawnProcess } from '@automaker/platform';
import { findCodexCliPath } from '@automaker/platform';
import * as fs from 'fs';
import { createLogger } from '@automaker/utils';
const logger = createLogger('CodexAuth');
const CODEX_COMMAND = 'codex';
const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
@@ -26,36 +28,16 @@ export interface CodexAuthCheckResult {
export async function checkCodexAuthentication(
cliPath?: string | null
): Promise<CodexAuthCheckResult> {
console.log('[CodexAuth] checkCodexAuthentication called with cliPath:', cliPath);
const resolvedCliPath = cliPath || (await findCodexCliPath());
const hasApiKey = !!process.env[OPENAI_API_KEY_ENV];
console.log('[CodexAuth] resolvedCliPath:', resolvedCliPath);
console.log('[CodexAuth] hasApiKey:', hasApiKey);
// Debug: Check auth file
const authFilePath = getCodexAuthPath();
console.log('[CodexAuth] Auth file path:', authFilePath);
try {
const authFileExists = fs.existsSync(authFilePath);
console.log('[CodexAuth] Auth file exists:', authFileExists);
if (authFileExists) {
const authContent = fs.readFileSync(authFilePath, 'utf-8');
console.log('[CodexAuth] Auth file content:', authContent.substring(0, 500)); // First 500 chars
}
} catch (error) {
console.log('[CodexAuth] Error reading auth file:', error);
}
// If CLI is not installed, cannot be authenticated
if (!resolvedCliPath) {
console.log('[CodexAuth] No CLI path found, returning not authenticated');
logger.info('CLI not found');
return { authenticated: false, method: 'none' };
}
try {
console.log('[CodexAuth] Running: ' + resolvedCliPath + ' login status');
const result = await spawnProcess({
command: resolvedCliPath || CODEX_COMMAND,
args: ['login', 'status'],
@@ -66,33 +48,21 @@ export async function checkCodexAuthentication(
},
});
console.log('[CodexAuth] Command result:');
console.log('[CodexAuth] exitCode:', result.exitCode);
console.log('[CodexAuth] stdout:', JSON.stringify(result.stdout));
console.log('[CodexAuth] stderr:', JSON.stringify(result.stderr));
// Check both stdout and stderr for "logged in" - Codex CLI outputs to stderr
const combinedOutput = (result.stdout + result.stderr).toLowerCase();
const isLoggedIn = combinedOutput.includes('logged in');
console.log('[CodexAuth] isLoggedIn (contains "logged in" in stdout or stderr):', isLoggedIn);
if (result.exitCode === 0 && isLoggedIn) {
// Determine auth method based on what we know
const method = hasApiKey ? 'api_key_env' : 'cli_authenticated';
console.log('[CodexAuth] Authenticated! method:', method);
logger.info(`✓ Authenticated (${method})`);
return { authenticated: true, method };
}
console.log(
'[CodexAuth] Not authenticated. exitCode:',
result.exitCode,
'isLoggedIn:',
isLoggedIn
);
logger.info('Not authenticated');
return { authenticated: false, method: 'none' };
} catch (error) {
console.log('[CodexAuth] Error running command:', error);
logger.error('Failed to check authentication:', error);
return { authenticated: false, method: 'none' };
}
console.log('[CodexAuth] Returning not authenticated');
return { authenticated: false, method: 'none' };
}

View File

@@ -129,10 +129,30 @@ export const TOOL_PRESETS = {
specGeneration: ['Read', 'Glob', 'Grep'] as const,
/** Full tool access for feature implementation */
fullAccess: ['Read', 'Write', 'Edit', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'] as const,
fullAccess: [
'Read',
'Write',
'Edit',
'Glob',
'Grep',
'Bash',
'WebSearch',
'WebFetch',
'TodoWrite',
] as const,
/** Tools for chat/interactive mode */
chat: ['Read', 'Write', 'Edit', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'] as const,
chat: [
'Read',
'Write',
'Edit',
'Glob',
'Grep',
'Bash',
'WebSearch',
'WebFetch',
'TodoWrite',
] as const,
} as const;
/**

View File

@@ -5,12 +5,30 @@
import type { SettingsService } from '../services/settings-service.js';
import type { ContextFilesResult, ContextFileInfo } from '@automaker/utils';
import { createLogger } from '@automaker/utils';
import type { MCPServerConfig, McpServerConfig, PromptCustomization } from '@automaker/types';
import type {
MCPServerConfig,
McpServerConfig,
PromptCustomization,
ClaudeApiProfile,
ClaudeCompatibleProvider,
PhaseModelKey,
PhaseModelEntry,
Credentials,
} from '@automaker/types';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import {
mergeAutoModePrompts,
mergeAgentPrompts,
mergeBacklogPlanPrompts,
mergeEnhancementPrompts,
mergeCommitMessagePrompts,
mergeTitleGenerationPrompts,
mergeIssueValidationPrompts,
mergeIdeationPrompts,
mergeAppSpecPrompts,
mergeContextDescriptionPrompts,
mergeSuggestionsPrompts,
mergeTaskExecutionPrompts,
} from '@automaker/prompts';
const logger = createLogger('SettingsHelper');
@@ -218,6 +236,14 @@ export async function getPromptCustomization(
agent: ReturnType<typeof mergeAgentPrompts>;
backlogPlan: ReturnType<typeof mergeBacklogPlanPrompts>;
enhancement: ReturnType<typeof mergeEnhancementPrompts>;
commitMessage: ReturnType<typeof mergeCommitMessagePrompts>;
titleGeneration: ReturnType<typeof mergeTitleGenerationPrompts>;
issueValidation: ReturnType<typeof mergeIssueValidationPrompts>;
ideation: ReturnType<typeof mergeIdeationPrompts>;
appSpec: ReturnType<typeof mergeAppSpecPrompts>;
contextDescription: ReturnType<typeof mergeContextDescriptionPrompts>;
suggestions: ReturnType<typeof mergeSuggestionsPrompts>;
taskExecution: ReturnType<typeof mergeTaskExecutionPrompts>;
}> {
let customization: PromptCustomization = {};
@@ -239,6 +265,14 @@ export async function getPromptCustomization(
agent: mergeAgentPrompts(customization.agent),
backlogPlan: mergeBacklogPlanPrompts(customization.backlogPlan),
enhancement: mergeEnhancementPrompts(customization.enhancement),
commitMessage: mergeCommitMessagePrompts(customization.commitMessage),
titleGeneration: mergeTitleGenerationPrompts(customization.titleGeneration),
issueValidation: mergeIssueValidationPrompts(customization.issueValidation),
ideation: mergeIdeationPrompts(customization.ideation),
appSpec: mergeAppSpecPrompts(customization.appSpec),
contextDescription: mergeContextDescriptionPrompts(customization.contextDescription),
suggestions: mergeSuggestionsPrompts(customization.suggestions),
taskExecution: mergeTaskExecutionPrompts(customization.taskExecution),
};
}
@@ -321,3 +355,376 @@ export async function getCustomSubagents(
return Object.keys(merged).length > 0 ? merged : undefined;
}
/** Result from getActiveClaudeApiProfile */
export interface ActiveClaudeApiProfileResult {
/** The active profile, or undefined if using direct Anthropic API */
profile: ClaudeApiProfile | undefined;
/** Credentials for resolving 'credentials' apiKeySource */
credentials: import('@automaker/types').Credentials | undefined;
}
/**
* Get the active Claude API profile and credentials from settings.
* Checks project settings first for per-project overrides, then falls back to global settings.
* Returns both the profile and credentials for resolving 'credentials' apiKeySource.
*
* @deprecated Use getProviderById and getPhaseModelWithOverrides instead for the new provider system.
* This function is kept for backward compatibility during migration.
*
* @param settingsService - Optional settings service instance
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
* @param projectPath - Optional project path for per-project override
* @returns Promise resolving to object with profile and credentials
*/
export async function getActiveClaudeApiProfile(
settingsService?: SettingsService | null,
logPrefix = '[SettingsHelper]',
projectPath?: string
): Promise<ActiveClaudeApiProfileResult> {
if (!settingsService) {
return { profile: undefined, credentials: undefined };
}
try {
const globalSettings = await settingsService.getGlobalSettings();
const credentials = await settingsService.getCredentials();
const profiles = globalSettings.claudeApiProfiles || [];
// Check for project-level override first
let activeProfileId: string | null | undefined;
let isProjectOverride = false;
if (projectPath) {
const projectSettings = await settingsService.getProjectSettings(projectPath);
// undefined = use global, null = explicit no profile, string = specific profile
if (projectSettings.activeClaudeApiProfileId !== undefined) {
activeProfileId = projectSettings.activeClaudeApiProfileId;
isProjectOverride = true;
}
}
// Fall back to global if project doesn't specify
if (activeProfileId === undefined && !isProjectOverride) {
activeProfileId = globalSettings.activeClaudeApiProfileId;
}
// No active profile selected - use direct Anthropic API
if (!activeProfileId) {
if (isProjectOverride && activeProfileId === null) {
logger.info(`${logPrefix} Project explicitly using Direct Anthropic API`);
}
return { profile: undefined, credentials };
}
// Find the active profile by ID
const activeProfile = profiles.find((p) => p.id === activeProfileId);
if (activeProfile) {
const overrideSuffix = isProjectOverride ? ' (project override)' : '';
logger.info(`${logPrefix} Using Claude API profile: ${activeProfile.name}${overrideSuffix}`);
return { profile: activeProfile, credentials };
} else {
logger.warn(
`${logPrefix} Active profile ID "${activeProfileId}" not found, falling back to direct Anthropic API`
);
return { profile: undefined, credentials };
}
} catch (error) {
logger.error(`${logPrefix} Failed to load Claude API profile:`, error);
return { profile: undefined, credentials: undefined };
}
}
// ============================================================================
// New Provider System Helpers
// ============================================================================
/** Result from getProviderById */
export interface ProviderByIdResult {
/** The provider, or undefined if not found */
provider: ClaudeCompatibleProvider | undefined;
/** Credentials for resolving 'credentials' apiKeySource */
credentials: Credentials | undefined;
}
/**
* Get a ClaudeCompatibleProvider by its ID.
* Returns the provider configuration and credentials for API key resolution.
*
* @param providerId - The provider ID to look up
* @param settingsService - Settings service instance
* @param logPrefix - Prefix for log messages
* @returns Promise resolving to object with provider and credentials
*/
export async function getProviderById(
providerId: string,
settingsService: SettingsService,
logPrefix = '[SettingsHelper]'
): Promise<ProviderByIdResult> {
try {
const globalSettings = await settingsService.getGlobalSettings();
const credentials = await settingsService.getCredentials();
const providers = globalSettings.claudeCompatibleProviders || [];
const provider = providers.find((p) => p.id === providerId);
if (provider) {
if (provider.enabled === false) {
logger.warn(`${logPrefix} Provider "${provider.name}" (${providerId}) is disabled`);
} else {
logger.debug(`${logPrefix} Found provider: ${provider.name}`);
}
return { provider, credentials };
} else {
logger.warn(`${logPrefix} Provider not found: ${providerId}`);
return { provider: undefined, credentials };
}
} catch (error) {
logger.error(`${logPrefix} Failed to load provider by ID:`, error);
return { provider: undefined, credentials: undefined };
}
}
/** Result from getPhaseModelWithOverrides */
export interface PhaseModelWithOverridesResult {
/** The resolved phase model entry */
phaseModel: PhaseModelEntry;
/** Whether a project override was applied */
isProjectOverride: boolean;
/** The provider if providerId is set and found */
provider: ClaudeCompatibleProvider | undefined;
/** Credentials for API key resolution */
credentials: Credentials | undefined;
}
/**
* Get the phase model configuration for a specific phase, applying project overrides if available.
* Also resolves the provider if the phase model has a providerId.
*
* @param phase - The phase key (e.g., 'enhancementModel', 'specGenerationModel')
* @param settingsService - Optional settings service instance (returns defaults if undefined)
* @param projectPath - Optional project path for checking overrides
* @param logPrefix - Prefix for log messages
* @returns Promise resolving to phase model with provider info
*/
export async function getPhaseModelWithOverrides(
phase: PhaseModelKey,
settingsService?: SettingsService | null,
projectPath?: string,
logPrefix = '[SettingsHelper]'
): Promise<PhaseModelWithOverridesResult> {
// Handle undefined settingsService gracefully
if (!settingsService) {
logger.info(`${logPrefix} SettingsService not available, using default for ${phase}`);
return {
phaseModel: DEFAULT_PHASE_MODELS[phase] || { model: 'sonnet' },
isProjectOverride: false,
provider: undefined,
credentials: undefined,
};
}
try {
const globalSettings = await settingsService.getGlobalSettings();
const credentials = await settingsService.getCredentials();
const globalPhaseModels = globalSettings.phaseModels || {};
// Start with global phase model
let phaseModel = globalPhaseModels[phase];
let isProjectOverride = false;
// Check for project override
if (projectPath) {
const projectSettings = await settingsService.getProjectSettings(projectPath);
const projectOverrides = projectSettings.phaseModelOverrides || {};
if (projectOverrides[phase]) {
phaseModel = projectOverrides[phase];
isProjectOverride = true;
logger.debug(`${logPrefix} Using project override for ${phase}`);
}
}
// If no phase model found, use per-phase default
if (!phaseModel) {
phaseModel = DEFAULT_PHASE_MODELS[phase] || { model: 'sonnet' };
logger.debug(`${logPrefix} No ${phase} configured, using default: ${phaseModel.model}`);
}
// Resolve provider if providerId is set
let provider: ClaudeCompatibleProvider | undefined;
if (phaseModel.providerId) {
const providers = globalSettings.claudeCompatibleProviders || [];
provider = providers.find((p) => p.id === phaseModel.providerId);
if (provider) {
if (provider.enabled === false) {
logger.warn(
`${logPrefix} Provider "${provider.name}" for ${phase} is disabled, falling back to direct API`
);
provider = undefined;
} else {
logger.debug(`${logPrefix} Using provider "${provider.name}" for ${phase}`);
}
} else {
logger.warn(
`${logPrefix} Provider ${phaseModel.providerId} not found for ${phase}, falling back to direct API`
);
}
}
return {
phaseModel,
isProjectOverride,
provider,
credentials,
};
} catch (error) {
logger.error(`${logPrefix} Failed to get phase model with overrides:`, error);
// Return a safe default
return {
phaseModel: { model: 'sonnet' },
isProjectOverride: false,
provider: undefined,
credentials: undefined,
};
}
}
/** Result from getProviderByModelId */
export interface ProviderByModelIdResult {
/** The provider that contains this model, or undefined if not found */
provider: ClaudeCompatibleProvider | undefined;
/** The model configuration if found */
modelConfig: import('@automaker/types').ProviderModel | undefined;
/** Credentials for API key resolution */
credentials: Credentials | undefined;
/** The resolved Claude model ID to use for API calls (from mapsToClaudeModel) */
resolvedModel: string | undefined;
}
/**
* Find a ClaudeCompatibleProvider by one of its model IDs.
* Searches through all enabled providers to find one that contains the specified model.
* This is useful when you have a model string from the UI but need the provider config.
*
* Also resolves the `mapsToClaudeModel` field to get the actual Claude model ID to use
* when calling the API (e.g., "GLM-4.5-Air" -> "claude-haiku-4-5").
*
* @param modelId - The model ID to search for (e.g., "GLM-4.7", "MiniMax-M2.1")
* @param settingsService - Settings service instance
* @param logPrefix - Prefix for log messages
* @returns Promise resolving to object with provider, model config, credentials, and resolved model
*/
export async function getProviderByModelId(
modelId: string,
settingsService: SettingsService,
logPrefix = '[SettingsHelper]'
): Promise<ProviderByModelIdResult> {
try {
const globalSettings = await settingsService.getGlobalSettings();
const credentials = await settingsService.getCredentials();
const providers = globalSettings.claudeCompatibleProviders || [];
// Search through all enabled providers for this model
for (const provider of providers) {
// Skip disabled providers
if (provider.enabled === false) {
continue;
}
// Check if this provider has the model
const modelConfig = provider.models?.find(
(m) => m.id === modelId || m.id.toLowerCase() === modelId.toLowerCase()
);
if (modelConfig) {
logger.info(`${logPrefix} Found model "${modelId}" in provider "${provider.name}"`);
// Resolve the mapped Claude model if specified
let resolvedModel: string | undefined;
if (modelConfig.mapsToClaudeModel) {
// Import resolveModelString to convert alias to full model ID
const { resolveModelString } = await import('@automaker/model-resolver');
resolvedModel = resolveModelString(modelConfig.mapsToClaudeModel);
logger.info(
`${logPrefix} Model "${modelId}" maps to Claude model "${modelConfig.mapsToClaudeModel}" -> "${resolvedModel}"`
);
}
return { provider, modelConfig, credentials, resolvedModel };
}
}
// Model not found in any provider
logger.debug(`${logPrefix} Model "${modelId}" not found in any provider`);
return {
provider: undefined,
modelConfig: undefined,
credentials: undefined,
resolvedModel: undefined,
};
} catch (error) {
logger.error(`${logPrefix} Failed to find provider by model ID:`, error);
return {
provider: undefined,
modelConfig: undefined,
credentials: undefined,
resolvedModel: undefined,
};
}
}
/**
* Get all enabled provider models for use in model dropdowns.
* Returns models from all enabled ClaudeCompatibleProviders.
*
* @param settingsService - Settings service instance
* @param logPrefix - Prefix for log messages
* @returns Promise resolving to array of provider models with their provider info
*/
export async function getAllProviderModels(
settingsService: SettingsService,
logPrefix = '[SettingsHelper]'
): Promise<
Array<{
providerId: string;
providerName: string;
model: import('@automaker/types').ProviderModel;
}>
> {
try {
const globalSettings = await settingsService.getGlobalSettings();
const providers = globalSettings.claudeCompatibleProviders || [];
const allModels: Array<{
providerId: string;
providerName: string;
model: import('@automaker/types').ProviderModel;
}> = [];
for (const provider of providers) {
// Skip disabled providers
if (provider.enabled === false) {
continue;
}
for (const model of provider.models || []) {
allModels.push({
providerId: provider.id,
providerName: provider.name,
model,
});
}
}
logger.debug(
`${logPrefix} Found ${allModels.length} models from ${providers.length} providers`
);
return allModels;
} catch (error) {
logger.error(`${logPrefix} Failed to get all provider models:`, error);
return [];
}
}

View File

@@ -5,22 +5,24 @@
import * as secureFs from './secure-fs.js';
import * as path from 'path';
import type { PRState, WorktreePRInfo } from '@automaker/types';
// Re-export types for backwards compatibility
export type { PRState, WorktreePRInfo };
/** Maximum length for sanitized branch names in filesystem paths */
const MAX_SANITIZED_BRANCH_PATH_LENGTH = 200;
export interface WorktreePRInfo {
number: number;
url: string;
title: string;
state: string;
createdAt: string;
}
export interface WorktreeMetadata {
branch: string;
createdAt: string;
pr?: WorktreePRInfo;
/** Whether the init script has been executed for this worktree */
initScriptRan?: boolean;
/** Status of the init script execution */
initScriptStatus?: 'running' | 'success' | 'failed';
/** Error message if init script failed */
initScriptError?: string;
}
/**

View File

@@ -0,0 +1,611 @@
/**
* XML Extraction Utilities
*
* Robust XML parsing utilities for extracting and updating sections
* from app_spec.txt XML content. Uses regex-based parsing which is
* sufficient for our controlled XML structure.
*
* Note: If more complex XML parsing is needed in the future, consider
* using a library like 'fast-xml-parser' or 'xml2js'.
*/
import { createLogger } from '@automaker/utils';
import type { SpecOutput } from '@automaker/types';
const logger = createLogger('XmlExtractor');
/**
* Represents an implemented feature extracted from XML
*/
export interface ImplementedFeature {
name: string;
description: string;
file_locations?: string[];
}
/**
* Logger interface for optional custom logging
*/
export interface XmlExtractorLogger {
debug: (message: string, ...args: unknown[]) => void;
warn?: (message: string, ...args: unknown[]) => void;
}
/**
* Options for XML extraction operations
*/
export interface ExtractXmlOptions {
/** Custom logger (defaults to internal logger) */
logger?: XmlExtractorLogger;
}
/**
* Escape special XML characters
* Handles undefined/null values by converting them to empty strings
*/
export function escapeXml(str: string | undefined | null): string {
if (str == null) {
return '';
}
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&apos;');
}
/**
* Unescape XML entities back to regular characters
*/
export function unescapeXml(str: string): string {
return str
.replace(/&apos;/g, "'")
.replace(/&quot;/g, '"')
.replace(/&gt;/g, '>')
.replace(/&lt;/g, '<')
.replace(/&amp;/g, '&');
}
/**
* Extract the content of a specific XML section
*
* @param xmlContent - The full XML content
* @param tagName - The tag name to extract (e.g., 'implemented_features')
* @param options - Optional extraction options
* @returns The content between the tags, or null if not found
*/
export function extractXmlSection(
xmlContent: string,
tagName: string,
options: ExtractXmlOptions = {}
): string | null {
const log = options.logger || logger;
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)<\\/${tagName}>`, 'i');
const match = xmlContent.match(regex);
if (match) {
log.debug(`Extracted <${tagName}> section`);
return match[1];
}
log.debug(`Section <${tagName}> not found`);
return null;
}
/**
* Extract all values from repeated XML elements
*
* @param xmlContent - The XML content to search
* @param tagName - The tag name to extract values from
* @param options - Optional extraction options
* @returns Array of extracted values (unescaped)
*/
export function extractXmlElements(
xmlContent: string,
tagName: string,
options: ExtractXmlOptions = {}
): string[] {
const log = options.logger || logger;
const values: string[] = [];
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)<\\/${tagName}>`, 'g');
const matches = xmlContent.matchAll(regex);
for (const match of matches) {
values.push(unescapeXml(match[1].trim()));
}
log.debug(`Extracted ${values.length} <${tagName}> elements`);
return values;
}
/**
* Extract implemented features from app_spec.txt XML content
*
* @param specContent - The full XML content of app_spec.txt
* @param options - Optional extraction options
* @returns Array of implemented features with name, description, and optional file_locations
*/
export function extractImplementedFeatures(
specContent: string,
options: ExtractXmlOptions = {}
): ImplementedFeature[] {
const log = options.logger || logger;
const features: ImplementedFeature[] = [];
// Match <implemented_features>...</implemented_features> section
const implementedSection = extractXmlSection(specContent, 'implemented_features', options);
if (!implementedSection) {
log.debug('No implemented_features section found');
return features;
}
// Extract individual feature blocks
const featureRegex = /<feature>([\s\S]*?)<\/feature>/g;
const featureMatches = implementedSection.matchAll(featureRegex);
for (const featureMatch of featureMatches) {
const featureContent = featureMatch[1];
// Extract name
const nameMatch = featureContent.match(/<name>([\s\S]*?)<\/name>/);
const name = nameMatch ? unescapeXml(nameMatch[1].trim()) : '';
// Extract description
const descMatch = featureContent.match(/<description>([\s\S]*?)<\/description>/);
const description = descMatch ? unescapeXml(descMatch[1].trim()) : '';
// Extract file_locations if present
const locationsSection = extractXmlSection(featureContent, 'file_locations', options);
const file_locations = locationsSection
? extractXmlElements(locationsSection, 'location', options)
: undefined;
if (name) {
features.push({
name,
description,
...(file_locations && file_locations.length > 0 ? { file_locations } : {}),
});
}
}
log.debug(`Extracted ${features.length} implemented features`);
return features;
}
/**
* Extract only the feature names from implemented_features section
*
* @param specContent - The full XML content of app_spec.txt
* @param options - Optional extraction options
* @returns Array of feature names
*/
export function extractImplementedFeatureNames(
specContent: string,
options: ExtractXmlOptions = {}
): string[] {
const features = extractImplementedFeatures(specContent, options);
return features.map((f) => f.name);
}
/**
* Generate XML for a single implemented feature
*
* @param feature - The feature to convert to XML
* @param indent - The base indentation level (default: 2 spaces)
* @returns XML string for the feature
*/
export function featureToXml(feature: ImplementedFeature, indent: string = ' '): string {
const i2 = indent.repeat(2);
const i3 = indent.repeat(3);
const i4 = indent.repeat(4);
let xml = `${i2}<feature>
${i3}<name>${escapeXml(feature.name)}</name>
${i3}<description>${escapeXml(feature.description)}</description>`;
if (feature.file_locations && feature.file_locations.length > 0) {
xml += `
${i3}<file_locations>
${feature.file_locations.map((loc) => `${i4}<location>${escapeXml(loc)}</location>`).join('\n')}
${i3}</file_locations>`;
}
xml += `
${i2}</feature>`;
return xml;
}
/**
* Generate XML for an array of implemented features
*
* @param features - Array of features to convert to XML
* @param indent - The base indentation level (default: 2 spaces)
* @returns XML string for the implemented_features section content
*/
export function featuresToXml(features: ImplementedFeature[], indent: string = ' '): string {
return features.map((f) => featureToXml(f, indent)).join('\n');
}
/**
* Update the implemented_features section in XML content
*
* @param specContent - The full XML content
* @param newFeatures - The new features to set
* @param options - Optional extraction options
* @returns Updated XML content with the new implemented_features section
*/
export function updateImplementedFeaturesSection(
specContent: string,
newFeatures: ImplementedFeature[],
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
const indent = ' ';
// Generate new section content
const newSectionContent = featuresToXml(newFeatures, indent);
// Build the new section
const newSection = `<implemented_features>
${newSectionContent}
${indent}</implemented_features>`;
// Check if section exists
const sectionRegex = /<implemented_features>[\s\S]*?<\/implemented_features>/;
if (sectionRegex.test(specContent)) {
log.debug('Replacing existing implemented_features section');
return specContent.replace(sectionRegex, newSection);
}
// If section doesn't exist, try to insert after core_capabilities
const coreCapabilitiesEnd = '</core_capabilities>';
const insertIndex = specContent.indexOf(coreCapabilitiesEnd);
if (insertIndex !== -1) {
const insertPosition = insertIndex + coreCapabilitiesEnd.length;
log.debug('Inserting implemented_features after core_capabilities');
return (
specContent.slice(0, insertPosition) +
'\n\n' +
indent +
newSection +
specContent.slice(insertPosition)
);
}
// As a fallback, insert before </project_specification>
const projectSpecEnd = '</project_specification>';
const fallbackIndex = specContent.indexOf(projectSpecEnd);
if (fallbackIndex !== -1) {
log.debug('Inserting implemented_features before </project_specification>');
return (
specContent.slice(0, fallbackIndex) +
indent +
newSection +
'\n' +
specContent.slice(fallbackIndex)
);
}
log.warn?.('Could not find appropriate insertion point for implemented_features');
log.debug('Could not find appropriate insertion point for implemented_features');
return specContent;
}
/**
* Add a new feature to the implemented_features section
*
* @param specContent - The full XML content
* @param newFeature - The feature to add
* @param options - Optional extraction options
* @returns Updated XML content with the new feature added
*/
export function addImplementedFeature(
specContent: string,
newFeature: ImplementedFeature,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Check for duplicates by name
const isDuplicate = existingFeatures.some(
(f) => f.name.toLowerCase() === newFeature.name.toLowerCase()
);
if (isDuplicate) {
log.debug(`Feature "${newFeature.name}" already exists, skipping`);
return specContent;
}
// Add the new feature
const updatedFeatures = [...existingFeatures, newFeature];
log.debug(`Adding feature "${newFeature.name}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Remove a feature from the implemented_features section by name
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to remove
* @param options - Optional extraction options
* @returns Updated XML content with the feature removed
*/
export function removeImplementedFeature(
specContent: string,
featureName: string,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Filter out the feature to remove
const updatedFeatures = existingFeatures.filter(
(f) => f.name.toLowerCase() !== featureName.toLowerCase()
);
if (updatedFeatures.length === existingFeatures.length) {
log.debug(`Feature "${featureName}" not found, no changes made`);
return specContent;
}
log.debug(`Removing feature "${featureName}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Update an existing feature in the implemented_features section
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to update
* @param updates - Partial updates to apply to the feature
* @param options - Optional extraction options
* @returns Updated XML content with the feature modified
*/
export function updateImplementedFeature(
specContent: string,
featureName: string,
updates: Partial<ImplementedFeature>,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Find and update the feature
let found = false;
const updatedFeatures = existingFeatures.map((f) => {
if (f.name.toLowerCase() === featureName.toLowerCase()) {
found = true;
return {
...f,
...updates,
// Preserve the original name if not explicitly updated
name: updates.name ?? f.name,
};
}
return f;
});
if (!found) {
log.debug(`Feature "${featureName}" not found, no changes made`);
return specContent;
}
log.debug(`Updating feature "${featureName}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Check if a feature exists in the implemented_features section
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to check
* @param options - Optional extraction options
* @returns True if the feature exists
*/
export function hasImplementedFeature(
specContent: string,
featureName: string,
options: ExtractXmlOptions = {}
): boolean {
const features = extractImplementedFeatures(specContent, options);
return features.some((f) => f.name.toLowerCase() === featureName.toLowerCase());
}
/**
* Convert extracted features to SpecOutput.implemented_features format
*
* @param features - Array of extracted features
* @returns Features in SpecOutput format
*/
export function toSpecOutputFeatures(
features: ImplementedFeature[]
): SpecOutput['implemented_features'] {
return features.map((f) => ({
name: f.name,
description: f.description,
...(f.file_locations && f.file_locations.length > 0
? { file_locations: f.file_locations }
: {}),
}));
}
/**
* Convert SpecOutput.implemented_features to ImplementedFeature format
*
* @param specFeatures - Features from SpecOutput
* @returns Features in ImplementedFeature format
*/
export function fromSpecOutputFeatures(
specFeatures: SpecOutput['implemented_features']
): ImplementedFeature[] {
return specFeatures.map((f) => ({
name: f.name,
description: f.description,
...(f.file_locations && f.file_locations.length > 0
? { file_locations: f.file_locations }
: {}),
}));
}
/**
* Represents a roadmap phase extracted from XML
*/
export interface RoadmapPhase {
name: string;
status: string;
description?: string;
}
/**
* Extract the technology stack from app_spec.txt XML content
*
* @param specContent - The full XML content
* @param options - Optional extraction options
* @returns Array of technology names
*/
export function extractTechnologyStack(
specContent: string,
options: ExtractXmlOptions = {}
): string[] {
const log = options.logger || logger;
const techSection = extractXmlSection(specContent, 'technology_stack', options);
if (!techSection) {
log.debug('No technology_stack section found');
return [];
}
const technologies = extractXmlElements(techSection, 'technology', options);
log.debug(`Extracted ${technologies.length} technologies`);
return technologies;
}
/**
* Update the technology_stack section in XML content
*
* @param specContent - The full XML content
* @param technologies - The new technology list
* @param options - Optional extraction options
* @returns Updated XML content
*/
export function updateTechnologyStack(
specContent: string,
technologies: string[],
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
const indent = ' ';
const i2 = indent.repeat(2);
// Generate new section content
const techXml = technologies
.map((t) => `${i2}<technology>${escapeXml(t)}</technology>`)
.join('\n');
const newSection = `<technology_stack>\n${techXml}\n${indent}</technology_stack>`;
// Check if section exists
const sectionRegex = /<technology_stack>[\s\S]*?<\/technology_stack>/;
if (sectionRegex.test(specContent)) {
log.debug('Replacing existing technology_stack section');
return specContent.replace(sectionRegex, newSection);
}
log.debug('No technology_stack section found to update');
return specContent;
}
/**
* Extract roadmap phases from app_spec.txt XML content
*
* @param specContent - The full XML content
* @param options - Optional extraction options
* @returns Array of roadmap phases
*/
export function extractRoadmapPhases(
specContent: string,
options: ExtractXmlOptions = {}
): RoadmapPhase[] {
const log = options.logger || logger;
const phases: RoadmapPhase[] = [];
const roadmapSection = extractXmlSection(specContent, 'implementation_roadmap', options);
if (!roadmapSection) {
log.debug('No implementation_roadmap section found');
return phases;
}
// Extract individual phase blocks
const phaseRegex = /<phase>([\s\S]*?)<\/phase>/g;
const phaseMatches = roadmapSection.matchAll(phaseRegex);
for (const phaseMatch of phaseMatches) {
const phaseContent = phaseMatch[1];
const nameMatch = phaseContent.match(/<name>([\s\S]*?)<\/name>/);
const name = nameMatch ? unescapeXml(nameMatch[1].trim()) : '';
const statusMatch = phaseContent.match(/<status>([\s\S]*?)<\/status>/);
const status = statusMatch ? unescapeXml(statusMatch[1].trim()) : 'pending';
const descMatch = phaseContent.match(/<description>([\s\S]*?)<\/description>/);
const description = descMatch ? unescapeXml(descMatch[1].trim()) : undefined;
if (name) {
phases.push({ name, status, description });
}
}
log.debug(`Extracted ${phases.length} roadmap phases`);
return phases;
}
/**
* Update a roadmap phase status in XML content
*
* @param specContent - The full XML content
* @param phaseName - The name of the phase to update
* @param newStatus - The new status value
* @param options - Optional extraction options
* @returns Updated XML content
*/
export function updateRoadmapPhaseStatus(
specContent: string,
phaseName: string,
newStatus: string,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Find the phase and update its status
// Match the phase block containing the specific name
const phaseRegex = new RegExp(
`(<phase>\\s*<name>\\s*${escapeXml(phaseName)}\\s*<\\/name>\\s*<status>)[\\s\\S]*?(<\\/status>)`,
'i'
);
if (phaseRegex.test(specContent)) {
log.debug(`Updating phase "${phaseName}" status to "${newStatus}"`);
return specContent.replace(phaseRegex, `$1${escapeXml(newStatus)}$2`);
}
log.debug(`Phase "${phaseName}" not found`);
return specContent;
}

View File

@@ -8,12 +8,28 @@ import type { Request, Response, NextFunction } from 'express';
import { validatePath, PathNotAllowedError } from '@automaker/platform';
/**
* Creates a middleware that validates specified path parameters in req.body
* Helper to get parameter value from request (checks body first, then query)
*/
function getParamValue(req: Request, paramName: string): unknown {
// Check body first (for POST/PUT/PATCH requests)
if (req.body && req.body[paramName] !== undefined) {
return req.body[paramName];
}
// Fall back to query params (for GET requests)
if (req.query && req.query[paramName] !== undefined) {
return req.query[paramName];
}
return undefined;
}
/**
* Creates a middleware that validates specified path parameters in req.body or req.query
* @param paramNames - Names of parameters to validate (e.g., 'projectPath', 'worktreePath')
* @example
* router.post('/create', validatePathParams('projectPath'), handler);
* router.post('/delete', validatePathParams('projectPath', 'worktreePath'), handler);
* router.post('/send', validatePathParams('workingDirectory?', 'imagePaths[]'), handler);
* router.get('/logs', validatePathParams('worktreePath'), handler); // Works with query params too
*
* Special syntax:
* - 'paramName?' - Optional parameter (only validated if present)
@@ -26,8 +42,8 @@ export function validatePathParams(...paramNames: string[]) {
// Handle optional parameters (paramName?)
if (paramName.endsWith('?')) {
const actualName = paramName.slice(0, -1);
const value = req.body[actualName];
if (value) {
const value = getParamValue(req, actualName);
if (value && typeof value === 'string') {
validatePath(value);
}
continue;
@@ -36,18 +52,20 @@ export function validatePathParams(...paramNames: string[]) {
// Handle array parameters (paramName[])
if (paramName.endsWith('[]')) {
const actualName = paramName.slice(0, -2);
const values = req.body[actualName];
const values = getParamValue(req, actualName);
if (Array.isArray(values) && values.length > 0) {
for (const value of values) {
validatePath(value);
if (typeof value === 'string') {
validatePath(value);
}
}
}
continue;
}
// Handle regular parameters
const value = req.body[paramName];
if (value) {
const value = getParamValue(req, paramName);
if (value && typeof value === 'string') {
validatePath(value);
}
}

View File

@@ -10,7 +10,21 @@ import { BaseProvider } from './base-provider.js';
import { classifyError, getUserFriendlyErrorMessage, createLogger } from '@automaker/utils';
const logger = createLogger('ClaudeProvider');
import { getThinkingTokenBudget, validateBareModelId } from '@automaker/types';
import {
getThinkingTokenBudget,
validateBareModelId,
type ClaudeApiProfile,
type ClaudeCompatibleProvider,
type Credentials,
} from '@automaker/types';
/**
* ProviderConfig - Union type for provider configuration
*
* Accepts either the legacy ClaudeApiProfile or new ClaudeCompatibleProvider.
* Both share the same connection settings structure.
*/
type ProviderConfig = ClaudeApiProfile | ClaudeCompatibleProvider;
import type {
ExecuteOptions,
ProviderMessage,
@@ -21,7 +35,19 @@ import type {
// Explicit allowlist of environment variables to pass to the SDK.
// Only these vars are passed - nothing else from process.env leaks through.
const ALLOWED_ENV_VARS = [
// Authentication
'ANTHROPIC_API_KEY',
'ANTHROPIC_AUTH_TOKEN',
// Endpoint configuration
'ANTHROPIC_BASE_URL',
'API_TIMEOUT_MS',
// Model mappings
'ANTHROPIC_DEFAULT_HAIKU_MODEL',
'ANTHROPIC_DEFAULT_SONNET_MODEL',
'ANTHROPIC_DEFAULT_OPUS_MODEL',
// Traffic control
'CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC',
// System vars (always from process.env)
'PATH',
'HOME',
'SHELL',
@@ -31,16 +57,132 @@ const ALLOWED_ENV_VARS = [
'LC_ALL',
];
// System vars are always passed from process.env regardless of profile
const SYSTEM_ENV_VARS = ['PATH', 'HOME', 'SHELL', 'TERM', 'USER', 'LANG', 'LC_ALL'];
/**
* Build environment for the SDK with only explicitly allowed variables
* Check if the config is a ClaudeCompatibleProvider (new system)
* by checking for the 'models' array property
*/
function buildEnv(): Record<string, string | undefined> {
function isClaudeCompatibleProvider(config: ProviderConfig): config is ClaudeCompatibleProvider {
return 'models' in config && Array.isArray(config.models);
}
/**
* Build environment for the SDK with only explicitly allowed variables.
* When a provider/profile is provided, uses its configuration (clean switch - don't inherit from process.env).
* When no provider is provided, uses direct Anthropic API settings from process.env.
*
* Supports both:
* - ClaudeCompatibleProvider (new system with models[] array)
* - ClaudeApiProfile (legacy system with modelMappings)
*
* @param providerConfig - Optional provider configuration for alternative endpoint
* @param credentials - Optional credentials object for resolving 'credentials' apiKeySource
*/
function buildEnv(
providerConfig?: ProviderConfig,
credentials?: Credentials
): Record<string, string | undefined> {
const env: Record<string, string | undefined> = {};
for (const key of ALLOWED_ENV_VARS) {
if (providerConfig) {
// Use provider configuration (clean switch - don't inherit non-system vars from process.env)
logger.debug('[buildEnv] Using provider configuration:', {
name: providerConfig.name,
baseUrl: providerConfig.baseUrl,
apiKeySource: providerConfig.apiKeySource ?? 'inline',
isNewProvider: isClaudeCompatibleProvider(providerConfig),
});
// Resolve API key based on source strategy
let apiKey: string | undefined;
const source = providerConfig.apiKeySource ?? 'inline'; // Default to inline for backwards compat
switch (source) {
case 'inline':
apiKey = providerConfig.apiKey;
break;
case 'env':
apiKey = process.env.ANTHROPIC_API_KEY;
break;
case 'credentials':
apiKey = credentials?.apiKeys?.anthropic;
break;
}
// Warn if no API key found
if (!apiKey) {
logger.warn(`No API key found for provider "${providerConfig.name}" with source "${source}"`);
}
// Authentication
if (providerConfig.useAuthToken) {
env['ANTHROPIC_AUTH_TOKEN'] = apiKey;
} else {
env['ANTHROPIC_API_KEY'] = apiKey;
}
// Endpoint configuration
env['ANTHROPIC_BASE_URL'] = providerConfig.baseUrl;
logger.debug(`[buildEnv] Set ANTHROPIC_BASE_URL to: ${providerConfig.baseUrl}`);
if (providerConfig.timeoutMs) {
env['API_TIMEOUT_MS'] = String(providerConfig.timeoutMs);
}
// Model mappings - only for legacy ClaudeApiProfile
// For ClaudeCompatibleProvider, the model is passed directly (no mapping needed)
if (!isClaudeCompatibleProvider(providerConfig) && providerConfig.modelMappings) {
if (providerConfig.modelMappings.haiku) {
env['ANTHROPIC_DEFAULT_HAIKU_MODEL'] = providerConfig.modelMappings.haiku;
}
if (providerConfig.modelMappings.sonnet) {
env['ANTHROPIC_DEFAULT_SONNET_MODEL'] = providerConfig.modelMappings.sonnet;
}
if (providerConfig.modelMappings.opus) {
env['ANTHROPIC_DEFAULT_OPUS_MODEL'] = providerConfig.modelMappings.opus;
}
}
// Traffic control
if (providerConfig.disableNonessentialTraffic) {
env['CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC'] = '1';
}
} else {
// Use direct Anthropic API - pass through credentials or environment variables
// This supports:
// 1. API Key mode: ANTHROPIC_API_KEY from credentials (UI settings) or env
// 2. Claude Max plan: Uses CLI OAuth auth (SDK handles this automatically)
// 3. Custom endpoints via ANTHROPIC_BASE_URL env var (backward compatibility)
//
// Priority: credentials file (UI settings) -> environment variable
// Note: Only auth and endpoint vars are passed. Model mappings and traffic
// control are NOT passed (those require a profile for explicit configuration).
if (credentials?.apiKeys?.anthropic) {
env['ANTHROPIC_API_KEY'] = credentials.apiKeys.anthropic;
} else if (process.env.ANTHROPIC_API_KEY) {
env['ANTHROPIC_API_KEY'] = process.env.ANTHROPIC_API_KEY;
}
// If using Claude Max plan via CLI auth, the SDK handles auth automatically
// when no API key is provided. We don't set ANTHROPIC_AUTH_TOKEN here
// unless it was explicitly set in process.env (rare edge case).
if (process.env.ANTHROPIC_AUTH_TOKEN) {
env['ANTHROPIC_AUTH_TOKEN'] = process.env.ANTHROPIC_AUTH_TOKEN;
}
// Pass through ANTHROPIC_BASE_URL if set in environment (backward compatibility)
if (process.env.ANTHROPIC_BASE_URL) {
env['ANTHROPIC_BASE_URL'] = process.env.ANTHROPIC_BASE_URL;
}
}
// Always add system vars from process.env
for (const key of SYSTEM_ENV_VARS) {
if (process.env[key]) {
env[key] = process.env[key];
}
}
return env;
}
@@ -68,8 +210,15 @@ export class ClaudeProvider extends BaseProvider {
conversationHistory,
sdkSessionId,
thinkingLevel,
claudeApiProfile,
claudeCompatibleProvider,
credentials,
} = options;
// Determine which provider config to use
// claudeCompatibleProvider takes precedence over claudeApiProfile
const providerConfig = claudeCompatibleProvider || claudeApiProfile;
// Convert thinking level to token budget
const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
@@ -80,7 +229,9 @@ export class ClaudeProvider extends BaseProvider {
maxTurns,
cwd,
// Pass only explicitly allowed environment variables to SDK
env: buildEnv(),
// When a provider is active, uses provider settings (clean switch)
// When no provider, uses direct Anthropic API (from process.env or CLI OAuth)
env: buildEnv(providerConfig, credentials),
// Pass through allowedTools if provided by caller (decided by sdk-options.ts)
...(allowedTools && { allowedTools }),
// AUTONOMOUS MODE: Always bypass permissions for fully autonomous operation
@@ -99,6 +250,8 @@ export class ClaudeProvider extends BaseProvider {
...(maxThinkingTokens && { maxThinkingTokens }),
// Subagents configuration for specialized task delegation
...(options.agents && { agents: options.agents }),
// Pass through outputFormat for structured JSON outputs
...(options.outputFormat && { outputFormat: options.outputFormat }),
};
// Build prompt payload
@@ -123,6 +276,18 @@ export class ClaudeProvider extends BaseProvider {
promptPayload = prompt;
}
// Log the environment being passed to the SDK for debugging
const envForSdk = sdkOptions.env as Record<string, string | undefined>;
logger.debug('[ClaudeProvider] SDK Configuration:', {
model: sdkOptions.model,
baseUrl: envForSdk?.['ANTHROPIC_BASE_URL'] || '(default Anthropic API)',
hasApiKey: !!envForSdk?.['ANTHROPIC_API_KEY'],
hasAuthToken: !!envForSdk?.['ANTHROPIC_AUTH_TOKEN'],
providerName: providerConfig?.name || '(direct Anthropic)',
maxTurns: sdkOptions.maxTurns,
maxThinkingTokens: sdkOptions.maxThinkingTokens,
});
// Execute via Claude Agent SDK
try {
const stream = query({ prompt: promptPayload, options: sdkOptions });

View File

@@ -26,22 +26,23 @@
* ```
*/
import { execSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { BaseProvider } from './base-provider.js';
import type { ProviderConfig, ExecuteOptions, ProviderMessage } from './types.js';
import {
spawnJSONLProcess,
type SubprocessOptions,
isWslAvailable,
findCliInWsl,
createWslCommand,
findCliInWsl,
isWslAvailable,
spawnJSONLProcess,
windowsToWslPath,
type SubprocessOptions,
type WslCliResult,
} from '@automaker/platform';
import { calculateReasoningTimeout } from '@automaker/types';
import { createLogger, isAbortError } from '@automaker/utils';
import { execSync } from 'child_process';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { BaseProvider } from './base-provider.js';
import type { ExecuteOptions, ProviderConfig, ProviderMessage } from './types.js';
/**
* Spawn strategy for CLI tools on Windows
@@ -107,6 +108,15 @@ export interface CliDetectionResult {
// Create logger for CLI operations
const cliLogger = createLogger('CliProvider');
/**
* Base timeout for CLI operations in milliseconds.
* CLI tools have longer startup and processing times compared to direct API calls,
* so we use a higher base timeout (120s) than the default provider timeout (30s).
* This is multiplied by reasoning effort multipliers when applicable.
* @see calculateReasoningTimeout from @automaker/types
*/
const CLI_BASE_TIMEOUT_MS = 120000;
/**
* Abstract base class for CLI-based providers
*
@@ -450,6 +460,10 @@ export abstract class CliProvider extends BaseProvider {
}
}
// Calculate dynamic timeout based on reasoning effort.
// This addresses GitHub issue #530 where reasoning models with 'xhigh' effort would timeout.
const timeout = calculateReasoningTimeout(options.reasoningEffort, CLI_BASE_TIMEOUT_MS);
// WSL strategy
if (this.useWsl && this.wslCliPath) {
const wslCwd = windowsToWslPath(cwd);
@@ -473,7 +487,7 @@ export abstract class CliProvider extends BaseProvider {
cwd, // Windows cwd for spawn
env: filteredEnv,
abortController: options.abortController,
timeout: 120000, // CLI operations may take longer
timeout,
};
}
@@ -488,7 +502,7 @@ export abstract class CliProvider extends BaseProvider {
cwd,
env: filteredEnv,
abortController: options.abortController,
timeout: 120000,
timeout,
};
}
@@ -501,7 +515,7 @@ export abstract class CliProvider extends BaseProvider {
cwd,
env: filteredEnv,
abortController: options.abortController,
timeout: 120000,
timeout,
};
}
@@ -522,8 +536,13 @@ export abstract class CliProvider extends BaseProvider {
throw new Error(`${this.getCliName()} CLI not found. ${this.getInstallInstructions()}`);
}
const cliArgs = this.buildCliArgs(options);
const subprocessOptions = this.buildSubprocessOptions(options, cliArgs);
// Many CLI-based providers do not support a separate "system" message.
// If a systemPrompt is provided, embed it into the prompt so downstream models
// still receive critical formatting/schema instructions (e.g., JSON-only outputs).
const effectiveOptions = this.embedSystemPromptIntoPrompt(options);
const cliArgs = this.buildCliArgs(effectiveOptions);
const subprocessOptions = this.buildSubprocessOptions(effectiveOptions, cliArgs);
try {
for await (const rawEvent of spawnJSONLProcess(subprocessOptions)) {
@@ -555,4 +574,52 @@ export abstract class CliProvider extends BaseProvider {
throw error;
}
}
/**
* Embed system prompt text into the user prompt for CLI providers.
*
* Most CLI providers we integrate with only accept a single prompt via stdin/args.
* When upstream code supplies `options.systemPrompt`, we prepend it to the prompt
* content and clear `systemPrompt` to avoid any accidental double-injection by
* subclasses.
*/
protected embedSystemPromptIntoPrompt(options: ExecuteOptions): ExecuteOptions {
if (!options.systemPrompt) {
return options;
}
// Only string system prompts can be reliably embedded for CLI providers.
// Presets are provider-specific (e.g., Claude SDK) and cannot be represented
// universally. If a preset is provided, we only embed its optional `append`.
const systemText =
typeof options.systemPrompt === 'string'
? options.systemPrompt
: options.systemPrompt.append
? options.systemPrompt.append
: '';
if (!systemText) {
return { ...options, systemPrompt: undefined };
}
// Preserve original prompt structure.
if (typeof options.prompt === 'string') {
return {
...options,
prompt: `${systemText}\n\n---\n\n${options.prompt}`,
systemPrompt: undefined,
};
}
if (Array.isArray(options.prompt)) {
return {
...options,
prompt: [{ type: 'text', text: systemText }, ...options.prompt],
systemPrompt: undefined,
};
}
// Should be unreachable due to ExecuteOptions typing, but keep safe.
return { ...options, systemPrompt: undefined };
}
}

View File

@@ -21,6 +21,7 @@ import {
extractTextFromContent,
classifyError,
getUserFriendlyErrorMessage,
createLogger,
} from '@automaker/utils';
import type {
ExecuteOptions,
@@ -32,6 +33,8 @@ import {
CODEX_MODEL_MAP,
supportsReasoningEffort,
validateBareModelId,
calculateReasoningTimeout,
DEFAULT_TIMEOUT_MS,
type CodexApprovalPolicy,
type CodexSandboxMode,
type CodexAuthStatus,
@@ -44,6 +47,7 @@ import {
getCodexTodoToolName,
} from './codex-tool-mapping.js';
import { SettingsService } from '../services/settings-service.js';
import { createTempEnvOverride } from '../lib/auth-utils.js';
import { checkSandboxCompatibility } from '../lib/sdk-options.js';
import { CODEX_MODELS } from './codex-models.js';
@@ -89,7 +93,14 @@ const CODEX_ITEM_TYPES = {
const SYSTEM_PROMPT_LABEL = 'System instructions';
const HISTORY_HEADER = 'Current request:\n';
const TEXT_ENCODING = 'utf-8';
const DEFAULT_TIMEOUT_MS = 30000;
/**
* Default timeout for Codex CLI operations in milliseconds.
* This is the "no output" timeout - if the CLI doesn't produce any JSONL output
* for this duration, the process is killed. For reasoning models with high
* reasoning effort, this timeout is dynamically extended via calculateReasoningTimeout().
* @see calculateReasoningTimeout from @automaker/types
*/
const CODEX_CLI_TIMEOUT_MS = DEFAULT_TIMEOUT_MS;
const CONTEXT_WINDOW_256K = 256000;
const MAX_OUTPUT_32K = 32000;
const MAX_OUTPUT_16K = 16000;
@@ -141,6 +152,7 @@ type CodexExecutionMode = typeof CODEX_EXECUTION_MODE_CLI | typeof CODEX_EXECUTI
type CodexExecutionPlan = {
mode: CodexExecutionMode;
cliPath: string | null;
openAiApiKey?: string | null;
};
const ALLOWED_ENV_VARS = [
@@ -165,6 +177,22 @@ function buildEnv(): Record<string, string> {
return env;
}
async function resolveOpenAiApiKey(): Promise<string | null> {
const envKey = process.env[OPENAI_API_KEY_ENV];
if (envKey) {
return envKey;
}
try {
const settingsService = new SettingsService(getCodexSettingsDir());
const credentials = await settingsService.getCredentials();
const storedKey = credentials.apiKeys.openai?.trim();
return storedKey ? storedKey : null;
} catch {
return null;
}
}
function hasMcpServersConfigured(options: ExecuteOptions): boolean {
return Boolean(options.mcpServers && Object.keys(options.mcpServers).length > 0);
}
@@ -180,18 +208,21 @@ function isSdkEligible(options: ExecuteOptions): boolean {
async function resolveCodexExecutionPlan(options: ExecuteOptions): Promise<CodexExecutionPlan> {
const cliPath = await findCodexCliPath();
const authIndicators = await getCodexAuthIndicators();
const hasApiKey = Boolean(process.env[OPENAI_API_KEY_ENV]);
const openAiApiKey = await resolveOpenAiApiKey();
const hasApiKey = Boolean(openAiApiKey);
const cliAuthenticated = authIndicators.hasOAuthToken || authIndicators.hasApiKey || hasApiKey;
const sdkEligible = isSdkEligible(options);
const cliAvailable = Boolean(cliPath);
if (hasApiKey) {
return {
mode: CODEX_EXECUTION_MODE_SDK,
cliPath,
openAiApiKey,
};
}
if (sdkEligible) {
if (hasApiKey) {
return {
mode: CODEX_EXECUTION_MODE_SDK,
cliPath,
};
}
if (!cliAvailable) {
throw new Error(ERROR_CODEX_SDK_AUTH_REQUIRED);
}
@@ -208,6 +239,7 @@ async function resolveCodexExecutionPlan(options: ExecuteOptions): Promise<Codex
return {
mode: CODEX_EXECUTION_MODE_CLI,
cliPath,
openAiApiKey,
};
}
@@ -658,6 +690,8 @@ async function loadCodexInstructions(cwd: string, enabled: boolean): Promise<str
.join('\n\n');
}
const logger = createLogger('CodexProvider');
export class CodexProvider extends BaseProvider {
getName(): string {
return 'codex';
@@ -698,7 +732,14 @@ export class CodexProvider extends BaseProvider {
const executionPlan = await resolveCodexExecutionPlan(options);
if (executionPlan.mode === CODEX_EXECUTION_MODE_SDK) {
yield* executeCodexSdkQuery(options, combinedSystemPrompt);
const cleanupEnv = executionPlan.openAiApiKey
? createTempEnvOverride({ [OPENAI_API_KEY_ENV]: executionPlan.openAiApiKey })
: null;
try {
yield* executeCodexSdkQuery(options, combinedSystemPrompt);
} finally {
cleanupEnv?.();
}
return;
}
@@ -777,13 +818,24 @@ export class CodexProvider extends BaseProvider {
'-', // Read prompt from stdin to avoid shell escaping issues
];
const envOverrides = buildEnv();
if (executionPlan.openAiApiKey && !envOverrides[OPENAI_API_KEY_ENV]) {
envOverrides[OPENAI_API_KEY_ENV] = executionPlan.openAiApiKey;
}
// Calculate dynamic timeout based on reasoning effort.
// Higher reasoning effort (e.g., 'xhigh' for "xtra thinking" mode) requires more time
// for the model to generate reasoning tokens before producing output.
// This fixes GitHub issue #530 where features would get stuck with reasoning models.
const timeout = calculateReasoningTimeout(options.reasoningEffort, CODEX_CLI_TIMEOUT_MS);
const stream = spawnJSONLProcess({
command: commandPath,
args,
cwd: options.cwd,
env: buildEnv(),
env: envOverrides,
abortController: options.abortController,
timeout: DEFAULT_TIMEOUT_MS,
timeout,
stdinData: promptText, // Pass prompt via stdin
});
@@ -967,21 +1019,11 @@ export class CodexProvider extends BaseProvider {
}
async detectInstallation(): Promise<InstallationStatus> {
console.log('[CodexProvider.detectInstallation] Starting...');
const cliPath = await findCodexCliPath();
const hasApiKey = !!process.env[OPENAI_API_KEY_ENV];
const hasApiKey = Boolean(await resolveOpenAiApiKey());
const authIndicators = await getCodexAuthIndicators();
const installed = !!cliPath;
console.log('[CodexProvider.detectInstallation] cliPath:', cliPath);
console.log('[CodexProvider.detectInstallation] hasApiKey:', hasApiKey);
console.log(
'[CodexProvider.detectInstallation] authIndicators:',
JSON.stringify(authIndicators)
);
console.log('[CodexProvider.detectInstallation] installed:', installed);
let version = '';
if (installed) {
try {
@@ -991,20 +1033,16 @@ export class CodexProvider extends BaseProvider {
cwd: process.cwd(),
});
version = result.stdout.trim();
console.log('[CodexProvider.detectInstallation] version:', version);
} catch (error) {
console.log('[CodexProvider.detectInstallation] Error getting version:', error);
version = '';
}
}
// Determine auth status - always verify with CLI, never assume authenticated
console.log('[CodexProvider.detectInstallation] Calling checkCodexAuthentication...');
const authCheck = await checkCodexAuthentication(cliPath);
console.log('[CodexProvider.detectInstallation] authCheck result:', JSON.stringify(authCheck));
const authenticated = authCheck.authenticated;
const result = {
return {
installed,
path: cliPath || undefined,
version: version || undefined,
@@ -1012,8 +1050,6 @@ export class CodexProvider extends BaseProvider {
hasApiKey,
authenticated,
};
console.log('[CodexProvider.detectInstallation] Final result:', JSON.stringify(result));
return result;
}
getAvailableModels(): ModelDefinition[] {
@@ -1025,36 +1061,24 @@ export class CodexProvider extends BaseProvider {
* Check authentication status for Codex CLI
*/
async checkAuth(): Promise<CodexAuthStatus> {
console.log('[CodexProvider.checkAuth] Starting auth check...');
const cliPath = await findCodexCliPath();
const hasApiKey = !!process.env[OPENAI_API_KEY_ENV];
const hasApiKey = Boolean(await resolveOpenAiApiKey());
const authIndicators = await getCodexAuthIndicators();
console.log('[CodexProvider.checkAuth] cliPath:', cliPath);
console.log('[CodexProvider.checkAuth] hasApiKey:', hasApiKey);
console.log('[CodexProvider.checkAuth] authIndicators:', JSON.stringify(authIndicators));
// Check for API key in environment
if (hasApiKey) {
console.log('[CodexProvider.checkAuth] Has API key, returning authenticated');
return { authenticated: true, method: 'api_key' };
}
// Check for OAuth/token from Codex CLI
if (authIndicators.hasOAuthToken || authIndicators.hasApiKey) {
console.log(
'[CodexProvider.checkAuth] Has OAuth token or API key in auth file, returning authenticated'
);
return { authenticated: true, method: 'oauth' };
}
// CLI is installed but not authenticated via indicators - try CLI command
console.log('[CodexProvider.checkAuth] No indicators found, trying CLI command...');
if (cliPath) {
try {
// Try 'codex login status' first (same as checkCodexAuthentication)
console.log('[CodexProvider.checkAuth] Running: ' + cliPath + ' login status');
const result = await spawnProcess({
command: cliPath || CODEX_COMMAND,
args: ['login', 'status'],
@@ -1064,26 +1088,19 @@ export class CodexProvider extends BaseProvider {
TERM: 'dumb',
},
});
console.log('[CodexProvider.checkAuth] login status result:');
console.log('[CodexProvider.checkAuth] exitCode:', result.exitCode);
console.log('[CodexProvider.checkAuth] stdout:', JSON.stringify(result.stdout));
console.log('[CodexProvider.checkAuth] stderr:', JSON.stringify(result.stderr));
// Check both stdout and stderr - Codex CLI outputs to stderr
const combinedOutput = (result.stdout + result.stderr).toLowerCase();
const isLoggedIn = combinedOutput.includes('logged in');
console.log('[CodexProvider.checkAuth] isLoggedIn:', isLoggedIn);
if (result.exitCode === 0 && isLoggedIn) {
console.log('[CodexProvider.checkAuth] CLI says logged in, returning authenticated');
return { authenticated: true, method: 'oauth' };
}
} catch (error) {
console.log('[CodexProvider.checkAuth] Error running login status:', error);
logger.warn('Error running login status command during auth check:', error);
}
}
console.log('[CodexProvider.checkAuth] Not authenticated');
return { authenticated: false, method: 'none' };
}

View File

@@ -44,7 +44,7 @@ export class CursorConfigManager {
// Return default config with all available models
return {
defaultModel: 'auto',
defaultModel: 'cursor-auto',
models: getAllCursorModelIds(),
};
}
@@ -77,7 +77,7 @@ export class CursorConfigManager {
* Get the default model
*/
getDefaultModel(): CursorModelId {
return this.config.defaultModel || 'auto';
return this.config.defaultModel || 'cursor-auto';
}
/**
@@ -93,7 +93,7 @@ export class CursorConfigManager {
* Get enabled models
*/
getEnabledModels(): CursorModelId[] {
return this.config.models || ['auto'];
return this.config.models || ['cursor-auto'];
}
/**
@@ -174,7 +174,7 @@ export class CursorConfigManager {
*/
reset(): void {
this.config = {
defaultModel: 'auto',
defaultModel: 'cursor-auto',
models: getAllCursorModelIds(),
};
this.saveConfig();

View File

@@ -337,10 +337,11 @@ export class CursorProvider extends CliProvider {
'--stream-partial-output' // Real-time streaming
);
// Only add --force if NOT in read-only mode
// Without --force, Cursor CLI suggests changes but doesn't apply them
// With --force, Cursor CLI can actually edit files
if (!options.readOnly) {
// In read-only mode, use --mode ask for Q&A style (no tools)
// Otherwise, add --force to allow file edits
if (options.readOnly) {
cliArgs.push('--mode', 'ask');
} else {
cliArgs.push('--force');
}
@@ -672,10 +673,13 @@ export class CursorProvider extends CliProvider {
);
}
// Extract prompt text to pass via stdin (avoids shell escaping issues)
const promptText = this.extractPromptText(options);
// Embed system prompt into user prompt (Cursor CLI doesn't support separate system messages)
const effectiveOptions = this.embedSystemPromptIntoPrompt(options);
const cliArgs = this.buildCliArgs(options);
// Extract prompt text to pass via stdin (avoids shell escaping issues)
const promptText = this.extractPromptText(effectiveOptions);
const cliArgs = this.buildCliArgs(effectiveOptions);
const subprocessOptions = this.buildSubprocessOptions(options, cliArgs);
// Pass prompt via stdin to avoid shell interpretation of special characters

View File

@@ -0,0 +1,815 @@
/**
* Gemini Provider - Executes queries using the Gemini CLI
*
* Extends CliProvider with Gemini-specific:
* - Event normalization for Gemini's JSONL streaming format
* - Google account and API key authentication support
* - Thinking level configuration
*
* Based on https://github.com/google-gemini/gemini-cli
*/
import { execSync } from 'child_process';
import * as fs from 'fs/promises';
import * as path from 'path';
import * as os from 'os';
import { CliProvider, type CliSpawnConfig, type CliErrorInfo } from './cli-provider.js';
import type {
ProviderConfig,
ExecuteOptions,
ProviderMessage,
InstallationStatus,
ModelDefinition,
ContentBlock,
} from './types.js';
import { validateBareModelId } from '@automaker/types';
import { GEMINI_MODEL_MAP, type GeminiAuthStatus } from '@automaker/types';
import { createLogger, isAbortError } from '@automaker/utils';
import { spawnJSONLProcess } from '@automaker/platform';
// Create logger for this module
const logger = createLogger('GeminiProvider');
// =============================================================================
// Gemini Stream Event Types
// =============================================================================
/**
* Base event structure from Gemini CLI --output-format stream-json
*
* Actual CLI output format:
* {"type":"init","timestamp":"...","session_id":"...","model":"..."}
* {"type":"message","timestamp":"...","role":"user","content":"..."}
* {"type":"message","timestamp":"...","role":"assistant","content":"...","delta":true}
* {"type":"tool_use","timestamp":"...","tool_name":"...","tool_id":"...","parameters":{...}}
* {"type":"tool_result","timestamp":"...","tool_id":"...","status":"success","output":"..."}
* {"type":"result","timestamp":"...","status":"success","stats":{...}}
*/
interface GeminiStreamEvent {
type: 'init' | 'message' | 'tool_use' | 'tool_result' | 'result' | 'error';
timestamp?: string;
session_id?: string;
}
interface GeminiInitEvent extends GeminiStreamEvent {
type: 'init';
session_id: string;
model: string;
}
interface GeminiMessageEvent extends GeminiStreamEvent {
type: 'message';
role: 'user' | 'assistant';
content: string;
delta?: boolean;
session_id?: string;
}
interface GeminiToolUseEvent extends GeminiStreamEvent {
type: 'tool_use';
tool_id: string;
tool_name: string;
parameters: Record<string, unknown>;
session_id?: string;
}
interface GeminiToolResultEvent extends GeminiStreamEvent {
type: 'tool_result';
tool_id: string;
status: 'success' | 'error';
output: string;
session_id?: string;
}
interface GeminiResultEvent extends GeminiStreamEvent {
type: 'result';
status: 'success' | 'error';
stats?: {
total_tokens?: number;
input_tokens?: number;
output_tokens?: number;
cached?: number;
input?: number;
duration_ms?: number;
tool_calls?: number;
};
error?: string;
session_id?: string;
}
// =============================================================================
// Error Codes
// =============================================================================
export enum GeminiErrorCode {
NOT_INSTALLED = 'GEMINI_NOT_INSTALLED',
NOT_AUTHENTICATED = 'GEMINI_NOT_AUTHENTICATED',
RATE_LIMITED = 'GEMINI_RATE_LIMITED',
MODEL_UNAVAILABLE = 'GEMINI_MODEL_UNAVAILABLE',
NETWORK_ERROR = 'GEMINI_NETWORK_ERROR',
PROCESS_CRASHED = 'GEMINI_PROCESS_CRASHED',
TIMEOUT = 'GEMINI_TIMEOUT',
UNKNOWN = 'GEMINI_UNKNOWN_ERROR',
}
export interface GeminiError extends Error {
code: GeminiErrorCode;
recoverable: boolean;
suggestion?: string;
}
// =============================================================================
// Tool Name Normalization
// =============================================================================
/**
* Gemini CLI tool name to standard tool name mapping
* This allows the UI to properly categorize and display Gemini tool calls
*/
const GEMINI_TOOL_NAME_MAP: Record<string, string> = {
write_todos: 'TodoWrite',
read_file: 'Read',
read_many_files: 'Read',
replace: 'Edit',
write_file: 'Write',
run_shell_command: 'Bash',
search_file_content: 'Grep',
glob: 'Glob',
list_directory: 'Ls',
web_fetch: 'WebFetch',
google_web_search: 'WebSearch',
};
/**
* Normalize Gemini tool names to standard tool names
*/
function normalizeGeminiToolName(geminiToolName: string): string {
return GEMINI_TOOL_NAME_MAP[geminiToolName] || geminiToolName;
}
/**
* Normalize Gemini tool input parameters to standard format
*
* Gemini `write_todos` format:
* {"todos": [{"description": "Task text", "status": "pending|in_progress|completed|cancelled"}]}
*
* Claude `TodoWrite` format:
* {"todos": [{"content": "Task text", "status": "pending|in_progress|completed", "activeForm": "..."}]}
*/
function normalizeGeminiToolInput(
toolName: string,
input: Record<string, unknown>
): Record<string, unknown> {
// Normalize write_todos: map 'description' to 'content', handle 'cancelled' status
if (toolName === 'write_todos' && Array.isArray(input.todos)) {
return {
todos: input.todos.map((todo: { description?: string; status?: string }) => ({
content: todo.description || '',
// Map 'cancelled' to 'completed' since Claude doesn't have cancelled status
status: todo.status === 'cancelled' ? 'completed' : todo.status,
// Use description as activeForm since Gemini doesn't have it
activeForm: todo.description || '',
})),
};
}
return input;
}
/**
* GeminiProvider - Integrates Gemini CLI as an AI provider
*
* Features:
* - Google account OAuth login support
* - API key authentication (GEMINI_API_KEY)
* - Vertex AI support
* - Thinking level configuration
* - Streaming JSON output
*/
export class GeminiProvider extends CliProvider {
constructor(config: ProviderConfig = {}) {
super(config);
// Trigger CLI detection on construction
this.ensureCliDetected();
}
// ==========================================================================
// CliProvider Abstract Method Implementations
// ==========================================================================
getName(): string {
return 'gemini';
}
getCliName(): string {
return 'gemini';
}
getSpawnConfig(): CliSpawnConfig {
return {
windowsStrategy: 'npx', // Gemini CLI can be run via npx
npxPackage: '@google/gemini-cli', // Official Google Gemini CLI package
commonPaths: {
linux: [
path.join(os.homedir(), '.local/bin/gemini'),
'/usr/local/bin/gemini',
path.join(os.homedir(), '.npm-global/bin/gemini'),
],
darwin: [
path.join(os.homedir(), '.local/bin/gemini'),
'/usr/local/bin/gemini',
'/opt/homebrew/bin/gemini',
path.join(os.homedir(), '.npm-global/bin/gemini'),
],
win32: [
path.join(os.homedir(), 'AppData', 'Roaming', 'npm', 'gemini.cmd'),
path.join(os.homedir(), '.npm-global', 'gemini.cmd'),
],
},
};
}
/**
* Extract prompt text from ExecuteOptions
*/
private extractPromptText(options: ExecuteOptions): string {
if (typeof options.prompt === 'string') {
return options.prompt;
} else if (Array.isArray(options.prompt)) {
return options.prompt
.filter((p) => p.type === 'text' && p.text)
.map((p) => p.text)
.join('\n');
} else {
throw new Error('Invalid prompt format');
}
}
buildCliArgs(options: ExecuteOptions): string[] {
// Model comes in stripped of provider prefix (e.g., '2.5-flash' from 'gemini-2.5-flash')
// We need to add 'gemini-' back since it's part of the actual CLI model name
const bareModel = options.model || '2.5-flash';
const cliArgs: string[] = [];
// Streaming JSON output format for real-time updates
cliArgs.push('--output-format', 'stream-json');
// Model selection - Gemini CLI expects full model names like "gemini-2.5-flash"
// Unlike Cursor CLI where 'cursor-' is just a routing prefix, for Gemini CLI
// the 'gemini-' is part of the actual model name Google expects
if (bareModel && bareModel !== 'auto') {
// Add gemini- prefix if not already present (handles edge cases)
const cliModel = bareModel.startsWith('gemini-') ? bareModel : `gemini-${bareModel}`;
cliArgs.push('--model', cliModel);
}
// Disable sandbox mode for faster execution (sandbox adds overhead)
cliArgs.push('--sandbox', 'false');
// YOLO mode for automatic approval (required for non-interactive use)
// Use explicit approval-mode for clearer semantics
cliArgs.push('--approval-mode', 'yolo');
// Explicitly include the working directory in allowed workspace directories
// This ensures Gemini CLI allows file operations in the project directory,
// even if it has a different workspace cached from a previous session
if (options.cwd) {
cliArgs.push('--include-directories', options.cwd);
}
// Note: Gemini CLI doesn't have a --thinking-level flag.
// Thinking capabilities are determined by the model selection (e.g., gemini-2.5-pro).
// The model handles thinking internally based on the task complexity.
// The prompt will be passed as the last positional argument
// We'll append it in executeQuery after extracting the text
return cliArgs;
}
/**
* Convert Gemini event to AutoMaker ProviderMessage format
*/
normalizeEvent(event: unknown): ProviderMessage | null {
const geminiEvent = event as GeminiStreamEvent;
switch (geminiEvent.type) {
case 'init': {
// Init event - capture session but don't yield a message
const initEvent = geminiEvent as GeminiInitEvent;
logger.debug(
`Gemini init event: session=${initEvent.session_id}, model=${initEvent.model}`
);
return null;
}
case 'message': {
const messageEvent = geminiEvent as GeminiMessageEvent;
// Skip user messages - already handled by caller
if (messageEvent.role === 'user') {
return null;
}
// Handle assistant messages
if (messageEvent.role === 'assistant') {
return {
type: 'assistant',
session_id: messageEvent.session_id,
message: {
role: 'assistant',
content: [{ type: 'text', text: messageEvent.content }],
},
};
}
return null;
}
case 'tool_use': {
const toolEvent = geminiEvent as GeminiToolUseEvent;
const normalizedName = normalizeGeminiToolName(toolEvent.tool_name);
const normalizedInput = normalizeGeminiToolInput(
toolEvent.tool_name,
toolEvent.parameters as Record<string, unknown>
);
return {
type: 'assistant',
session_id: toolEvent.session_id,
message: {
role: 'assistant',
content: [
{
type: 'tool_use',
name: normalizedName,
tool_use_id: toolEvent.tool_id,
input: normalizedInput,
},
],
},
};
}
case 'tool_result': {
const toolResultEvent = geminiEvent as GeminiToolResultEvent;
// If tool result is an error, prefix with error indicator
const content =
toolResultEvent.status === 'error'
? `[ERROR] ${toolResultEvent.output}`
: toolResultEvent.output;
return {
type: 'assistant',
session_id: toolResultEvent.session_id,
message: {
role: 'assistant',
content: [
{
type: 'tool_result',
tool_use_id: toolResultEvent.tool_id,
content,
},
],
},
};
}
case 'result': {
const resultEvent = geminiEvent as GeminiResultEvent;
if (resultEvent.status === 'error') {
return {
type: 'error',
session_id: resultEvent.session_id,
error: resultEvent.error || 'Unknown error',
};
}
// Success result - include stats for logging
logger.debug(
`Gemini result: status=${resultEvent.status}, tokens=${resultEvent.stats?.total_tokens}`
);
return {
type: 'result',
subtype: 'success',
session_id: resultEvent.session_id,
};
}
case 'error': {
const errorEvent = geminiEvent as GeminiResultEvent;
return {
type: 'error',
session_id: errorEvent.session_id,
error: errorEvent.error || 'Unknown error',
};
}
default:
logger.debug(`Unknown Gemini event type: ${geminiEvent.type}`);
return null;
}
}
// ==========================================================================
// CliProvider Overrides
// ==========================================================================
/**
* Override error mapping for Gemini-specific error codes
*/
protected mapError(stderr: string, exitCode: number | null): CliErrorInfo {
const lower = stderr.toLowerCase();
if (
lower.includes('not authenticated') ||
lower.includes('please log in') ||
lower.includes('unauthorized') ||
lower.includes('login required') ||
lower.includes('error authenticating') ||
lower.includes('loadcodeassist') ||
(lower.includes('econnrefused') && lower.includes('8888'))
) {
return {
code: GeminiErrorCode.NOT_AUTHENTICATED,
message: 'Gemini CLI is not authenticated',
recoverable: true,
suggestion:
'Run "gemini" interactively to log in, or set GEMINI_API_KEY environment variable',
};
}
if (
lower.includes('rate limit') ||
lower.includes('too many requests') ||
lower.includes('429') ||
lower.includes('quota exceeded')
) {
return {
code: GeminiErrorCode.RATE_LIMITED,
message: 'Gemini API rate limit exceeded',
recoverable: true,
suggestion: 'Wait a few minutes and try again. Free tier: 60 req/min, 1000 req/day',
};
}
if (
lower.includes('model not available') ||
lower.includes('invalid model') ||
lower.includes('unknown model') ||
lower.includes('modelnotfounderror') ||
lower.includes('model not found') ||
(lower.includes('not found') && lower.includes('404'))
) {
return {
code: GeminiErrorCode.MODEL_UNAVAILABLE,
message: 'Requested model is not available',
recoverable: true,
suggestion: 'Try using "gemini-2.5-flash" or select a different model',
};
}
if (
lower.includes('network') ||
lower.includes('connection') ||
lower.includes('econnrefused') ||
lower.includes('timeout')
) {
return {
code: GeminiErrorCode.NETWORK_ERROR,
message: 'Network connection error',
recoverable: true,
suggestion: 'Check your internet connection and try again',
};
}
if (exitCode === 137 || lower.includes('killed') || lower.includes('sigterm')) {
return {
code: GeminiErrorCode.PROCESS_CRASHED,
message: 'Gemini CLI process was terminated',
recoverable: true,
suggestion: 'The process may have run out of memory. Try a simpler task.',
};
}
return {
code: GeminiErrorCode.UNKNOWN,
message: stderr || `Gemini CLI exited with code ${exitCode}`,
recoverable: false,
};
}
/**
* Override install instructions for Gemini-specific guidance
*/
protected getInstallInstructions(): string {
return 'Install with: npm install -g @google/gemini-cli (or visit https://github.com/google-gemini/gemini-cli)';
}
/**
* Execute a prompt using Gemini CLI with streaming
*/
async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
this.ensureCliDetected();
// Validate that model doesn't have a provider prefix
validateBareModelId(options.model, 'GeminiProvider');
if (!this.cliPath) {
throw this.createError(
GeminiErrorCode.NOT_INSTALLED,
'Gemini CLI is not installed',
true,
this.getInstallInstructions()
);
}
// Extract prompt text to pass as positional argument
const promptText = this.extractPromptText(options);
// Build CLI args and append the prompt as the last positional argument
const cliArgs = this.buildCliArgs(options);
cliArgs.push(promptText); // Gemini CLI uses positional args for the prompt
const subprocessOptions = this.buildSubprocessOptions(options, cliArgs);
let sessionId: string | undefined;
logger.debug(`GeminiProvider.executeQuery called with model: "${options.model}"`);
try {
for await (const rawEvent of spawnJSONLProcess(subprocessOptions)) {
const event = rawEvent as GeminiStreamEvent;
// Capture session ID from init event
if (event.type === 'init') {
const initEvent = event as GeminiInitEvent;
sessionId = initEvent.session_id;
logger.debug(`Session started: ${sessionId}, model: ${initEvent.model}`);
}
// Normalize and yield the event
const normalized = this.normalizeEvent(event);
if (normalized) {
if (!normalized.session_id && sessionId) {
normalized.session_id = sessionId;
}
yield normalized;
}
}
} catch (error) {
if (isAbortError(error)) {
logger.debug('Query aborted');
return;
}
// Map CLI errors to GeminiError
if (error instanceof Error && 'stderr' in error) {
const errorInfo = this.mapError(
(error as { stderr?: string }).stderr || error.message,
(error as { exitCode?: number | null }).exitCode ?? null
);
throw this.createError(
errorInfo.code as GeminiErrorCode,
errorInfo.message,
errorInfo.recoverable,
errorInfo.suggestion
);
}
throw error;
}
}
// ==========================================================================
// Gemini-Specific Methods
// ==========================================================================
/**
* Create a GeminiError with details
*/
private createError(
code: GeminiErrorCode,
message: string,
recoverable: boolean = false,
suggestion?: string
): GeminiError {
const error = new Error(message) as GeminiError;
error.code = code;
error.recoverable = recoverable;
error.suggestion = suggestion;
error.name = 'GeminiError';
return error;
}
/**
* Get Gemini CLI version
*/
async getVersion(): Promise<string | null> {
this.ensureCliDetected();
if (!this.cliPath) return null;
try {
const result = execSync(`"${this.cliPath}" --version`, {
encoding: 'utf8',
timeout: 5000,
stdio: 'pipe',
}).trim();
return result;
} catch {
return null;
}
}
/**
* Check authentication status
*
* Uses a fast credential check approach:
* 1. Check for GEMINI_API_KEY environment variable
* 2. Check for Google Cloud credentials
* 3. Check for Gemini settings file with stored credentials
* 4. Quick CLI auth test with --help (fast, doesn't make API calls)
*/
async checkAuth(): Promise<GeminiAuthStatus> {
this.ensureCliDetected();
if (!this.cliPath) {
logger.debug('checkAuth: CLI not found');
return { authenticated: false, method: 'none' };
}
logger.debug('checkAuth: Starting credential check');
// Determine the likely auth method based on environment
const hasApiKey = !!process.env.GEMINI_API_KEY;
const hasEnvApiKey = hasApiKey;
const hasVertexAi = !!(
process.env.GOOGLE_APPLICATION_CREDENTIALS || process.env.GOOGLE_CLOUD_PROJECT
);
logger.debug(`checkAuth: hasApiKey=${hasApiKey}, hasVertexAi=${hasVertexAi}`);
// Check for Gemini credentials file (~/.gemini/settings.json)
const geminiConfigDir = path.join(os.homedir(), '.gemini');
const settingsPath = path.join(geminiConfigDir, 'settings.json');
let hasCredentialsFile = false;
let authType: string | null = null;
try {
await fs.access(settingsPath);
logger.debug(`checkAuth: Found settings file at ${settingsPath}`);
try {
const content = await fs.readFile(settingsPath, 'utf8');
const settings = JSON.parse(content);
// Auth config is at security.auth.selectedType (e.g., "oauth-personal", "oauth-adc", "api-key")
const selectedType = settings?.security?.auth?.selectedType;
if (selectedType) {
hasCredentialsFile = true;
authType = selectedType;
logger.debug(`checkAuth: Settings file has auth config, selectedType=${selectedType}`);
} else {
logger.debug(`checkAuth: Settings file found but no auth type configured`);
}
} catch (e) {
logger.debug(`checkAuth: Failed to parse settings file: ${e}`);
}
} catch {
logger.debug('checkAuth: No settings file found');
}
// If we have an API key, we're authenticated
if (hasApiKey) {
logger.debug('checkAuth: Using API key authentication');
return {
authenticated: true,
method: 'api_key',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
};
}
// If we have Vertex AI credentials, we're authenticated
if (hasVertexAi) {
logger.debug('checkAuth: Using Vertex AI authentication');
return {
authenticated: true,
method: 'vertex_ai',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
};
}
// Check if settings file indicates configured authentication
if (hasCredentialsFile && authType) {
// OAuth types: "oauth-personal", "oauth-adc"
// API key type: "api-key"
// Code assist: "code-assist" (requires IDE integration)
if (authType.startsWith('oauth')) {
logger.debug(`checkAuth: OAuth authentication configured (${authType})`);
return {
authenticated: true,
method: 'google_login',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
};
}
if (authType === 'api-key') {
logger.debug('checkAuth: API key authentication configured in settings');
return {
authenticated: true,
method: 'api_key',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
};
}
if (authType === 'code-assist' || authType === 'codeassist') {
logger.debug('checkAuth: Code Assist auth configured but requires local server');
return {
authenticated: false,
method: 'google_login',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
error:
'Code Assist authentication requires IDE integration. Please use "gemini" CLI to log in with a different method, or set GEMINI_API_KEY.',
};
}
// Unknown auth type but something is configured
logger.debug(`checkAuth: Unknown auth type configured: ${authType}`);
return {
authenticated: true,
method: 'google_login',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
};
}
// No credentials found
logger.debug('checkAuth: No valid credentials found');
return {
authenticated: false,
method: 'none',
hasApiKey,
hasEnvApiKey,
hasCredentialsFile,
error:
'No authentication configured. Run "gemini" interactively to log in, or set GEMINI_API_KEY.',
};
}
/**
* Detect installation status (required by BaseProvider)
*/
async detectInstallation(): Promise<InstallationStatus> {
const installed = await this.isInstalled();
const version = installed ? await this.getVersion() : undefined;
const auth = await this.checkAuth();
return {
installed,
version: version || undefined,
path: this.cliPath || undefined,
method: 'cli',
hasApiKey: !!process.env.GEMINI_API_KEY,
authenticated: auth.authenticated,
};
}
/**
* Get the detected CLI path (public accessor for status endpoints)
*/
getCliPath(): string | null {
this.ensureCliDetected();
return this.cliPath;
}
/**
* Get available Gemini models
*/
getAvailableModels(): ModelDefinition[] {
return Object.entries(GEMINI_MODEL_MAP).map(([id, config]) => ({
id, // Full model ID with gemini- prefix (e.g., 'gemini-2.5-flash')
name: config.label,
modelString: id, // Same as id - CLI uses the full model name
provider: 'gemini',
description: config.description,
supportsTools: true,
supportsVision: config.supportsVision,
contextWindow: config.contextWindow,
}));
}
/**
* Check if a feature is supported
*/
supportsFeature(feature: string): boolean {
const supported = ['tools', 'text', 'streaming', 'vision', 'thinking'];
return supported.includes(feature);
}
}

View File

@@ -16,6 +16,16 @@ export type {
ProviderMessage,
InstallationStatus,
ModelDefinition,
AgentDefinition,
ReasoningEffort,
SystemPromptPreset,
ConversationMessage,
ContentBlock,
ValidationResult,
McpServerConfig,
McpStdioServerConfig,
McpSSEServerConfig,
McpHttpServerConfig,
} from './types.js';
// Claude provider
@@ -30,3 +40,11 @@ export { OpencodeProvider } from './opencode-provider.js';
// Provider factory
export { ProviderFactory } from './provider-factory.js';
// Simple query service - unified interface for basic AI queries
export { simpleQuery, streamingQuery } from './simple-query-service.js';
export type {
SimpleQueryOptions,
SimpleQueryResult,
StreamingQueryOptions,
} from './simple-query-service.js';

File diff suppressed because it is too large Load Diff

View File

@@ -7,7 +7,13 @@
import { BaseProvider } from './base-provider.js';
import type { InstallationStatus, ModelDefinition } from './types.js';
import { isCursorModel, isCodexModel, isOpencodeModel, type ModelProvider } from '@automaker/types';
import {
isCursorModel,
isCodexModel,
isOpencodeModel,
isGeminiModel,
type ModelProvider,
} from '@automaker/types';
import * as fs from 'fs';
import * as path from 'path';
@@ -16,6 +22,7 @@ const DISCONNECTED_MARKERS: Record<string, string> = {
codex: '.codex-disconnected',
cursor: '.cursor-disconnected',
opencode: '.opencode-disconnected',
gemini: '.gemini-disconnected',
};
/**
@@ -239,8 +246,8 @@ export class ProviderFactory {
model.modelString === modelId ||
model.id.endsWith(`-${modelId}`) ||
model.modelString.endsWith(`-${modelId}`) ||
model.modelString === modelId.replace(/^(claude|cursor|codex)-/, '') ||
model.modelString === modelId.replace(/-(claude|cursor|codex)$/, '')
model.modelString === modelId.replace(/^(claude|cursor|codex|gemini)-/, '') ||
model.modelString === modelId.replace(/-(claude|cursor|codex|gemini)$/, '')
) {
return model.supportsVision ?? true;
}
@@ -267,6 +274,7 @@ import { ClaudeProvider } from './claude-provider.js';
import { CursorProvider } from './cursor-provider.js';
import { CodexProvider } from './codex-provider.js';
import { OpencodeProvider } from './opencode-provider.js';
import { GeminiProvider } from './gemini-provider.js';
// Register Claude provider
registerProvider('claude', {
@@ -301,3 +309,11 @@ registerProvider('opencode', {
canHandleModel: (model: string) => isOpencodeModel(model),
priority: 3, // Between codex (5) and claude (0)
});
// Register Gemini provider
registerProvider('gemini', {
factory: () => new GeminiProvider(),
aliases: ['google'],
canHandleModel: (model: string) => isGeminiModel(model),
priority: 4, // Between opencode (3) and codex (5)
});

View File

@@ -0,0 +1,275 @@
/**
* Simple Query Service - Simplified interface for basic AI queries
*
* Use this for routes that need simple text responses without
* complex event handling. This service abstracts away the provider
* selection and streaming details, providing a clean interface
* for common query patterns.
*
* Benefits:
* - No direct SDK imports needed in route files
* - Consistent provider routing based on model
* - Automatic text extraction from streaming responses
* - Structured output support for JSON schema responses
* - Eliminates duplicate extractTextFromStream() functions
*/
import { ProviderFactory } from './provider-factory.js';
import type {
ProviderMessage,
ContentBlock,
ThinkingLevel,
ReasoningEffort,
ClaudeApiProfile,
ClaudeCompatibleProvider,
Credentials,
} from '@automaker/types';
import { stripProviderPrefix } from '@automaker/types';
/**
* Options for simple query execution
*/
export interface SimpleQueryOptions {
/** The prompt to send to the AI (can be text or multi-part content) */
prompt: string | Array<{ type: string; text?: string; source?: object }>;
/** Model to use (with or without provider prefix) */
model?: string;
/** Working directory for the query */
cwd: string;
/** System prompt (combined with user prompt for some providers) */
systemPrompt?: string;
/** Maximum turns for agentic operations (default: 1) */
maxTurns?: number;
/** Tools to allow (default: [] for simple queries) */
allowedTools?: string[];
/** Abort controller for cancellation */
abortController?: AbortController;
/** Structured output format for JSON responses */
outputFormat?: {
type: 'json_schema';
schema: Record<string, unknown>;
};
/** Thinking level for Claude models */
thinkingLevel?: ThinkingLevel;
/** Reasoning effort for Codex/OpenAI models */
reasoningEffort?: ReasoningEffort;
/** If true, runs in read-only mode (no file writes) */
readOnly?: boolean;
/** Setting sources for CLAUDE.md loading */
settingSources?: Array<'user' | 'project' | 'local'>;
/**
* Active Claude API profile for alternative endpoint configuration
* @deprecated Use claudeCompatibleProvider instead
*/
claudeApiProfile?: ClaudeApiProfile;
/**
* Claude-compatible provider for alternative endpoint configuration.
* Takes precedence over claudeApiProfile if both are set.
*/
claudeCompatibleProvider?: ClaudeCompatibleProvider;
/** Credentials for resolving 'credentials' apiKeySource in Claude API profiles/providers */
credentials?: Credentials;
}
/**
* Result from a simple query
*/
export interface SimpleQueryResult {
/** The accumulated text response */
text: string;
/** Structured output if outputFormat was specified and provider supports it */
structured_output?: Record<string, unknown>;
}
/**
* Options for streaming query execution
*/
export interface StreamingQueryOptions extends SimpleQueryOptions {
/** Callback for each text chunk received */
onText?: (text: string) => void;
/** Callback for tool use events */
onToolUse?: (tool: string, input: unknown) => void;
/** Callback for thinking blocks (if available) */
onThinking?: (thinking: string) => void;
}
/**
* Default model to use when none specified
*/
const DEFAULT_MODEL = 'claude-sonnet-4-20250514';
/**
* Execute a simple query and return the text result
*
* Use this for simple, non-streaming queries where you just need
* the final text response. For more complex use cases with progress
* callbacks, use streamingQuery() instead.
*
* @example
* ```typescript
* const result = await simpleQuery({
* prompt: 'Generate a title for: user authentication',
* cwd: process.cwd(),
* systemPrompt: 'You are a title generator...',
* maxTurns: 1,
* allowedTools: [],
* });
* console.log(result.text); // "Add user authentication"
* ```
*/
export async function simpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult> {
const model = options.model || DEFAULT_MODEL;
const provider = ProviderFactory.getProviderForModel(model);
const bareModel = stripProviderPrefix(model);
let responseText = '';
let structuredOutput: Record<string, unknown> | undefined;
// Build provider options
const providerOptions = {
prompt: options.prompt,
model: bareModel,
originalModel: model,
cwd: options.cwd,
systemPrompt: options.systemPrompt,
maxTurns: options.maxTurns ?? 1,
allowedTools: options.allowedTools ?? [],
abortController: options.abortController,
outputFormat: options.outputFormat,
thinkingLevel: options.thinkingLevel,
reasoningEffort: options.reasoningEffort,
readOnly: options.readOnly,
settingSources: options.settingSources,
claudeApiProfile: options.claudeApiProfile, // Legacy: Pass active Claude API profile for alternative endpoint configuration
claudeCompatibleProvider: options.claudeCompatibleProvider, // New: Pass Claude-compatible provider (takes precedence)
credentials: options.credentials, // Pass credentials for resolving 'credentials' apiKeySource
};
for await (const msg of provider.executeQuery(providerOptions)) {
// Handle error messages
if (msg.type === 'error') {
const errorMessage = msg.error || 'Provider returned an error';
throw new Error(errorMessage);
}
// Extract text from assistant messages
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
// Handle result messages
if (msg.type === 'result') {
if (msg.subtype === 'success') {
// Use result text if longer than accumulated text
if (msg.result && msg.result.length > responseText.length) {
responseText = msg.result;
}
// Capture structured output if present
if (msg.structured_output) {
structuredOutput = msg.structured_output;
}
} else if (msg.subtype === 'error_max_turns') {
// Max turns reached - return what we have
break;
} else if (msg.subtype === 'error_max_structured_output_retries') {
throw new Error('Could not produce valid structured output after retries');
}
}
}
return { text: responseText, structured_output: structuredOutput };
}
/**
* Execute a streaming query with event callbacks
*
* Use this for queries where you need real-time progress updates,
* such as when displaying streaming output to a user.
*
* @example
* ```typescript
* const result = await streamingQuery({
* prompt: 'Analyze this project and suggest improvements',
* cwd: '/path/to/project',
* maxTurns: 250,
* allowedTools: ['Read', 'Glob', 'Grep'],
* onText: (text) => emitProgress(text),
* onToolUse: (tool, input) => emitToolUse(tool, input),
* });
* ```
*/
export async function streamingQuery(options: StreamingQueryOptions): Promise<SimpleQueryResult> {
const model = options.model || DEFAULT_MODEL;
const provider = ProviderFactory.getProviderForModel(model);
const bareModel = stripProviderPrefix(model);
let responseText = '';
let structuredOutput: Record<string, unknown> | undefined;
// Build provider options
const providerOptions = {
prompt: options.prompt,
model: bareModel,
originalModel: model,
cwd: options.cwd,
systemPrompt: options.systemPrompt,
maxTurns: options.maxTurns ?? 250,
allowedTools: options.allowedTools ?? ['Read', 'Glob', 'Grep'],
abortController: options.abortController,
outputFormat: options.outputFormat,
thinkingLevel: options.thinkingLevel,
reasoningEffort: options.reasoningEffort,
readOnly: options.readOnly,
settingSources: options.settingSources,
claudeApiProfile: options.claudeApiProfile, // Legacy: Pass active Claude API profile for alternative endpoint configuration
claudeCompatibleProvider: options.claudeCompatibleProvider, // New: Pass Claude-compatible provider (takes precedence)
credentials: options.credentials, // Pass credentials for resolving 'credentials' apiKeySource
};
for await (const msg of provider.executeQuery(providerOptions)) {
// Handle error messages
if (msg.type === 'error') {
const errorMessage = msg.error || 'Provider returned an error';
throw new Error(errorMessage);
}
// Extract content from assistant messages
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
options.onText?.(block.text);
} else if (block.type === 'tool_use' && block.name) {
options.onToolUse?.(block.name, block.input);
} else if (block.type === 'thinking' && block.thinking) {
options.onThinking?.(block.thinking);
}
}
}
// Handle result messages
if (msg.type === 'result') {
if (msg.subtype === 'success') {
// Use result text if longer than accumulated text
if (msg.result && msg.result.length > responseText.length) {
responseText = msg.result;
}
// Capture structured output if present
if (msg.structured_output) {
structuredOutput = msg.structured_output;
}
} else if (msg.subtype === 'error_max_turns') {
// Max turns reached - return what we have
break;
} else if (msg.subtype === 'error_max_structured_output_retries') {
throw new Error('Could not produce valid structured output after retries');
}
}
}
return { text: responseText, structured_output: structuredOutput };
}

View File

@@ -19,4 +19,7 @@ export type {
InstallationStatus,
ValidationResult,
ModelDefinition,
AgentDefinition,
ReasoningEffort,
SystemPromptPreset,
} from '@automaker/types';

View File

@@ -6,26 +6,103 @@ import { createLogger } from '@automaker/utils';
const logger = createLogger('SpecRegeneration');
// Shared state for tracking generation status - private
let isRunning = false;
let currentAbortController: AbortController | null = null;
// Types for running generation
export type GenerationType = 'spec_regeneration' | 'feature_generation' | 'sync';
interface RunningGeneration {
isRunning: boolean;
type: GenerationType;
startedAt: string;
}
// Shared state for tracking generation status - scoped by project path
const runningProjects = new Map<string, RunningGeneration>();
const abortControllers = new Map<string, AbortController>();
/**
* Get the current running state
* Get the running state for a specific project
*/
export function getSpecRegenerationStatus(): {
export function getSpecRegenerationStatus(projectPath?: string): {
isRunning: boolean;
currentAbortController: AbortController | null;
projectPath?: string;
type?: GenerationType;
startedAt?: string;
} {
return { isRunning, currentAbortController };
if (projectPath) {
const generation = runningProjects.get(projectPath);
return {
isRunning: generation?.isRunning || false,
currentAbortController: abortControllers.get(projectPath) || null,
projectPath,
type: generation?.type,
startedAt: generation?.startedAt,
};
}
// Fallback: check if any project is running (for backward compatibility)
const isAnyRunning = Array.from(runningProjects.values()).some((g) => g.isRunning);
return { isRunning: isAnyRunning, currentAbortController: null };
}
/**
* Set the running state and abort controller
* Get the project path that is currently running (if any)
*/
export function setRunningState(running: boolean, controller: AbortController | null = null): void {
isRunning = running;
currentAbortController = controller;
export function getRunningProjectPath(): string | null {
for (const [path, running] of runningProjects.entries()) {
if (running) return path;
}
return null;
}
/**
* Set the running state and abort controller for a specific project
*/
export function setRunningState(
projectPath: string,
running: boolean,
controller: AbortController | null = null,
type: GenerationType = 'spec_regeneration'
): void {
if (running) {
runningProjects.set(projectPath, {
isRunning: true,
type,
startedAt: new Date().toISOString(),
});
if (controller) {
abortControllers.set(projectPath, controller);
}
} else {
runningProjects.delete(projectPath);
abortControllers.delete(projectPath);
}
}
/**
* Get all running spec/feature generations for the running agents view
*/
export function getAllRunningGenerations(): Array<{
projectPath: string;
type: GenerationType;
startedAt: string;
}> {
const results: Array<{
projectPath: string;
type: GenerationType;
startedAt: string;
}> = [];
for (const [projectPath, generation] of runningProjects.entries()) {
if (generation.isRunning) {
results.push({
projectPath,
type: generation.type,
startedAt: generation.startedAt,
});
}
}
return results;
}
/**

View File

@@ -5,19 +5,21 @@
* (defaults to Sonnet for balanced speed and quality).
*/
import { query } from '@anthropic-ai/claude-agent-sdk';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createFeatureGenerationOptions } from '../../lib/sdk-options.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { logAuthStatus } from './common.js';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { parseAndCreateFeatures } from './parse-and-create-features.js';
import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
import {
getAutoLoadClaudeMdSetting,
getPromptCustomization,
getPhaseModelWithOverrides,
} from '../../lib/settings-helpers.js';
import { FeatureLoader } from '../../services/feature-loader.js';
const logger = createLogger('SpecRegeneration');
@@ -56,38 +58,48 @@ export async function generateFeaturesFromSpec(
return;
}
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[FeatureGeneration]');
// Load existing features to prevent duplicates
const featureLoader = new FeatureLoader();
const existingFeatures = await featureLoader.getAll(projectPath);
logger.info(`Found ${existingFeatures.length} existing features to exclude from generation`);
// Build existing features context for the prompt
let existingFeaturesContext = '';
if (existingFeatures.length > 0) {
const featuresList = existingFeatures
.map(
(f) =>
`- "${f.title}" (ID: ${f.id}): ${f.description?.substring(0, 100) || 'No description'}`
)
.join('\n');
existingFeaturesContext = `
## EXISTING FEATURES (DO NOT REGENERATE THESE)
The following ${existingFeatures.length} features already exist in the project. You MUST NOT generate features that duplicate or overlap with these:
${featuresList}
CRITICAL INSTRUCTIONS:
- DO NOT generate any features with the same or similar titles as the existing features listed above
- DO NOT generate features that cover the same functionality as existing features
- ONLY generate NEW features that are not yet in the system
- If a feature from the roadmap already exists, skip it entirely
- Generate unique feature IDs that do not conflict with existing IDs: ${existingFeatures.map((f) => f.id).join(', ')}
`;
}
const prompt = `Based on this project specification:
${spec}
${existingFeaturesContext}
${prompts.appSpec.generateFeaturesFromSpecPrompt}
Generate a prioritized list of implementable features. For each feature provide:
1. **id**: A unique lowercase-hyphenated identifier
2. **category**: Functional category (e.g., "Core", "UI", "API", "Authentication", "Database")
3. **title**: Short descriptive title
4. **description**: What this feature does (2-3 sentences)
5. **priority**: 1 (high), 2 (medium), or 3 (low)
6. **complexity**: "simple", "moderate", or "complex"
7. **dependencies**: Array of feature IDs this depends on (can be empty)
Format as JSON:
{
"features": [
{
"id": "feature-id",
"category": "Feature Category",
"title": "Feature Title",
"description": "What it does",
"priority": 1,
"complexity": "moderate",
"dependencies": []
}
]
}
Generate ${featureCount} features that build on each other logically.
IMPORTANT: Do not ask for clarification. The specification is provided above. Generate the JSON immediately.`;
Generate ${featureCount} NEW features that build on each other logically. Remember: ONLY generate features that DO NOT already exist.`;
logger.info('========== PROMPT BEING SENT ==========');
logger.info(`Prompt length: ${prompt.length} chars`);
@@ -107,129 +119,53 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
'[FeatureGeneration]'
);
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.featureGenerationModel || DEFAULT_PHASE_MODELS.featureGenerationModel;
// Get model from phase settings with provider info
const {
phaseModel: phaseModelEntry,
provider,
credentials,
} = settingsService
? await getPhaseModelWithOverrides(
'featureGenerationModel',
settingsService,
projectPath,
'[FeatureGeneration]'
)
: {
phaseModel: DEFAULT_PHASE_MODELS.featureGenerationModel,
provider: undefined,
credentials: undefined,
};
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info('Using model:', model);
logger.info('Using model:', model, provider ? `via provider: ${provider.name}` : 'direct API');
let responseText = '';
let messageCount = 0;
// Use streamingQuery with event callbacks
const result = await streamingQuery({
prompt,
model,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
thinkingLevel,
readOnly: true, // Feature generation only reads code, doesn't write
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
onText: (text) => {
logger.debug(`Feature text block received (${text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: text,
projectPath: projectPath,
});
},
});
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info('[FeatureGeneration] Using Cursor provider');
const responseText = result.text;
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Add explicit instructions for Cursor to return JSON in response
const cursorPrompt = `${prompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
2. Respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
3. Your entire response should be valid JSON starting with { and ending with }. No text before or after.`;
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
readOnly: true, // Feature generation only reads code, doesn't write
})) {
messageCount++;
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
logger.debug(`Feature text block received (${block.text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
}
}
} else {
// Use Claude SDK for Claude models
logger.info('[FeatureGeneration] Using Claude SDK');
const options = createFeatureGenerationOptions({
cwd: projectPath,
abortController,
autoLoadClaudeMd,
model,
thinkingLevel, // Pass thinking level for extended thinking
});
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.info('Calling Claude Agent SDK query() for features...');
logAuthStatus('Right before SDK query() for features');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
logger.debug('Starting to iterate over feature stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.debug(
`Feature stream message #${messageCount}:`,
JSON.stringify({ type: msg.type, subtype: (msg as any).subtype }, null, 2)
);
if (msg.type === 'assistant' && msg.message.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.debug(`Feature text block received (${block.text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.debug('Received success result for features');
responseText = (msg as any).result || responseText;
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from feature stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
}
}
} catch (streamError) {
logger.error('❌ Error while iterating feature stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
}
logger.info(`Feature stream complete. Total messages: ${messageCount}`);
logger.info(`Feature stream complete.`);
logger.info(`Feature response length: ${responseText.length} chars`);
logger.info('========== FULL RESPONSE TEXT ==========');
logger.info(responseText);

View File

@@ -5,27 +5,22 @@
* (defaults to Opus for high-quality specification generation).
*/
import { query } from '@anthropic-ai/claude-agent-sdk';
import path from 'path';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import {
specOutputSchema,
specToXml,
getStructuredSpecPromptInstruction,
type SpecOutput,
} from '../../lib/app-spec-format.js';
import { specOutputSchema, specToXml, type SpecOutput } from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSpecGenerationOptions } from '../../lib/sdk-options.js';
import { extractJson } from '../../lib/json-extractor.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { logAuthStatus } from './common.js';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { generateFeaturesFromSpec } from './generate-features-from-spec.js';
import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
import {
getAutoLoadClaudeMdSetting,
getPromptCustomization,
getPhaseModelWithOverrides,
} from '../../lib/settings-helpers.js';
const logger = createLogger('SpecRegeneration');
@@ -47,6 +42,9 @@ export async function generateSpec(
logger.info('analyzeProject:', analyzeProject);
logger.info('maxFeatures:', maxFeatures);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[SpecRegeneration]');
// Build the prompt based on whether we should analyze the project
let analysisInstructions = '';
let techStackDefaults = '';
@@ -70,9 +68,7 @@ export async function generateSpec(
Use these technologies as the foundation for the specification.`;
}
const prompt = `You are helping to define a software project specification.
IMPORTANT: Never ask for clarification or additional information. Use the information provided and make reasonable assumptions to create the best possible specification. If details are missing, infer them based on common patterns and best practices.
const prompt = `${prompts.appSpec.generateSpecSystemPrompt}
Project Overview:
${projectOverview}
@@ -81,7 +77,7 @@ ${techStackDefaults}
${analysisInstructions}
${getStructuredSpecPromptInstruction()}`;
${prompts.appSpec.structuredSpecInstructions}`;
logger.info('========== PROMPT BEING SENT ==========');
logger.info(`Prompt length: ${prompt.length} chars`);
@@ -100,30 +96,37 @@ ${getStructuredSpecPromptInstruction()}`;
'[SpecRegeneration]'
);
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.specGenerationModel || DEFAULT_PHASE_MODELS.specGenerationModel;
// Get model from phase settings with provider info
const {
phaseModel: phaseModelEntry,
provider,
credentials,
} = settingsService
? await getPhaseModelWithOverrides(
'specGenerationModel',
settingsService,
projectPath,
'[SpecRegeneration]'
)
: {
phaseModel: DEFAULT_PHASE_MODELS.specGenerationModel,
provider: undefined,
credentials: undefined,
};
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info('Using model:', model);
logger.info('Using model:', model, provider ? `via provider: ${provider.name}` : 'direct API');
let responseText = '';
let messageCount = 0;
let structuredOutput: SpecOutput | null = null;
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info('[SpecGeneration] Using Cursor provider');
// Determine if we should use structured output (Claude supports it, Cursor doesn't)
const useStructuredOutput = !isCursorModel(model);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the JSON schema in the prompt with clear instructions
// to return JSON in the response (not write to a file)
const cursorPrompt = `${prompt}
// Build the final prompt - for Cursor, include JSON schema instructions
let finalPrompt = prompt;
if (!useStructuredOutput) {
finalPrompt = `${prompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. DO NOT create any files like "project_specification.json".
@@ -133,153 +136,59 @@ CRITICAL INSTRUCTIONS:
${JSON.stringify(specOutputSchema, null, 2)}
Your entire response should be valid JSON starting with { and ending with }. No text before or after.`;
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
readOnly: true, // Spec generation only reads code, we write the spec ourselves
})) {
messageCount++;
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
}
}
// Parse JSON from the response text using shared utility
if (responseText) {
structuredOutput = extractJson<SpecOutput>(responseText, { logger });
}
} else {
// Use Claude SDK for Claude models
logger.info('[SpecGeneration] Using Claude SDK');
const options = createSpecGenerationOptions({
cwd: projectPath,
abortController,
autoLoadClaudeMd,
model,
thinkingLevel, // Pass thinking level for extended thinking
outputFormat: {
type: 'json_schema',
schema: specOutputSchema,
},
});
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.info('Calling Claude Agent SDK query()...');
// Log auth status right before the SDK call
logAuthStatus('Right before SDK query()');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
logger.info('Starting to iterate over stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.info(
`Stream message #${messageCount}: type=${msg.type}, subtype=${(msg as any).subtype}`
);
if (msg.type === 'assistant') {
const msgAny = msg as any;
if (msgAny.message?.content) {
for (const block of msgAny.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.info('Received success result');
// Check for structured output - this is the reliable way to get spec data
const resultMsg = msg as any;
if (resultMsg.structured_output) {
structuredOutput = resultMsg.structured_output as SpecOutput;
logger.info('✅ Received structured output');
logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else {
logger.warn('⚠️ No structured output in result, will fall back to text parsing');
}
} else if (msg.type === 'result') {
// Handle error result types
const subtype = (msg as any).subtype;
logger.info(`Result message: subtype=${subtype}`);
if (subtype === 'error_max_turns') {
logger.error('❌ Hit max turns limit!');
} else if (subtype === 'error_max_structured_output_retries') {
logger.error('❌ Failed to produce valid structured output after retries');
throw new Error('Could not produce valid spec output');
}
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
} else if (msg.type === 'user') {
// Log user messages (tool results)
logger.info(`User message (tool result): ${JSON.stringify(msg).substring(0, 500)}`);
}
}
} catch (streamError) {
logger.error('❌ Error while iterating stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
}
logger.info(`Stream iteration complete. Total messages: ${messageCount}`);
// Use streamingQuery with event callbacks
const result = await streamingQuery({
prompt: finalPrompt,
model,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
thinkingLevel,
readOnly: true, // Spec generation only reads code, we write the spec ourselves
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
outputFormat: useStructuredOutput
? {
type: 'json_schema',
schema: specOutputSchema,
}
: undefined,
onText: (text) => {
responseText += text;
logger.info(
`Text block received (${text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: text,
projectPath: projectPath,
});
},
onToolUse: (tool, input) => {
logger.info('Tool use:', tool);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool,
input,
});
},
});
// Get structured output if available
if (result.structured_output) {
structuredOutput = result.structured_output as unknown as SpecOutput;
logger.info('✅ Received structured output');
logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else if (!useStructuredOutput && responseText) {
// For non-Claude providers, parse JSON from response text
structuredOutput = extractJson<SpecOutput>(responseText, { logger });
}
logger.info(`Stream iteration complete.`);
logger.info(`Response text length: ${responseText.length} chars`);
// Determine XML content to save
@@ -311,19 +220,33 @@ Your entire response should be valid JSON starting with { and ending with }. No
xmlContent = responseText.substring(xmlStart, xmlEnd + '</project_specification>'.length);
logger.info(`Extracted XML content: ${xmlContent.length} chars (from position ${xmlStart})`);
} else {
// No valid XML structure found in the response text
// This happens when structured output was expected but not received, and the agent
// output conversational text instead of XML (e.g., "The project directory appears to be empty...")
// We should NOT save this conversational text as it's not a valid spec
logger.error('❌ Response does not contain valid <project_specification> XML structure');
logger.error(
'This typically happens when structured output failed and the agent produced conversational text instead of XML'
);
throw new Error(
'Failed to generate spec: No valid XML structure found in response. ' +
'The response contained conversational text but no <project_specification> tags. ' +
'Please try again.'
);
// No XML found, try JSON extraction
logger.warn('⚠️ No XML tags found, attempting JSON extraction...');
const extractedJson = extractJson<SpecOutput>(responseText, { logger });
if (
extractedJson &&
typeof extractedJson.project_name === 'string' &&
typeof extractedJson.overview === 'string' &&
Array.isArray(extractedJson.technology_stack) &&
Array.isArray(extractedJson.core_capabilities) &&
Array.isArray(extractedJson.implemented_features)
) {
logger.info('✅ Successfully extracted JSON from response text');
xmlContent = specToXml(extractedJson);
logger.info(`✅ Converted extracted JSON to XML: ${xmlContent.length} chars`);
} else {
// Neither XML nor valid JSON found
logger.error('❌ Response does not contain valid XML or JSON structure');
logger.error(
'This typically happens when structured output failed and the agent produced conversational text instead of structured output'
);
throw new Error(
'Failed to generate spec: No valid XML or JSON structure found in response. ' +
'The response contained conversational text but no <project_specification> tags or valid JSON. ' +
'Please try again.'
);
}
}
}

View File

@@ -7,6 +7,7 @@ import type { EventEmitter } from '../../lib/events.js';
import { createCreateHandler } from './routes/create.js';
import { createGenerateHandler } from './routes/generate.js';
import { createGenerateFeaturesHandler } from './routes/generate-features.js';
import { createSyncHandler } from './routes/sync.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import type { SettingsService } from '../../services/settings-service.js';
@@ -20,6 +21,7 @@ export function createSpecRegenerationRoutes(
router.post('/create', createCreateHandler(events));
router.post('/generate', createGenerateHandler(events, settingsService));
router.post('/generate-features', createGenerateFeaturesHandler(events, settingsService));
router.post('/sync', createSyncHandler(events, settingsService));
router.post('/stop', createStopHandler());
router.get('/status', createStatusHandler());

View File

@@ -5,9 +5,10 @@
import path from 'path';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { createLogger, atomicWriteJson, DEFAULT_BACKUP_COUNT } from '@automaker/utils';
import { getFeaturesDir } from '@automaker/platform';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
import { getNotificationService } from '../../services/notification-service.js';
const logger = createLogger('SpecRegeneration');
@@ -73,10 +74,10 @@ export async function parseAndCreateFeatures(
updatedAt: new Date().toISOString(),
};
await secureFs.writeFile(
path.join(featureDir, 'feature.json'),
JSON.stringify(featureData, null, 2)
);
// Use atomic write with backup support for crash protection
await atomicWriteJson(path.join(featureDir, 'feature.json'), featureData, {
backupCount: DEFAULT_BACKUP_COUNT,
});
createdFeatures.push({ id: feature.id, title: feature.title });
}
@@ -88,6 +89,15 @@ export async function parseAndCreateFeatures(
message: `Spec regeneration complete! Created ${createdFeatures.length} features.`,
projectPath: projectPath,
});
// Create notification for spec generation completion
const notificationService = getNotificationService();
await notificationService.createNotification({
type: 'spec_regeneration_complete',
title: 'Spec Generation Complete',
message: `Created ${createdFeatures.length} features from the project specification.`,
projectPath: projectPath,
});
} catch (error) {
logger.error('❌ parseAndCreateFeatures() failed:');
logger.error('Error:', error);

View File

@@ -47,17 +47,17 @@ export function createCreateHandler(events: EventEmitter) {
return;
}
const { isRunning } = getSpecRegenerationStatus();
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation already running, rejecting request');
res.json({ success: false, error: 'Spec generation already running' });
logger.warn('Generation already running for project:', projectPath);
res.json({ success: false, error: 'Spec generation already running for this project' });
return;
}
logAuthStatus('Before starting generation');
const abortController = new AbortController();
setRunningState(true, abortController);
setRunningState(projectPath, true, abortController);
logger.info('Starting background generation task...');
// Start generation in background
@@ -80,7 +80,7 @@ export function createCreateHandler(events: EventEmitter) {
})
.finally(() => {
logger.info('Generation task finished (success or error)');
setRunningState(false, null);
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -40,17 +40,17 @@ export function createGenerateFeaturesHandler(
return;
}
const { isRunning } = getSpecRegenerationStatus();
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation already running, rejecting request');
res.json({ success: false, error: 'Generation already running' });
logger.warn('Generation already running for project:', projectPath);
res.json({ success: false, error: 'Generation already running for this project' });
return;
}
logAuthStatus('Before starting feature generation');
const abortController = new AbortController();
setRunningState(true, abortController);
setRunningState(projectPath, true, abortController, 'feature_generation');
logger.info('Starting background feature generation task...');
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures, settingsService)
@@ -63,7 +63,7 @@ export function createGenerateFeaturesHandler(
})
.finally(() => {
logger.info('Feature generation task finished (success or error)');
setRunningState(false, null);
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -48,17 +48,17 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
return;
}
const { isRunning } = getSpecRegenerationStatus();
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation already running, rejecting request');
res.json({ success: false, error: 'Spec generation already running' });
logger.warn('Generation already running for project:', projectPath);
res.json({ success: false, error: 'Spec generation already running for this project' });
return;
}
logAuthStatus('Before starting generation');
const abortController = new AbortController();
setRunningState(true, abortController);
setRunningState(projectPath, true, abortController);
logger.info('Starting background generation task...');
generateSpec(
@@ -81,7 +81,7 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
})
.finally(() => {
logger.info('Generation task finished (success or error)');
setRunningState(false, null);
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -6,10 +6,11 @@ import type { Request, Response } from 'express';
import { getSpecRegenerationStatus, getErrorMessage } from '../common.js';
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
return async (req: Request, res: Response): Promise<void> => {
try {
const { isRunning } = getSpecRegenerationStatus();
res.json({ success: true, isRunning });
const projectPath = req.query.projectPath as string | undefined;
const { isRunning } = getSpecRegenerationStatus(projectPath);
res.json({ success: true, isRunning, projectPath });
} catch (error) {
res.status(500).json({ success: false, error: getErrorMessage(error) });
}

View File

@@ -6,13 +6,16 @@ import type { Request, Response } from 'express';
import { getSpecRegenerationStatus, setRunningState, getErrorMessage } from '../common.js';
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
return async (req: Request, res: Response): Promise<void> => {
try {
const { currentAbortController } = getSpecRegenerationStatus();
const { projectPath } = req.body as { projectPath?: string };
const { currentAbortController } = getSpecRegenerationStatus(projectPath);
if (currentAbortController) {
currentAbortController.abort();
}
setRunningState(false, null);
if (projectPath) {
setRunningState(projectPath, false, null);
}
res.json({ success: true });
} catch (error) {
res.status(500).json({ success: false, error: getErrorMessage(error) });

View File

@@ -0,0 +1,76 @@
/**
* POST /sync endpoint - Sync spec with codebase and features
*/
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
import { createLogger } from '@automaker/utils';
import {
getSpecRegenerationStatus,
setRunningState,
logAuthStatus,
logError,
getErrorMessage,
} from '../common.js';
import { syncSpec } from '../sync-spec.js';
import type { SettingsService } from '../../../services/settings-service.js';
const logger = createLogger('SpecSync');
export function createSyncHandler(events: EventEmitter, settingsService?: SettingsService) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /sync endpoint called ==========');
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
try {
const { projectPath } = req.body as {
projectPath: string;
};
logger.debug('projectPath:', projectPath);
if (!projectPath) {
logger.error('Missing projectPath parameter');
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation/sync already running for project:', projectPath);
res.json({ success: false, error: 'Operation already running for this project' });
return;
}
logAuthStatus('Before starting spec sync');
const abortController = new AbortController();
setRunningState(projectPath, true, abortController, 'sync');
logger.info('Starting background spec sync task...');
syncSpec(projectPath, events, abortController, settingsService)
.then((result) => {
logger.info('Spec sync completed successfully');
logger.info('Result:', JSON.stringify(result, null, 2));
})
.catch((error) => {
logError(error, 'Spec sync failed with error');
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_error',
error: getErrorMessage(error),
projectPath,
});
})
.finally(() => {
logger.info('Spec sync task finished (success or error)');
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (sync running in background)');
res.json({ success: true });
} catch (error) {
logError(error, 'Sync route handler failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,328 @@
/**
* Sync spec with current codebase and feature state
*
* Updates the spec file based on:
* - Completed Automaker features
* - Code analysis for tech stack and implementations
* - Roadmap phase status updates
*/
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import {
getAutoLoadClaudeMdSetting,
getPhaseModelWithOverrides,
} from '../../lib/settings-helpers.js';
import { FeatureLoader } from '../../services/feature-loader.js';
import {
extractImplementedFeatures,
extractTechnologyStack,
extractRoadmapPhases,
updateImplementedFeaturesSection,
updateTechnologyStack,
updateRoadmapPhaseStatus,
type ImplementedFeature,
type RoadmapPhase,
} from '../../lib/xml-extractor.js';
import { getNotificationService } from '../../services/notification-service.js';
const logger = createLogger('SpecSync');
/**
* Result of a sync operation
*/
export interface SyncResult {
techStackUpdates: {
added: string[];
removed: string[];
};
implementedFeaturesUpdates: {
addedFromFeatures: string[];
removed: string[];
};
roadmapUpdates: Array<{ phaseName: string; newStatus: string }>;
summary: string;
}
/**
* Sync the spec with current codebase and feature state
*/
export async function syncSpec(
projectPath: string,
events: EventEmitter,
abortController: AbortController,
settingsService?: SettingsService
): Promise<SyncResult> {
logger.info('========== syncSpec() started ==========');
logger.info('projectPath:', projectPath);
const result: SyncResult = {
techStackUpdates: { added: [], removed: [] },
implementedFeaturesUpdates: { addedFromFeatures: [], removed: [] },
roadmapUpdates: [],
summary: '',
};
// Read existing spec
const specPath = getAppSpecPath(projectPath);
let specContent: string;
try {
specContent = (await secureFs.readFile(specPath, 'utf-8')) as string;
logger.info(`Spec loaded successfully (${specContent.length} chars)`);
} catch (readError) {
logger.error('Failed to read spec file:', readError);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_error',
error: 'No project spec found. Create or regenerate spec first.',
projectPath,
});
throw new Error('No project spec found');
}
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: '[Phase: sync] Starting spec sync...\n',
projectPath,
});
// Extract current state from spec
const currentImplementedFeatures = extractImplementedFeatures(specContent);
const currentTechStack = extractTechnologyStack(specContent);
const currentRoadmapPhases = extractRoadmapPhases(specContent);
logger.info(`Current spec has ${currentImplementedFeatures.length} implemented features`);
logger.info(`Current spec has ${currentTechStack.length} technologies`);
logger.info(`Current spec has ${currentRoadmapPhases.length} roadmap phases`);
// Load completed Automaker features
const featureLoader = new FeatureLoader();
const allFeatures = await featureLoader.getAll(projectPath);
const completedFeatures = allFeatures.filter(
(f) => f.status === 'completed' || f.status === 'verified'
);
logger.info(`Found ${completedFeatures.length} completed/verified features in Automaker`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: `Found ${completedFeatures.length} completed features to sync...\n`,
projectPath,
});
// Build new implemented features list from completed Automaker features
const newImplementedFeatures: ImplementedFeature[] = [];
const existingNames = new Set(currentImplementedFeatures.map((f) => f.name.toLowerCase()));
for (const feature of completedFeatures) {
const name = feature.title || `Feature: ${feature.id}`;
if (!existingNames.has(name.toLowerCase())) {
newImplementedFeatures.push({
name,
description: feature.description || '',
});
result.implementedFeaturesUpdates.addedFromFeatures.push(name);
}
}
// Merge: keep existing + add new from completed features
const mergedFeatures = [...currentImplementedFeatures, ...newImplementedFeatures];
// Update spec with merged features
if (result.implementedFeaturesUpdates.addedFromFeatures.length > 0) {
specContent = updateImplementedFeaturesSection(specContent, mergedFeatures);
logger.info(
`Added ${result.implementedFeaturesUpdates.addedFromFeatures.length} features to spec`
);
}
// Analyze codebase for tech stack updates using AI
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: 'Analyzing codebase for technology updates...\n',
projectPath,
});
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[SpecSync]'
);
// Get model from phase settings with provider info
const {
phaseModel: phaseModelEntry,
provider,
credentials,
} = settingsService
? await getPhaseModelWithOverrides(
'specGenerationModel',
settingsService,
projectPath,
'[SpecSync]'
)
: {
phaseModel: DEFAULT_PHASE_MODELS.specGenerationModel,
provider: undefined,
credentials: undefined,
};
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info('Using model:', model, provider ? `via provider: ${provider.name}` : 'direct API');
// Use AI to analyze tech stack
const techAnalysisPrompt = `Analyze this project and return ONLY a JSON object with the current technology stack.
Current known technologies: ${currentTechStack.join(', ')}
Look at package.json, config files, and source code to identify:
- Frameworks (React, Vue, Express, etc.)
- Languages (TypeScript, JavaScript, Python, etc.)
- Build tools (Vite, Webpack, etc.)
- Databases (PostgreSQL, MongoDB, etc.)
- Key libraries and tools
Return ONLY this JSON format, no other text:
{
"technologies": ["Technology 1", "Technology 2", ...]
}`;
try {
const techResult = await streamingQuery({
prompt: techAnalysisPrompt,
model,
cwd: projectPath,
maxTurns: 10,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
thinkingLevel,
readOnly: true,
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
onText: (text) => {
logger.debug(`Tech analysis text: ${text.substring(0, 100)}`);
},
});
// Parse tech stack from response
const jsonMatch = techResult.text.match(/\{[\s\S]*"technologies"[\s\S]*\}/);
if (jsonMatch) {
const parsed = JSON.parse(jsonMatch[0]);
if (Array.isArray(parsed.technologies)) {
const newTechStack = parsed.technologies as string[];
// Calculate differences
const currentSet = new Set(currentTechStack.map((t) => t.toLowerCase()));
const newSet = new Set(newTechStack.map((t) => t.toLowerCase()));
for (const tech of newTechStack) {
if (!currentSet.has(tech.toLowerCase())) {
result.techStackUpdates.added.push(tech);
}
}
for (const tech of currentTechStack) {
if (!newSet.has(tech.toLowerCase())) {
result.techStackUpdates.removed.push(tech);
}
}
// Update spec with new tech stack if there are changes
if (
result.techStackUpdates.added.length > 0 ||
result.techStackUpdates.removed.length > 0
) {
specContent = updateTechnologyStack(specContent, newTechStack);
logger.info(
`Updated tech stack: +${result.techStackUpdates.added.length}, -${result.techStackUpdates.removed.length}`
);
}
}
}
} catch (error) {
logger.warn('Failed to analyze tech stack:', error);
// Continue with other sync operations
}
// Update roadmap phase statuses based on completed features
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: 'Checking roadmap phase statuses...\n',
projectPath,
});
// For each phase, check if all its features are completed
// This is a heuristic - we check if the phase name appears in any feature titles/descriptions
for (const phase of currentRoadmapPhases) {
if (phase.status === 'completed') continue; // Already completed
// Check if this phase should be marked as completed
// A phase is considered complete if we have completed features that mention it
const phaseNameLower = phase.name.toLowerCase();
const relatedCompletedFeatures = completedFeatures.filter(
(f) =>
f.title?.toLowerCase().includes(phaseNameLower) ||
f.description?.toLowerCase().includes(phaseNameLower) ||
f.category?.toLowerCase().includes(phaseNameLower)
);
// If we have related completed features and the phase is still pending/in_progress,
// update it to in_progress or completed based on feature count
if (relatedCompletedFeatures.length > 0 && phase.status !== 'completed') {
const newStatus = 'in_progress';
specContent = updateRoadmapPhaseStatus(specContent, phase.name, newStatus);
result.roadmapUpdates.push({ phaseName: phase.name, newStatus });
logger.info(`Updated phase "${phase.name}" to ${newStatus}`);
}
}
// Save updated spec
await secureFs.writeFile(specPath, specContent, 'utf-8');
logger.info('Spec saved successfully');
// Build summary
const summaryParts: string[] = [];
if (result.implementedFeaturesUpdates.addedFromFeatures.length > 0) {
summaryParts.push(
`Added ${result.implementedFeaturesUpdates.addedFromFeatures.length} implemented features`
);
}
if (result.techStackUpdates.added.length > 0) {
summaryParts.push(`Added ${result.techStackUpdates.added.length} technologies`);
}
if (result.techStackUpdates.removed.length > 0) {
summaryParts.push(`Removed ${result.techStackUpdates.removed.length} technologies`);
}
if (result.roadmapUpdates.length > 0) {
summaryParts.push(`Updated ${result.roadmapUpdates.length} roadmap phases`);
}
result.summary = summaryParts.length > 0 ? summaryParts.join(', ') : 'Spec is already up to date';
// Create notification
const notificationService = getNotificationService();
await notificationService.createNotification({
type: 'spec_regeneration_complete',
title: 'Spec Sync Complete',
message: result.summary,
projectPath,
});
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_complete',
message: `Spec sync complete! ${result.summary}`,
projectPath,
});
logger.info('========== syncSpec() completed ==========');
logger.info('Summary:', result.summary);
return result;
}

View File

@@ -117,9 +117,27 @@ export function createAuthRoutes(): Router {
*
* Returns whether the current request is authenticated.
* Used by the UI to determine if login is needed.
*
* If AUTOMAKER_AUTO_LOGIN=true is set, automatically creates a session
* for unauthenticated requests (useful for development).
*/
router.get('/status', (req, res) => {
const authenticated = isRequestAuthenticated(req);
router.get('/status', async (req, res) => {
let authenticated = isRequestAuthenticated(req);
// Auto-login for development: create session automatically if enabled
// Only works in non-production environments as a safeguard
if (
!authenticated &&
process.env.AUTOMAKER_AUTO_LOGIN === 'true' &&
process.env.NODE_ENV !== 'production'
) {
const sessionToken = await createSession();
const cookieOptions = getSessionCookieOptions();
const cookieName = getSessionCookieName();
res.cookie(cookieName, sessionToken, cookieOptions);
authenticated = true;
}
res.json({
success: true,
authenticated,

View File

@@ -10,6 +10,8 @@ import { validatePathParams } from '../../middleware/validate-paths.js';
import { createStopFeatureHandler } from './routes/stop-feature.js';
import { createStatusHandler } from './routes/status.js';
import { createRunFeatureHandler } from './routes/run-feature.js';
import { createStartHandler } from './routes/start.js';
import { createStopHandler } from './routes/stop.js';
import { createVerifyFeatureHandler } from './routes/verify-feature.js';
import { createResumeFeatureHandler } from './routes/resume-feature.js';
import { createContextExistsHandler } from './routes/context-exists.js';
@@ -17,10 +19,15 @@ import { createAnalyzeProjectHandler } from './routes/analyze-project.js';
import { createFollowUpFeatureHandler } from './routes/follow-up-feature.js';
import { createCommitFeatureHandler } from './routes/commit-feature.js';
import { createApprovePlanHandler } from './routes/approve-plan.js';
import { createResumeInterruptedHandler } from './routes/resume-interrupted.js';
export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
const router = Router();
// Auto loop control routes
router.post('/start', validatePathParams('projectPath'), createStartHandler(autoModeService));
router.post('/stop', validatePathParams('projectPath'), createStopHandler(autoModeService));
router.post('/stop-feature', createStopFeatureHandler(autoModeService));
router.post('/status', validatePathParams('projectPath?'), createStatusHandler(autoModeService));
router.post(
@@ -63,6 +70,11 @@ export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
validatePathParams('projectPath'),
createApprovePlanHandler(autoModeService)
);
router.post(
'/resume-interrupted',
validatePathParams('projectPath'),
createResumeInterruptedHandler(autoModeService)
);
return router;
}

View File

@@ -0,0 +1,42 @@
/**
* Resume Interrupted Features Handler
*
* Checks for features that were interrupted (in pipeline steps or in_progress)
* when the server was restarted and resumes them.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
const logger = createLogger('ResumeInterrupted');
interface ResumeInterruptedRequest {
projectPath: string;
}
export function createResumeInterruptedHandler(autoModeService: AutoModeService) {
return async (req: Request, res: Response): Promise<void> => {
const { projectPath } = req.body as ResumeInterruptedRequest;
if (!projectPath) {
res.status(400).json({ error: 'Project path is required' });
return;
}
logger.info(`Checking for interrupted features in ${projectPath}`);
try {
await autoModeService.resumeInterruptedFeatures(projectPath);
res.json({
success: true,
message: 'Resume check completed',
});
} catch (error) {
logger.error('Error resuming interrupted features:', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
};
}

View File

@@ -26,6 +26,24 @@ export function createRunFeatureHandler(autoModeService: AutoModeService) {
return;
}
// Check per-worktree capacity before starting
const capacity = await autoModeService.checkWorktreeCapacity(projectPath, featureId);
if (!capacity.hasCapacity) {
const worktreeDesc = capacity.branchName
? `worktree "${capacity.branchName}"`
: 'main worktree';
res.status(429).json({
success: false,
error: `Agent limit reached for ${worktreeDesc} (${capacity.currentAgents}/${capacity.maxAgents}). Wait for running tasks to complete or increase the limit.`,
details: {
currentAgents: capacity.currentAgents,
maxAgents: capacity.maxAgents,
branchName: capacity.branchName,
},
});
return;
}
// Start execution in background
// executeFeature derives workDir from feature.branchName
autoModeService

View File

@@ -0,0 +1,67 @@
/**
* POST /start endpoint - Start auto mode loop for a project
*/
import type { Request, Response } from 'express';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
const logger = createLogger('AutoMode');
export function createStartHandler(autoModeService: AutoModeService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, branchName, maxConcurrency } = req.body as {
projectPath: string;
branchName?: string | null;
maxConcurrency?: number;
};
if (!projectPath) {
res.status(400).json({
success: false,
error: 'projectPath is required',
});
return;
}
// Normalize branchName: undefined becomes null
const normalizedBranchName = branchName ?? null;
const worktreeDesc = normalizedBranchName
? `worktree ${normalizedBranchName}`
: 'main worktree';
// Check if already running
if (autoModeService.isAutoLoopRunningForProject(projectPath, normalizedBranchName)) {
res.json({
success: true,
message: `Auto mode is already running for ${worktreeDesc}`,
alreadyRunning: true,
branchName: normalizedBranchName,
});
return;
}
// Start the auto loop for this project/worktree
const resolvedMaxConcurrency = await autoModeService.startAutoLoopForProject(
projectPath,
normalizedBranchName,
maxConcurrency
);
logger.info(
`Started auto loop for ${worktreeDesc} in project: ${projectPath} with maxConcurrency: ${resolvedMaxConcurrency}`
);
res.json({
success: true,
message: `Auto mode started with max ${resolvedMaxConcurrency} concurrent features`,
branchName: normalizedBranchName,
});
} catch (error) {
logError(error, 'Start auto mode failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -1,5 +1,8 @@
/**
* POST /status endpoint - Get auto mode status
*
* If projectPath is provided, returns per-project status including autoloop state.
* If no projectPath, returns global status for backward compatibility.
*/
import type { Request, Response } from 'express';
@@ -9,10 +12,41 @@ import { getErrorMessage, logError } from '../common.js';
export function createStatusHandler(autoModeService: AutoModeService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, branchName } = req.body as {
projectPath?: string;
branchName?: string | null;
};
// If projectPath is provided, return per-project/worktree status
if (projectPath) {
// Normalize branchName: undefined becomes null
const normalizedBranchName = branchName ?? null;
const projectStatus = autoModeService.getStatusForProject(
projectPath,
normalizedBranchName
);
res.json({
success: true,
isRunning: projectStatus.runningCount > 0,
isAutoLoopRunning: projectStatus.isAutoLoopRunning,
runningFeatures: projectStatus.runningFeatures,
runningCount: projectStatus.runningCount,
maxConcurrency: projectStatus.maxConcurrency,
projectPath,
branchName: normalizedBranchName,
});
return;
}
// Fall back to global status for backward compatibility
const status = autoModeService.getStatus();
const activeProjects = autoModeService.getActiveAutoLoopProjects();
const activeWorktrees = autoModeService.getActiveAutoLoopWorktrees();
res.json({
success: true,
...status,
activeAutoLoopProjects: activeProjects,
activeAutoLoopWorktrees: activeWorktrees,
});
} catch (error) {
logError(error, 'Get status failed');

View File

@@ -0,0 +1,66 @@
/**
* POST /stop endpoint - Stop auto mode loop for a project
*/
import type { Request, Response } from 'express';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
const logger = createLogger('AutoMode');
export function createStopHandler(autoModeService: AutoModeService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, branchName } = req.body as {
projectPath: string;
branchName?: string | null;
};
if (!projectPath) {
res.status(400).json({
success: false,
error: 'projectPath is required',
});
return;
}
// Normalize branchName: undefined becomes null
const normalizedBranchName = branchName ?? null;
const worktreeDesc = normalizedBranchName
? `worktree ${normalizedBranchName}`
: 'main worktree';
// Check if running
if (!autoModeService.isAutoLoopRunningForProject(projectPath, normalizedBranchName)) {
res.json({
success: true,
message: `Auto mode is not running for ${worktreeDesc}`,
wasRunning: false,
branchName: normalizedBranchName,
});
return;
}
// Stop the auto loop for this project/worktree
const runningCount = await autoModeService.stopAutoLoopForProject(
projectPath,
normalizedBranchName
);
logger.info(
`Stopped auto loop for ${worktreeDesc} in project: ${projectPath}, ${runningCount} features still running`
);
res.json({
success: true,
message: 'Auto mode stopped',
runningFeaturesCount: runningCount,
branchName: normalizedBranchName,
});
} catch (error) {
logError(error, 'Stop auto mode failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -3,12 +3,31 @@
*/
import { createLogger } from '@automaker/utils';
import { ensureAutomakerDir, getAutomakerDir } from '@automaker/platform';
import * as secureFs from '../../lib/secure-fs.js';
import path from 'path';
import type { BacklogPlanResult } from '@automaker/types';
const logger = createLogger('BacklogPlan');
// State for tracking running generation
let isRunning = false;
let currentAbortController: AbortController | null = null;
let runningDetails: {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null = null;
const BACKLOG_PLAN_FILENAME = 'backlog-plan.json';
export interface StoredBacklogPlan {
savedAt: string;
prompt: string;
model?: string;
result: BacklogPlanResult;
}
export function getBacklogPlanStatus(): { isRunning: boolean } {
return { isRunning };
@@ -16,20 +35,125 @@ export function getBacklogPlanStatus(): { isRunning: boolean } {
export function setRunningState(running: boolean, abortController?: AbortController | null): void {
isRunning = running;
if (!running) {
runningDetails = null;
}
if (abortController !== undefined) {
currentAbortController = abortController;
}
}
export function setRunningDetails(
details: {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null
): void {
runningDetails = details;
}
export function getRunningDetails(): {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null {
return runningDetails;
}
function getBacklogPlanPath(projectPath: string): string {
return path.join(getAutomakerDir(projectPath), BACKLOG_PLAN_FILENAME);
}
export async function saveBacklogPlan(projectPath: string, plan: StoredBacklogPlan): Promise<void> {
await ensureAutomakerDir(projectPath);
const filePath = getBacklogPlanPath(projectPath);
await secureFs.writeFile(filePath, JSON.stringify(plan, null, 2), 'utf-8');
}
export async function loadBacklogPlan(projectPath: string): Promise<StoredBacklogPlan | null> {
try {
const filePath = getBacklogPlanPath(projectPath);
const raw = await secureFs.readFile(filePath, 'utf-8');
const parsed = JSON.parse(raw as string) as StoredBacklogPlan;
if (!Array.isArray(parsed?.result?.changes)) {
return null;
}
return parsed;
} catch {
return null;
}
}
export async function clearBacklogPlan(projectPath: string): Promise<void> {
try {
const filePath = getBacklogPlanPath(projectPath);
await secureFs.unlink(filePath);
} catch {
// ignore missing file
}
}
export function getAbortController(): AbortController | null {
return currentAbortController;
}
export function getErrorMessage(error: unknown): string {
if (error instanceof Error) {
return error.message;
/**
* Map SDK/CLI errors to user-friendly messages
*/
export function mapBacklogPlanError(rawMessage: string): string {
// Claude Code spawn failures
if (
rawMessage.includes('Failed to spawn Claude Code process') ||
rawMessage.includes('spawn node ENOENT') ||
rawMessage.includes('Claude Code executable not found') ||
rawMessage.includes('Claude Code native binary not found')
) {
return 'Claude CLI could not be launched. Make sure the Claude CLI is installed and available in PATH, or check that Node.js is correctly installed. Try running "which claude" or "claude --version" in your terminal to verify.';
}
return String(error);
// Claude Code process crash
if (rawMessage.includes('Claude Code process exited')) {
return 'Claude exited unexpectedly. Try again. If it keeps happening, re-run `claude login` or update your API key in Setup.';
}
// Rate limiting
if (rawMessage.toLowerCase().includes('rate limit') || rawMessage.includes('429')) {
return 'Rate limited. Please wait a moment and try again.';
}
// Network errors
if (
rawMessage.toLowerCase().includes('network') ||
rawMessage.toLowerCase().includes('econnrefused') ||
rawMessage.toLowerCase().includes('timeout')
) {
return 'Network error. Check your internet connection and try again.';
}
// Authentication errors
if (
rawMessage.toLowerCase().includes('not authenticated') ||
rawMessage.toLowerCase().includes('unauthorized') ||
rawMessage.includes('401')
) {
return 'Authentication failed. Please check your API key or run `claude login` to authenticate.';
}
// Return original message for unknown errors
return rawMessage;
}
export function getErrorMessage(error: unknown): string {
let rawMessage: string;
if (error instanceof Error) {
rawMessage = error.message;
} else {
rawMessage = String(error);
}
return mapBacklogPlanError(rawMessage);
}
export function logError(error: unknown, context: string): void {

View File

@@ -17,9 +17,19 @@ import { resolvePhaseModel } from '@automaker/model-resolver';
import { FeatureLoader } from '../../services/feature-loader.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
import { logger, setRunningState, getErrorMessage } from './common.js';
import {
logger,
setRunningState,
setRunningDetails,
getErrorMessage,
saveBacklogPlan,
} from './common.js';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
import {
getAutoLoadClaudeMdSetting,
getPromptCustomization,
getPhaseModelWithOverrides,
} from '../../lib/settings-helpers.js';
const featureLoader = new FeatureLoader();
@@ -111,18 +121,42 @@ export async function generateBacklogPlan(
content: 'Generating plan with AI...',
});
// Get the model to use from settings or provided override
// Get the model to use from settings or provided override with provider info
let effectiveModel = model;
let thinkingLevel: ThinkingLevel | undefined;
if (!effectiveModel) {
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.backlogPlanningModel || DEFAULT_PHASE_MODELS.backlogPlanningModel;
const resolved = resolvePhaseModel(phaseModelEntry);
let claudeCompatibleProvider: import('@automaker/types').ClaudeCompatibleProvider | undefined;
let credentials: import('@automaker/types').Credentials | undefined;
if (effectiveModel) {
// Use explicit override - resolve model alias and get credentials
const resolved = resolvePhaseModel({ model: effectiveModel });
effectiveModel = resolved.model;
thinkingLevel = resolved.thinkingLevel;
credentials = await settingsService?.getCredentials();
} else if (settingsService) {
// Use settings-based model with provider info
const phaseResult = await getPhaseModelWithOverrides(
'backlogPlanningModel',
settingsService,
projectPath,
'[BacklogPlan]'
);
const resolved = resolvePhaseModel(phaseResult.phaseModel);
effectiveModel = resolved.model;
thinkingLevel = resolved.thinkingLevel;
claudeCompatibleProvider = phaseResult.provider;
credentials = phaseResult.credentials;
} else {
// Fallback to defaults
const resolved = resolvePhaseModel(DEFAULT_PHASE_MODELS.backlogPlanningModel);
effectiveModel = resolved.model;
thinkingLevel = resolved.thinkingLevel;
}
logger.info('[BacklogPlan] Using model:', effectiveModel);
logger.info(
'[BacklogPlan] Using model:',
effectiveModel,
claudeCompatibleProvider ? `via provider: ${claudeCompatibleProvider.name}` : 'direct API'
);
const provider = ProviderFactory.getProviderForModel(effectiveModel);
// Strip provider prefix - providers expect bare model IDs
@@ -167,6 +201,8 @@ ${userPrompt}`;
settingSources: autoLoadClaudeMd ? ['user', 'project'] : undefined,
readOnly: true, // Plan generation only generates text, doesn't write files
thinkingLevel, // Pass thinking level for extended thinking
claudeCompatibleProvider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
});
let responseText = '';
@@ -200,6 +236,13 @@ ${userPrompt}`;
// Parse the response
const result = parsePlanResponse(responseText);
await saveBacklogPlan(projectPath, {
savedAt: new Date().toISOString(),
prompt,
model: effectiveModel,
result,
});
events.emit('backlog-plan:event', {
type: 'backlog_plan_complete',
result,
@@ -218,5 +261,6 @@ ${userPrompt}`;
throw error;
} finally {
setRunningState(false, null);
setRunningDetails(null);
}
}

View File

@@ -9,6 +9,7 @@ import { createGenerateHandler } from './routes/generate.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import { createApplyHandler } from './routes/apply.js';
import { createClearHandler } from './routes/clear.js';
import type { SettingsService } from '../../services/settings-service.js';
export function createBacklogPlanRoutes(
@@ -23,8 +24,9 @@ export function createBacklogPlanRoutes(
createGenerateHandler(events, settingsService)
);
router.post('/stop', createStopHandler());
router.get('/status', createStatusHandler());
router.get('/status', validatePathParams('projectPath'), createStatusHandler());
router.post('/apply', validatePathParams('projectPath'), createApplyHandler());
router.post('/clear', validatePathParams('projectPath'), createClearHandler());
return router;
}

View File

@@ -5,18 +5,29 @@
import type { Request, Response } from 'express';
import type { BacklogPlanResult, BacklogChange, Feature } from '@automaker/types';
import { FeatureLoader } from '../../../services/feature-loader.js';
import { getErrorMessage, logError, logger } from '../common.js';
import { clearBacklogPlan, getErrorMessage, logError, logger } from '../common.js';
const featureLoader = new FeatureLoader();
export function createApplyHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, plan } = req.body as {
const {
projectPath,
plan,
branchName: rawBranchName,
} = req.body as {
projectPath: string;
plan: BacklogPlanResult;
branchName?: string;
};
// Validate branchName: must be undefined or a non-empty trimmed string
const branchName =
typeof rawBranchName === 'string' && rawBranchName.trim().length > 0
? rawBranchName.trim()
: undefined;
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
@@ -74,14 +85,16 @@ export function createApplyHandler() {
if (!change.feature) continue;
try {
// Create the new feature
// Create the new feature - use the AI-generated ID if provided
const newFeature = await featureLoader.create(projectPath, {
id: change.feature.id, // Use descriptive ID from AI if provided
title: change.feature.title,
description: change.feature.description || '',
category: change.feature.category || 'Uncategorized',
dependencies: change.feature.dependencies,
priority: change.feature.priority,
status: 'backlog',
branchName,
});
appliedChanges.push(`added:${newFeature.id}`);
@@ -135,6 +148,17 @@ export function createApplyHandler() {
}
}
// Clear the plan before responding
try {
await clearBacklogPlan(projectPath);
} catch (error) {
logger.warn(
`[BacklogPlan] Failed to clear backlog plan after apply:`,
getErrorMessage(error)
);
// Don't throw - operation succeeded, just cleanup failed
}
res.json({
success: true,
appliedChanges,

View File

@@ -0,0 +1,25 @@
/**
* POST /clear endpoint - Clear saved backlog plan
*/
import type { Request, Response } from 'express';
import { clearBacklogPlan, getErrorMessage, logError } from '../common.js';
export function createClearHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body as { projectPath: string };
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
await clearBacklogPlan(projectPath);
res.json({ success: true });
} catch (error) {
logError(error, 'Clear backlog plan failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -4,7 +4,13 @@
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
import { getBacklogPlanStatus, setRunningState, getErrorMessage, logError } from '../common.js';
import {
getBacklogPlanStatus,
setRunningState,
setRunningDetails,
getErrorMessage,
logError,
} from '../common.js';
import { generateBacklogPlan } from '../generate-plan.js';
import type { SettingsService } from '../../../services/settings-service.js';
@@ -37,20 +43,26 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
}
setRunningState(true);
setRunningDetails({
projectPath,
prompt,
model,
startedAt: new Date().toISOString(),
});
const abortController = new AbortController();
setRunningState(true, abortController);
// Start generation in background
// Note: generateBacklogPlan handles its own error event emission,
// so we only log here to avoid duplicate error toasts
generateBacklogPlan(projectPath, prompt, events, abortController, settingsService, model)
.catch((error) => {
// Just log - error event already emitted by generateBacklogPlan
logError(error, 'Generate backlog plan failed (background)');
events.emit('backlog-plan:event', {
type: 'backlog_plan_error',
error: getErrorMessage(error),
});
})
.finally(() => {
setRunningState(false, null);
setRunningDetails(null);
});
res.json({ success: true });

View File

@@ -3,13 +3,15 @@
*/
import type { Request, Response } from 'express';
import { getBacklogPlanStatus, getErrorMessage, logError } from '../common.js';
import { getBacklogPlanStatus, loadBacklogPlan, getErrorMessage, logError } from '../common.js';
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
return async (req: Request, res: Response): Promise<void> => {
try {
const status = getBacklogPlanStatus();
res.json({ success: true, ...status });
const projectPath = typeof req.query.projectPath === 'string' ? req.query.projectPath : '';
const savedPlan = projectPath ? await loadBacklogPlan(projectPath) : null;
res.json({ success: true, ...status, savedPlan });
} catch (error) {
logError(error, 'Get backlog plan status failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });

View File

@@ -3,7 +3,13 @@
*/
import type { Request, Response } from 'express';
import { getAbortController, setRunningState, getErrorMessage, logError } from '../common.js';
import {
getAbortController,
setRunningState,
setRunningDetails,
getErrorMessage,
logError,
} from '../common.js';
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
@@ -12,6 +18,7 @@ export function createStopHandler() {
if (abortController) {
abortController.abort();
setRunningState(false, null);
setRunningDetails(null);
}
res.json({ success: true });
} catch (error) {

View File

@@ -34,6 +34,13 @@ export function createClaudeRoutes(service: ClaudeUsageService): Router {
error: 'Authentication required',
message: "Please run 'claude login' to authenticate",
});
} else if (message.includes('TRUST_PROMPT_PENDING')) {
// Trust prompt appeared but couldn't be auto-approved
res.status(200).json({
error: 'Trust prompt pending',
message:
'Claude CLI needs folder permission. Please run "claude" in your terminal and approve access.',
});
} else if (message.includes('timed out')) {
res.status(200).json({
error: 'Command timed out',

View File

@@ -1,17 +1,21 @@
import { Router, Request, Response } from 'express';
import { CodexUsageService } from '../../services/codex-usage-service.js';
import { CodexModelCacheService } from '../../services/codex-model-cache-service.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Codex');
export function createCodexRoutes(service: CodexUsageService): Router {
export function createCodexRoutes(
usageService: CodexUsageService,
modelCacheService: CodexModelCacheService
): Router {
const router = Router();
// Get current usage (attempts to fetch from Codex CLI)
router.get('/usage', async (req: Request, res: Response) => {
router.get('/usage', async (_req: Request, res: Response) => {
try {
// Check if Codex CLI is available first
const isAvailable = await service.isAvailable();
const isAvailable = await usageService.isAvailable();
if (!isAvailable) {
// IMPORTANT: This endpoint is behind Automaker session auth already.
// Use a 200 + error payload for Codex CLI issues so the UI doesn't
@@ -23,7 +27,7 @@ export function createCodexRoutes(service: CodexUsageService): Router {
return;
}
const usage = await service.fetchUsageData();
const usage = await usageService.fetchUsageData();
res.json(usage);
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
@@ -52,5 +56,35 @@ export function createCodexRoutes(service: CodexUsageService): Router {
}
});
// Get available Codex models (cached)
router.get('/models', async (req: Request, res: Response) => {
try {
const forceRefresh = req.query.refresh === 'true';
const { models, cachedAt } = await modelCacheService.getModelsWithMetadata(forceRefresh);
if (models.length === 0) {
res.status(503).json({
success: false,
error: 'Codex CLI not available or not authenticated',
message: "Please install Codex CLI and run 'codex login' to authenticate",
});
return;
}
res.json({
success: true,
models,
cachedAt,
});
} catch (error) {
logger.error('Error fetching models:', error);
const message = error instanceof Error ? error.message : 'Unknown error';
res.status(500).json({
success: false,
error: message,
});
}
});
return router;
}

View File

@@ -11,17 +11,18 @@
*/
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { PathNotAllowedError } from '@automaker/platform';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import { simpleQuery } from '../../../providers/simple-query-service.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
import {
getAutoLoadClaudeMdSetting,
getPromptCustomization,
getPhaseModelWithOverrides,
} from '../../../lib/settings-helpers.js';
const logger = createLogger('DescribeFile');
@@ -49,31 +50,6 @@ interface DescribeFileErrorResponse {
error: string;
}
/**
* Extract text content from Claude SDK response messages
*/
async function extractTextFromStream(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
stream: AsyncIterable<any>
): Promise<string> {
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
const blocks = msg.message.content as Array<{ type: string; text?: string }>;
for (const block of blocks) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
return responseText;
}
/**
* Create the describe-file request handler
*
@@ -157,18 +133,17 @@ export function createDescribeFileHandler(
// Get the filename for context
const fileName = path.basename(resolvedPath);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[DescribeFile]');
// Build prompt with file content passed as structured data
// The file content is included directly, not via tool invocation
const instructionText = `Analyze the following file and provide a 1-2 sentence description suitable for use as context in an AI coding assistant. Focus on what the file contains, its purpose, and why an AI agent might want to use this context in the future (e.g., "API documentation for the authentication endpoints", "Configuration file for database connections", "Coding style guidelines for the project").
const prompt = `${prompts.contextDescription.describeFilePrompt}
Respond with ONLY the description text, no additional formatting, preamble, or explanation.
File: ${fileName}${truncated ? ' (truncated)' : ''}
File: ${fileName}${truncated ? ' (truncated)' : ''}`;
const promptContent = [
{ type: 'text' as const, text: instructionText },
{ type: 'text' as const, text: `\n\n--- FILE CONTENT ---\n${contentToAnalyze}` },
];
--- FILE CONTENT ---
${contentToAnalyze}`;
// Use the file's directory as the working directory
const cwd = path.dirname(resolvedPath);
@@ -180,77 +155,39 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
'[DescribeFile]'
);
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
logger.info(`Raw phaseModels from settings:`, JSON.stringify(settings?.phaseModels, null, 2));
const phaseModelEntry =
settings?.phaseModels?.fileDescriptionModel || DEFAULT_PHASE_MODELS.fileDescriptionModel;
logger.info(`fileDescriptionModel entry:`, JSON.stringify(phaseModelEntry));
// Get model from phase settings with provider info
const {
phaseModel: phaseModelEntry,
provider,
credentials,
} = await getPhaseModelWithOverrides(
'fileDescriptionModel',
settingsService,
cwd,
'[DescribeFile]'
);
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info(`Resolved model: ${model}, thinkingLevel: ${thinkingLevel}`);
logger.info(
`Resolved model: ${model}, thinkingLevel: ${thinkingLevel}`,
provider ? `via provider: ${provider.name}` : 'direct API'
);
let description: string;
// Use simpleQuery - provider abstraction handles routing to correct provider
const result = await simpleQuery({
prompt,
model,
cwd,
maxTurns: 1,
allowedTools: [],
thinkingLevel,
readOnly: true, // File description only reads, doesn't write
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
});
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info(`Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build a simple text prompt for Cursor (no multi-part content blocks)
const cursorPrompt = `${instructionText}\n\n--- FILE CONTENT ---\n${contentToAnalyze}`;
let responseText = '';
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: [],
readOnly: true, // File description only reads, doesn't write
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
}
description = responseText;
} else {
// Use Claude SDK for Claude models
logger.info(`Using Claude SDK for model: ${model}`);
// Use centralized SDK options with proper cwd validation
// No tools needed since we're passing file content directly
const sdkOptions = createCustomOptions({
cwd,
model,
maxTurns: 1,
allowedTools: [],
autoLoadClaudeMd,
thinkingLevel, // Pass thinking level for extended thinking
});
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
// Extract the description from the response
description = await extractTextFromStream(stream);
}
const description = result.text;
if (!description || description.trim().length === 0) {
logger.warn('Received empty response from Claude');

View File

@@ -12,16 +12,18 @@
*/
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger, readImageAsBase64 } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { isCursorModel } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import { simpleQuery } from '../../../providers/simple-query-service.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
import {
getAutoLoadClaudeMdSetting,
getPromptCustomization,
getPhaseModelWithOverrides,
} from '../../../lib/settings-helpers.js';
const logger = createLogger('DescribeImage');
@@ -178,57 +180,10 @@ function mapDescribeImageError(rawMessage: string | undefined): {
return baseResponse;
}
/**
* Extract text content from Claude SDK response messages and log high-signal stream events.
*/
async function extractTextFromStream(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
stream: AsyncIterable<any>,
requestId: string
): Promise<string> {
let responseText = '';
let messageCount = 0;
logger.info(`[${requestId}] [Stream] Begin reading SDK stream...`);
for await (const msg of stream) {
messageCount++;
const msgType = msg?.type;
const msgSubtype = msg?.subtype;
// Keep this concise but informative. Full error object is logged in catch blocks.
logger.info(
`[${requestId}] [Stream] #${messageCount} type=${String(msgType)} subtype=${String(msgSubtype ?? '')}`
);
if (msgType === 'assistant' && msg.message?.content) {
const blocks = msg.message.content as Array<{ type: string; text?: string }>;
logger.info(`[${requestId}] [Stream] assistant blocks=${blocks.length}`);
for (const block of blocks) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
if (msgType === 'result' && msgSubtype === 'success') {
if (typeof msg.result === 'string' && msg.result.length > 0) {
responseText = msg.result;
}
}
}
logger.info(
`[${requestId}] [Stream] End of stream. messages=${messageCount} textLength=${responseText.length}`
);
return responseText;
}
/**
* Create the describe-image request handler
*
* Uses Claude SDK query with multi-part content blocks to include the image (base64),
* Uses the provider abstraction with multi-part content blocks to include the image (base64),
* matching the agent runner behavior.
*
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
@@ -309,27 +264,6 @@ export function createDescribeImageHandler(
`[${requestId}] image meta filename=${imageData.filename} mime=${imageData.mimeType} base64Len=${base64Length} estBytes=${estimatedBytes}`
);
// Build multi-part prompt with image block (no Read tool required)
const instructionText =
`Describe this image in 1-2 sentences suitable for use as context in an AI coding assistant. ` +
`Focus on what the image shows and its purpose (e.g., "UI mockup showing login form with email/password fields", ` +
`"Architecture diagram of microservices", "Screenshot of error message in terminal").\n\n` +
`Respond with ONLY the description text, no additional formatting, preamble, or explanation.`;
const promptContent = [
{ type: 'text' as const, text: instructionText },
{
type: 'image' as const,
source: {
type: 'base64' as const,
media_type: imageData.mimeType,
data: imageData.base64,
},
},
];
logger.info(`[${requestId}] Built multi-part prompt blocks=${promptContent.length}`);
const cwd = path.dirname(actualPath);
logger.info(`[${requestId}] Using cwd=${cwd}`);
@@ -340,93 +274,78 @@ export function createDescribeImageHandler(
'[DescribeImage]'
);
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.imageDescriptionModel || DEFAULT_PHASE_MODELS.imageDescriptionModel;
// Get model from phase settings with provider info
const {
phaseModel: phaseModelEntry,
provider,
credentials,
} = await getPhaseModelWithOverrides(
'imageDescriptionModel',
settingsService,
cwd,
'[DescribeImage]'
);
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info(`[${requestId}] Using model: ${model}`);
logger.info(
`[${requestId}] Using model: ${model}`,
provider ? `via provider: ${provider.name}` : 'direct API'
);
let description: string;
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[DescribeImage]');
// Build the instruction text from centralized prompts
const instructionText = prompts.contextDescription.describeImagePrompt;
// Build prompt based on provider capability
// Some providers (like Cursor) may not support image content blocks
let prompt: string | Array<{ type: string; text?: string; source?: object }>;
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
// Note: Cursor may have limited support for image content blocks
logger.info(`[${requestId}] Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build prompt with image reference for Cursor
// Note: Cursor CLI may not support base64 image blocks directly,
// so we include the image path as context
const cursorPrompt = `${instructionText}\n\nImage file: ${actualPath}\nMIME type: ${imageData.mimeType}`;
let responseText = '';
const queryStart = Date.now();
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: ['Read'], // Allow Read tool so Cursor can read the image if needed
readOnly: true, // Image description only reads, doesn't write
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
}
logger.info(`[${requestId}] Cursor query completed in ${Date.now() - queryStart}ms`);
description = responseText;
// Cursor may not support base64 image blocks directly
// Use text prompt with image path reference
logger.info(`[${requestId}] Using text prompt for Cursor model`);
prompt = `${instructionText}\n\nImage file: ${actualPath}\nMIME type: ${imageData.mimeType}`;
} else {
// Use Claude SDK for Claude models (supports image content blocks)
logger.info(`[${requestId}] Using Claude SDK for model: ${model}`);
// Use the same centralized option builder used across the server (validates cwd)
const sdkOptions = createCustomOptions({
cwd,
model,
maxTurns: 1,
allowedTools: [],
autoLoadClaudeMd,
thinkingLevel, // Pass thinking level for extended thinking
});
logger.info(
`[${requestId}] SDK options model=${sdkOptions.model} maxTurns=${sdkOptions.maxTurns} allowedTools=${JSON.stringify(
sdkOptions.allowedTools
)}`
);
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
logger.info(`[${requestId}] Calling query()...`);
const queryStart = Date.now();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
logger.info(`[${requestId}] query() returned stream in ${Date.now() - queryStart}ms`);
// Extract the description from the response
const extractStart = Date.now();
description = await extractTextFromStream(stream, requestId);
logger.info(`[${requestId}] extractMs=${Date.now() - extractStart}`);
// Claude and other vision-capable models support multi-part prompts with images
logger.info(`[${requestId}] Using multi-part prompt with image block`);
prompt = [
{ type: 'text', text: instructionText },
{
type: 'image',
source: {
type: 'base64',
media_type: imageData.mimeType,
data: imageData.base64,
},
},
];
}
logger.info(`[${requestId}] Calling simpleQuery...`);
const queryStart = Date.now();
// Use simpleQuery - provider abstraction handles routing
const result = await simpleQuery({
prompt,
model,
cwd,
maxTurns: 1,
allowedTools: isCursorModel(model) ? ['Read'] : [], // Allow Read for Cursor to read image if needed
thinkingLevel,
readOnly: true, // Image description only reads, doesn't write
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
claudeCompatibleProvider: provider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
});
logger.info(`[${requestId}] simpleQuery completed in ${Date.now() - queryStart}ms`);
const description = result.text;
if (!description || description.trim().length === 0) {
logger.warn(`[${requestId}] Received empty response from Claude`);
logger.warn(`[${requestId}] Received empty response from AI`);
const response: DescribeImageErrorResponse = {
success: false,
error: 'Failed to generate description - empty response',

View File

@@ -1,24 +1,18 @@
/**
* POST /enhance-prompt endpoint - Enhance user input text
*
* Uses Claude AI or Cursor to enhance text based on the specified enhancement mode.
* Supports modes: improve, technical, simplify, acceptance
* Uses the provider abstraction to enhance text based on the specified
* enhancement mode. Works with any configured provider (Claude, Cursor, etc.).
* Supports modes: improve, technical, simplify, acceptance, ux-reviewer
*/
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { resolveModelString } from '@automaker/model-resolver';
import {
CLAUDE_MODEL_MAP,
isCursorModel,
stripProviderPrefix,
ThinkingLevel,
getThinkingTokenBudget,
} from '@automaker/types';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import { CLAUDE_MODEL_MAP, type ThinkingLevel } from '@automaker/types';
import { simpleQuery } from '../../../providers/simple-query-service.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getPromptCustomization } from '../../../lib/settings-helpers.js';
import { getPromptCustomization, getProviderByModelId } from '../../../lib/settings-helpers.js';
import {
buildUserPrompt,
isValidEnhancementMode,
@@ -37,8 +31,10 @@ interface EnhanceRequestBody {
enhancementMode: string;
/** Optional model override */
model?: string;
/** Optional thinking level for Claude models (ignored for Cursor models) */
/** Optional thinking level for Claude models */
thinkingLevel?: ThinkingLevel;
/** Optional project path for per-project Claude API profile */
projectPath?: string;
}
/**
@@ -57,76 +53,6 @@ interface EnhanceErrorResponse {
error: string;
}
/**
* Extract text content from Claude SDK response messages
*
* @param stream - The async iterable from the query function
* @returns The extracted text content
*/
async function extractTextFromStream(
stream: AsyncIterable<{
type: string;
subtype?: string;
result?: string;
message?: {
content?: Array<{ type: string; text?: string }>;
};
}>
): Promise<string> {
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
return responseText;
}
/**
* Execute enhancement using Cursor provider
*
* @param prompt - The enhancement prompt
* @param model - The Cursor model to use
* @returns The enhanced text
*/
async function executeWithCursor(prompt: string, model: string): Promise<string> {
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
let responseText = '';
for await (const msg of provider.executeQuery({
prompt,
model: bareModel,
cwd: process.cwd(), // Enhancement doesn't need a specific working directory
readOnly: true, // Prompt enhancement only generates text, doesn't write files
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
}
}
return responseText;
}
/**
* Create the enhance request handler
*
@@ -138,7 +64,7 @@ export function createEnhanceHandler(
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
try {
const { originalText, enhancementMode, model, thinkingLevel } =
const { originalText, enhancementMode, model, thinkingLevel, projectPath } =
req.body as EnhanceRequestBody;
// Validate required fields
@@ -188,54 +114,60 @@ export function createEnhanceHandler(
technical: prompts.enhancement.technicalSystemPrompt,
simplify: prompts.enhancement.simplifySystemPrompt,
acceptance: prompts.enhancement.acceptanceSystemPrompt,
'ux-reviewer': prompts.enhancement.uxReviewerSystemPrompt,
};
const systemPrompt = systemPromptMap[validMode];
logger.debug(`Using ${validMode} system prompt (length: ${systemPrompt.length} chars)`);
// Build the user prompt with few-shot examples
// This helps the model understand this is text transformation, not a coding task
const userPrompt = buildUserPrompt(validMode, trimmedText, true);
// Resolve the model - use the passed model, default to sonnet for quality
const resolvedModel = resolveModelString(model, CLAUDE_MODEL_MAP.sonnet);
// Check if the model is a provider model (like "GLM-4.5-Air")
// If so, get the provider config and resolved Claude model
let claudeCompatibleProvider: import('@automaker/types').ClaudeCompatibleProvider | undefined;
let providerResolvedModel: string | undefined;
let credentials = await settingsService?.getCredentials();
if (model && settingsService) {
const providerResult = await getProviderByModelId(
model,
settingsService,
'[EnhancePrompt]'
);
if (providerResult.provider) {
claudeCompatibleProvider = providerResult.provider;
providerResolvedModel = providerResult.resolvedModel;
credentials = providerResult.credentials;
logger.info(
`Using provider "${providerResult.provider.name}" for model "${model}"` +
(providerResolvedModel ? ` -> resolved to "${providerResolvedModel}"` : '')
);
}
}
// Resolve the model - use provider resolved model, passed model, or default to sonnet
const resolvedModel =
providerResolvedModel || resolveModelString(model, CLAUDE_MODEL_MAP.sonnet);
logger.debug(`Using model: ${resolvedModel}`);
let enhancedText: string;
// Use simpleQuery - provider abstraction handles routing to correct provider
// The system prompt is combined with user prompt since some providers
// don't have a separate system prompt concept
const result = await simpleQuery({
prompt: `${systemPrompt}\n\n${userPrompt}`,
model: resolvedModel,
cwd: process.cwd(), // Enhancement doesn't need a specific working directory
maxTurns: 1,
allowedTools: [],
thinkingLevel,
readOnly: true, // Prompt enhancement only generates text, doesn't write files
credentials, // Pass credentials for resolving 'credentials' apiKeySource
claudeCompatibleProvider, // Pass provider for alternative endpoint configuration
});
// Route to appropriate provider based on model
if (isCursorModel(resolvedModel)) {
// Use Cursor provider for Cursor models
logger.info(`Using Cursor provider for model: ${resolvedModel}`);
// Cursor doesn't have a separate system prompt concept, so combine them
const combinedPrompt = `${systemPrompt}\n\n${userPrompt}`;
enhancedText = await executeWithCursor(combinedPrompt, resolvedModel);
} else {
// Use Claude SDK for Claude models
logger.info(`Using Claude provider for model: ${resolvedModel}`);
// Convert thinkingLevel to maxThinkingTokens for SDK
const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
const queryOptions: Parameters<typeof query>[0]['options'] = {
model: resolvedModel,
systemPrompt,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
};
if (maxThinkingTokens) {
queryOptions.maxThinkingTokens = maxThinkingTokens;
}
const stream = query({
prompt: userPrompt,
options: queryOptions,
});
enhancedText = await extractTextFromStream(stream);
}
const enhancedText = result.text;
if (!enhancedText || enhancedText.trim().length === 0) {
logger.warn('Received empty response from AI');

View File

@@ -0,0 +1,19 @@
/**
* Common utilities for event history routes
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
/** Logger instance for event history operations */
export const logger = createLogger('EventHistory');
/**
* Extract user-friendly error message from error objects
*/
export { getErrorMessageShared as getErrorMessage };
/**
* Log error with automatic logger binding
*/
export const logError = createLogError(logger);

View File

@@ -0,0 +1,68 @@
/**
* Event History routes - HTTP API for event history management
*
* Provides endpoints for:
* - Listing events with filtering
* - Getting individual event details
* - Deleting events
* - Clearing all events
* - Replaying events to test hooks
*
* Mounted at /api/event-history in the main server.
*/
import { Router } from 'express';
import type { EventHistoryService } from '../../services/event-history-service.js';
import type { SettingsService } from '../../services/settings-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createListHandler } from './routes/list.js';
import { createGetHandler } from './routes/get.js';
import { createDeleteHandler } from './routes/delete.js';
import { createClearHandler } from './routes/clear.js';
import { createReplayHandler } from './routes/replay.js';
/**
* Create event history router with all endpoints
*
* Endpoints:
* - POST /list - List events with optional filtering
* - POST /get - Get a single event by ID
* - POST /delete - Delete an event by ID
* - POST /clear - Clear all events for a project
* - POST /replay - Replay an event to trigger hooks
*
* @param eventHistoryService - Instance of EventHistoryService
* @param settingsService - Instance of SettingsService (for replay)
* @returns Express Router configured with all event history endpoints
*/
export function createEventHistoryRoutes(
eventHistoryService: EventHistoryService,
settingsService: SettingsService
): Router {
const router = Router();
// List events with filtering
router.post('/list', validatePathParams('projectPath'), createListHandler(eventHistoryService));
// Get single event
router.post('/get', validatePathParams('projectPath'), createGetHandler(eventHistoryService));
// Delete event
router.post(
'/delete',
validatePathParams('projectPath'),
createDeleteHandler(eventHistoryService)
);
// Clear all events
router.post('/clear', validatePathParams('projectPath'), createClearHandler(eventHistoryService));
// Replay event
router.post(
'/replay',
validatePathParams('projectPath'),
createReplayHandler(eventHistoryService, settingsService)
);
return router;
}

View File

@@ -0,0 +1,33 @@
/**
* POST /api/event-history/clear - Clear all events for a project
*
* Request body: { projectPath: string }
* Response: { success: true, cleared: number }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createClearHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body as { projectPath: string };
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const cleared = await eventHistoryService.clearEvents(projectPath);
res.json({
success: true,
cleared,
});
} catch (error) {
logError(error, 'Clear events failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,43 @@
/**
* POST /api/event-history/delete - Delete an event by ID
*
* Request body: { projectPath: string, eventId: string }
* Response: { success: true } or { success: false, error: string }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createDeleteHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, eventId } = req.body as {
projectPath: string;
eventId: string;
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!eventId || typeof eventId !== 'string') {
res.status(400).json({ success: false, error: 'eventId is required' });
return;
}
const deleted = await eventHistoryService.deleteEvent(projectPath, eventId);
if (!deleted) {
res.status(404).json({ success: false, error: 'Event not found' });
return;
}
res.json({ success: true });
} catch (error) {
logError(error, 'Delete event failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,46 @@
/**
* POST /api/event-history/get - Get a single event by ID
*
* Request body: { projectPath: string, eventId: string }
* Response: { success: true, event: StoredEvent } or { success: false, error: string }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createGetHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, eventId } = req.body as {
projectPath: string;
eventId: string;
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!eventId || typeof eventId !== 'string') {
res.status(400).json({ success: false, error: 'eventId is required' });
return;
}
const event = await eventHistoryService.getEvent(projectPath, eventId);
if (!event) {
res.status(404).json({ success: false, error: 'Event not found' });
return;
}
res.json({
success: true,
event,
});
} catch (error) {
logError(error, 'Get event failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,53 @@
/**
* POST /api/event-history/list - List events for a project
*
* Request body: {
* projectPath: string,
* filter?: {
* trigger?: EventHookTrigger,
* featureId?: string,
* since?: string,
* until?: string,
* limit?: number,
* offset?: number
* }
* }
* Response: { success: true, events: StoredEventSummary[], total: number }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import type { EventHistoryFilter } from '@automaker/types';
import { getErrorMessage, logError } from '../common.js';
export function createListHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, filter } = req.body as {
projectPath: string;
filter?: EventHistoryFilter;
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const events = await eventHistoryService.getEvents(projectPath, filter);
const total = await eventHistoryService.getEventCount(projectPath, {
...filter,
limit: undefined,
offset: undefined,
});
res.json({
success: true,
events,
total,
});
} catch (error) {
logError(error, 'List events failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,234 @@
/**
* POST /api/event-history/replay - Replay an event to trigger hooks
*
* Request body: {
* projectPath: string,
* eventId: string,
* hookIds?: string[] // Optional: specific hooks to run (if not provided, runs all enabled matching hooks)
* }
* Response: { success: true, result: EventReplayResult }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import type { SettingsService } from '../../../services/settings-service.js';
import type { EventReplayResult, EventReplayHookResult, EventHook } from '@automaker/types';
import { exec } from 'child_process';
import { promisify } from 'util';
import { getErrorMessage, logError, logger } from '../common.js';
const execAsync = promisify(exec);
/** Default timeout for shell commands (30 seconds) */
const DEFAULT_SHELL_TIMEOUT = 30000;
/** Default timeout for HTTP requests (10 seconds) */
const DEFAULT_HTTP_TIMEOUT = 10000;
interface HookContext {
featureId?: string;
featureName?: string;
projectPath?: string;
projectName?: string;
error?: string;
errorType?: string;
timestamp: string;
eventType: string;
}
/**
* Substitute {{variable}} placeholders in a string
*/
function substituteVariables(template: string, context: HookContext): string {
return template.replace(/\{\{(\w+)\}\}/g, (match, variable) => {
const value = context[variable as keyof HookContext];
if (value === undefined || value === null) {
return '';
}
return String(value);
});
}
/**
* Execute a single hook and return the result
*/
async function executeHook(hook: EventHook, context: HookContext): Promise<EventReplayHookResult> {
const hookName = hook.name || hook.id;
const startTime = Date.now();
try {
if (hook.action.type === 'shell') {
const command = substituteVariables(hook.action.command, context);
const timeout = hook.action.timeout || DEFAULT_SHELL_TIMEOUT;
logger.info(`Replaying shell hook "${hookName}": ${command}`);
await execAsync(command, {
timeout,
maxBuffer: 1024 * 1024,
});
return {
hookId: hook.id,
hookName: hook.name,
success: true,
durationMs: Date.now() - startTime,
};
} else if (hook.action.type === 'http') {
const url = substituteVariables(hook.action.url, context);
const method = hook.action.method || 'POST';
const headers: Record<string, string> = {
'Content-Type': 'application/json',
};
if (hook.action.headers) {
for (const [key, value] of Object.entries(hook.action.headers)) {
headers[key] = substituteVariables(value, context);
}
}
let body: string | undefined;
if (hook.action.body) {
body = substituteVariables(hook.action.body, context);
} else if (method !== 'GET') {
body = JSON.stringify({
eventType: context.eventType,
timestamp: context.timestamp,
featureId: context.featureId,
projectPath: context.projectPath,
projectName: context.projectName,
error: context.error,
});
}
logger.info(`Replaying HTTP hook "${hookName}": ${method} ${url}`);
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), DEFAULT_HTTP_TIMEOUT);
const response = await fetch(url, {
method,
headers,
body: method !== 'GET' ? body : undefined,
signal: controller.signal,
});
clearTimeout(timeoutId);
if (!response.ok) {
return {
hookId: hook.id,
hookName: hook.name,
success: false,
error: `HTTP ${response.status}: ${response.statusText}`,
durationMs: Date.now() - startTime,
};
}
return {
hookId: hook.id,
hookName: hook.name,
success: true,
durationMs: Date.now() - startTime,
};
}
return {
hookId: hook.id,
hookName: hook.name,
success: false,
error: 'Unknown hook action type',
durationMs: Date.now() - startTime,
};
} catch (error) {
const errorMessage =
error instanceof Error
? error.name === 'AbortError'
? 'Request timed out'
: error.message
: String(error);
return {
hookId: hook.id,
hookName: hook.name,
success: false,
error: errorMessage,
durationMs: Date.now() - startTime,
};
}
}
export function createReplayHandler(
eventHistoryService: EventHistoryService,
settingsService: SettingsService
) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, eventId, hookIds } = req.body as {
projectPath: string;
eventId: string;
hookIds?: string[];
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!eventId || typeof eventId !== 'string') {
res.status(400).json({ success: false, error: 'eventId is required' });
return;
}
// Get the event
const event = await eventHistoryService.getEvent(projectPath, eventId);
if (!event) {
res.status(404).json({ success: false, error: 'Event not found' });
return;
}
// Get hooks from settings
const settings = await settingsService.getGlobalSettings();
let hooks = settings.eventHooks || [];
// Filter to matching trigger and enabled hooks
hooks = hooks.filter((h) => h.enabled && h.trigger === event.trigger);
// If specific hook IDs requested, filter to those
if (hookIds && hookIds.length > 0) {
hooks = hooks.filter((h) => hookIds.includes(h.id));
}
// Build context for variable substitution
const context: HookContext = {
featureId: event.featureId,
featureName: event.featureName,
projectPath: event.projectPath,
projectName: event.projectName,
error: event.error,
errorType: event.errorType,
timestamp: event.timestamp,
eventType: event.trigger,
};
// Execute all hooks in parallel
const hookResults = await Promise.all(hooks.map((hook) => executeHook(hook, context)));
const result: EventReplayResult = {
eventId,
hooksTriggered: hooks.length,
hookResults,
};
logger.info(`Replayed event ${eventId}: ${hooks.length} hooks triggered`);
res.json({
success: true,
result,
});
} catch (error) {
logError(error, 'Replay event failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -4,32 +4,57 @@
import { Router } from 'express';
import { FeatureLoader } from '../../services/feature-loader.js';
import type { SettingsService } from '../../services/settings-service.js';
import type { EventEmitter } from '../../lib/events.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createListHandler } from './routes/list.js';
import { createGetHandler } from './routes/get.js';
import { createCreateHandler } from './routes/create.js';
import { createUpdateHandler } from './routes/update.js';
import { createBulkUpdateHandler } from './routes/bulk-update.js';
import { createBulkDeleteHandler } from './routes/bulk-delete.js';
import { createDeleteHandler } from './routes/delete.js';
import { createAgentOutputHandler, createRawOutputHandler } from './routes/agent-output.js';
import { createGenerateTitleHandler } from './routes/generate-title.js';
import { createExportHandler } from './routes/export.js';
import { createImportHandler, createConflictCheckHandler } from './routes/import.js';
export function createFeaturesRoutes(featureLoader: FeatureLoader): Router {
export function createFeaturesRoutes(
featureLoader: FeatureLoader,
settingsService?: SettingsService,
events?: EventEmitter
): Router {
const router = Router();
router.post('/list', validatePathParams('projectPath'), createListHandler(featureLoader));
router.post('/get', validatePathParams('projectPath'), createGetHandler(featureLoader));
router.post('/create', validatePathParams('projectPath'), createCreateHandler(featureLoader));
router.post(
'/create',
validatePathParams('projectPath'),
createCreateHandler(featureLoader, events)
);
router.post('/update', validatePathParams('projectPath'), createUpdateHandler(featureLoader));
router.post(
'/bulk-update',
validatePathParams('projectPath'),
createBulkUpdateHandler(featureLoader)
);
router.post(
'/bulk-delete',
validatePathParams('projectPath'),
createBulkDeleteHandler(featureLoader)
);
router.post('/delete', validatePathParams('projectPath'), createDeleteHandler(featureLoader));
router.post('/agent-output', createAgentOutputHandler(featureLoader));
router.post('/raw-output', createRawOutputHandler(featureLoader));
router.post('/generate-title', createGenerateTitleHandler());
router.post('/generate-title', createGenerateTitleHandler(settingsService));
router.post('/export', validatePathParams('projectPath'), createExportHandler(featureLoader));
router.post('/import', validatePathParams('projectPath'), createImportHandler(featureLoader));
router.post(
'/check-conflicts',
validatePathParams('projectPath'),
createConflictCheckHandler(featureLoader)
);
return router;
}

View File

@@ -0,0 +1,69 @@
/**
* POST /bulk-delete endpoint - Delete multiple features at once
*/
import type { Request, Response } from 'express';
import { FeatureLoader } from '../../../services/feature-loader.js';
import { getErrorMessage, logError } from '../common.js';
interface BulkDeleteRequest {
projectPath: string;
featureIds: string[];
}
interface BulkDeleteResult {
featureId: string;
success: boolean;
error?: string;
}
export function createBulkDeleteHandler(featureLoader: FeatureLoader) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, featureIds } = req.body as BulkDeleteRequest;
if (!projectPath || !featureIds || !Array.isArray(featureIds) || featureIds.length === 0) {
res.status(400).json({
success: false,
error: 'projectPath and featureIds (non-empty array) are required',
});
return;
}
// Process in parallel batches of 20 for efficiency
const BATCH_SIZE = 20;
const results: BulkDeleteResult[] = [];
for (let i = 0; i < featureIds.length; i += BATCH_SIZE) {
const batch = featureIds.slice(i, i + BATCH_SIZE);
const batchResults = await Promise.all(
batch.map(async (featureId) => {
const success = await featureLoader.delete(projectPath, featureId);
if (success) {
return { featureId, success: true };
}
return {
featureId,
success: false,
error: 'Deletion failed. Check server logs for details.',
};
})
);
results.push(...batchResults);
}
const successCount = results.reduce((count, r) => count + (r.success ? 1 : 0), 0);
const failureCount = results.length - successCount;
res.json({
success: failureCount === 0,
deletedCount: successCount,
failedCount: failureCount,
results,
});
} catch (error) {
logError(error, 'Bulk delete features failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -43,17 +43,36 @@ export function createBulkUpdateHandler(featureLoader: FeatureLoader) {
const results: BulkUpdateResult[] = [];
const updatedFeatures: Feature[] = [];
for (const featureId of featureIds) {
try {
const updated = await featureLoader.update(projectPath, featureId, updates);
results.push({ featureId, success: true });
updatedFeatures.push(updated);
} catch (error) {
results.push({
featureId,
success: false,
error: getErrorMessage(error),
});
// Process in parallel batches of 20 for efficiency
const BATCH_SIZE = 20;
for (let i = 0; i < featureIds.length; i += BATCH_SIZE) {
const batch = featureIds.slice(i, i + BATCH_SIZE);
const batchResults = await Promise.all(
batch.map(async (featureId) => {
try {
const updated = await featureLoader.update(projectPath, featureId, updates);
return { featureId, success: true as const, feature: updated };
} catch (error) {
return {
featureId,
success: false as const,
error: getErrorMessage(error),
};
}
})
);
for (const result of batchResults) {
if (result.success) {
results.push({ featureId: result.featureId, success: true });
updatedFeatures.push(result.feature);
} else {
results.push({
featureId: result.featureId,
success: false,
error: result.error,
});
}
}
}

View File

@@ -4,10 +4,11 @@
import type { Request, Response } from 'express';
import { FeatureLoader } from '../../../services/feature-loader.js';
import type { EventEmitter } from '../../../lib/events.js';
import type { Feature } from '@automaker/types';
import { getErrorMessage, logError } from '../common.js';
export function createCreateHandler(featureLoader: FeatureLoader) {
export function createCreateHandler(featureLoader: FeatureLoader, events?: EventEmitter) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, feature } = req.body as {
@@ -23,7 +24,30 @@ export function createCreateHandler(featureLoader: FeatureLoader) {
return;
}
// Check for duplicate title if title is provided
if (feature.title && feature.title.trim()) {
const duplicate = await featureLoader.findDuplicateTitle(projectPath, feature.title);
if (duplicate) {
res.status(409).json({
success: false,
error: `A feature with title "${feature.title}" already exists`,
duplicateFeatureId: duplicate.id,
});
return;
}
}
const created = await featureLoader.create(projectPath, feature);
// Emit feature_created event for hooks
if (events) {
events.emit('feature:created', {
featureId: created.id,
featureName: created.name,
projectPath,
});
}
res.json({ success: true, feature: created });
} catch (error) {
logError(error, 'Create feature failed');

View File

@@ -0,0 +1,96 @@
/**
* POST /export endpoint - Export features to JSON or YAML format
*/
import type { Request, Response } from 'express';
import type { FeatureLoader } from '../../../services/feature-loader.js';
import {
getFeatureExportService,
type ExportFormat,
type BulkExportOptions,
} from '../../../services/feature-export-service.js';
import { getErrorMessage, logError } from '../common.js';
interface ExportRequest {
projectPath: string;
/** Feature IDs to export. If empty/undefined, exports all features */
featureIds?: string[];
/** Export format: 'json' or 'yaml' */
format?: ExportFormat;
/** Whether to include description history */
includeHistory?: boolean;
/** Whether to include plan spec */
includePlanSpec?: boolean;
/** Filter by category */
category?: string;
/** Filter by status */
status?: string;
/** Pretty print output */
prettyPrint?: boolean;
/** Optional metadata to include */
metadata?: {
projectName?: string;
projectPath?: string;
branch?: string;
[key: string]: unknown;
};
}
export function createExportHandler(featureLoader: FeatureLoader) {
const exportService = getFeatureExportService();
return async (req: Request, res: Response): Promise<void> => {
try {
const {
projectPath,
featureIds,
format = 'json',
includeHistory = true,
includePlanSpec = true,
category,
status,
prettyPrint = true,
metadata,
} = req.body as ExportRequest;
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// Validate format
if (format !== 'json' && format !== 'yaml') {
res.status(400).json({
success: false,
error: 'format must be "json" or "yaml"',
});
return;
}
const options: BulkExportOptions = {
format,
includeHistory,
includePlanSpec,
category,
status,
featureIds,
prettyPrint,
metadata,
};
const exportData = await exportService.exportFeatures(projectPath, options);
// Return the export data as a string in the response
res.json({
success: true,
data: exportData,
format,
contentType: format === 'json' ? 'application/json' : 'application/x-yaml',
filename: `features-export.${format === 'json' ? 'json' : 'yaml'}`,
});
} catch (error) {
logError(error, 'Export features failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -1,18 +1,22 @@
/**
* POST /features/generate-title endpoint - Generate a concise title from description
*
* Uses Claude Haiku to generate a short, descriptive title from feature description.
* Uses the provider abstraction to generate a short, descriptive title
* from a feature description. Works with any configured provider (Claude, Cursor, etc.).
*/
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { CLAUDE_MODEL_MAP } from '@automaker/model-resolver';
import { simpleQuery } from '../../../providers/simple-query-service.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getPromptCustomization } from '../../../lib/settings-helpers.js';
const logger = createLogger('GenerateTitle');
interface GenerateTitleRequestBody {
description: string;
projectPath?: string;
}
interface GenerateTitleSuccessResponse {
@@ -25,46 +29,12 @@ interface GenerateTitleErrorResponse {
error: string;
}
const SYSTEM_PROMPT = `You are a title generator. Your task is to create a concise, descriptive title (5-10 words max) for a software feature based on its description.
Rules:
- Output ONLY the title, nothing else
- Keep it short and action-oriented (e.g., "Add dark mode toggle", "Fix login validation")
- Start with a verb when possible (Add, Fix, Update, Implement, Create, etc.)
- No quotes, periods, or extra formatting
- Capture the essence of the feature in a scannable way`;
async function extractTextFromStream(
stream: AsyncIterable<{
type: string;
subtype?: string;
result?: string;
message?: {
content?: Array<{ type: string; text?: string }>;
};
}>
): Promise<string> {
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
return responseText;
}
export function createGenerateTitleHandler(): (req: Request, res: Response) => Promise<void> {
export function createGenerateTitleHandler(
settingsService?: SettingsService
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
try {
const { description } = req.body as GenerateTitleRequestBody;
const { description, projectPath } = req.body as GenerateTitleRequestBody;
if (!description || typeof description !== 'string') {
const response: GenerateTitleErrorResponse = {
@@ -87,23 +57,29 @@ export function createGenerateTitleHandler(): (req: Request, res: Response) => P
logger.info(`Generating title for description: ${trimmedDescription.substring(0, 50)}...`);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[GenerateTitle]');
const systemPrompt = prompts.titleGeneration.systemPrompt;
// Get credentials for API calls (uses hardcoded haiku model, no phase setting)
const credentials = await settingsService?.getCredentials();
const userPrompt = `Generate a concise title for this feature:\n\n${trimmedDescription}`;
const stream = query({
prompt: userPrompt,
options: {
model: CLAUDE_MODEL_MAP.haiku,
systemPrompt: SYSTEM_PROMPT,
maxTurns: 1,
allowedTools: [],
permissionMode: 'default',
},
// Use simpleQuery - provider abstraction handles all the streaming/extraction
const result = await simpleQuery({
prompt: `${systemPrompt}\n\n${userPrompt}`,
model: CLAUDE_MODEL_MAP.haiku,
cwd: process.cwd(),
maxTurns: 1,
allowedTools: [],
credentials, // Pass credentials for resolving 'credentials' apiKeySource
});
const title = await extractTextFromStream(stream);
const title = result.text;
if (!title || title.trim().length === 0) {
logger.warn('Received empty response from Claude');
logger.warn('Received empty response from AI');
const response: GenerateTitleErrorResponse = {
success: false,
error: 'Failed to generate title - empty response',

View File

@@ -0,0 +1,210 @@
/**
* POST /import endpoint - Import features from JSON or YAML format
*/
import type { Request, Response } from 'express';
import type { FeatureLoader } from '../../../services/feature-loader.js';
import type { FeatureImportResult, Feature, FeatureExport } from '@automaker/types';
import { getFeatureExportService } from '../../../services/feature-export-service.js';
import { getErrorMessage, logError } from '../common.js';
interface ImportRequest {
projectPath: string;
/** Raw JSON or YAML string containing feature data */
data: string;
/** Whether to overwrite existing features with same ID */
overwrite?: boolean;
/** Whether to preserve branch info from imported features */
preserveBranchInfo?: boolean;
/** Optional category to assign to all imported features */
targetCategory?: string;
}
interface ConflictCheckRequest {
projectPath: string;
/** Raw JSON or YAML string containing feature data */
data: string;
}
interface ConflictInfo {
featureId: string;
title?: string;
existingTitle?: string;
hasConflict: boolean;
}
export function createImportHandler(featureLoader: FeatureLoader) {
const exportService = getFeatureExportService();
return async (req: Request, res: Response): Promise<void> => {
try {
const {
projectPath,
data,
overwrite = false,
preserveBranchInfo = false,
targetCategory,
} = req.body as ImportRequest;
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!data) {
res.status(400).json({ success: false, error: 'data is required' });
return;
}
// Detect format and parse the data
const format = exportService.detectFormat(data);
if (!format) {
res.status(400).json({
success: false,
error: 'Invalid data format. Expected valid JSON or YAML.',
});
return;
}
const parsed = exportService.parseImportData(data);
if (!parsed) {
res.status(400).json({
success: false,
error: 'Failed to parse import data. Ensure it is valid JSON or YAML.',
});
return;
}
// Determine if this is a single feature or bulk import
const isBulkImport =
'features' in parsed && Array.isArray((parsed as { features: unknown }).features);
let results: FeatureImportResult[];
if (isBulkImport) {
// Bulk import
results = await exportService.importFeatures(projectPath, data, {
overwrite,
preserveBranchInfo,
targetCategory,
});
} else {
// Single feature import - we know it's not a bulk export at this point
// It must be either a Feature or FeatureExport
const singleData = parsed as Feature | FeatureExport;
const result = await exportService.importFeature(projectPath, {
data: singleData,
overwrite,
preserveBranchInfo,
targetCategory,
});
results = [result];
}
const successCount = results.filter((r) => r.success).length;
const failureCount = results.filter((r) => !r.success).length;
const allSuccessful = failureCount === 0;
res.json({
success: allSuccessful,
importedCount: successCount,
failedCount: failureCount,
results,
});
} catch (error) {
logError(error, 'Import features failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
/**
* Create handler for checking conflicts before import
*/
export function createConflictCheckHandler(featureLoader: FeatureLoader) {
const exportService = getFeatureExportService();
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, data } = req.body as ConflictCheckRequest;
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!data) {
res.status(400).json({ success: false, error: 'data is required' });
return;
}
// Parse the import data
const format = exportService.detectFormat(data);
if (!format) {
res.status(400).json({
success: false,
error: 'Invalid data format. Expected valid JSON or YAML.',
});
return;
}
const parsed = exportService.parseImportData(data);
if (!parsed) {
res.status(400).json({
success: false,
error: 'Failed to parse import data.',
});
return;
}
// Extract features from the data using type guards
let featuresToCheck: Array<{ id: string; title?: string }> = [];
if (exportService.isBulkExport(parsed)) {
// Bulk export format
featuresToCheck = parsed.features.map((f) => ({
id: f.feature.id,
title: f.feature.title,
}));
} else if (exportService.isFeatureExport(parsed)) {
// Single FeatureExport format
featuresToCheck = [
{
id: parsed.feature.id,
title: parsed.feature.title,
},
];
} else if (exportService.isRawFeature(parsed)) {
// Raw Feature format
featuresToCheck = [{ id: parsed.id, title: parsed.title }];
}
// Check each feature for conflicts in parallel
const conflicts: ConflictInfo[] = await Promise.all(
featuresToCheck.map(async (feature) => {
const existing = await featureLoader.get(projectPath, feature.id);
return {
featureId: feature.id,
title: feature.title,
existingTitle: existing?.title,
hasConflict: !!existing,
};
})
);
const hasConflicts = conflicts.some((c) => c.hasConflict);
res.json({
success: true,
hasConflicts,
conflicts,
totalFeatures: featuresToCheck.length,
conflictCount: conflicts.filter((c) => c.hasConflict).length,
});
} catch (error) {
logError(error, 'Conflict check failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -4,20 +4,33 @@
import type { Request, Response } from 'express';
import { FeatureLoader } from '../../../services/feature-loader.js';
import type { Feature } from '@automaker/types';
import type { Feature, FeatureStatus } from '@automaker/types';
import { getErrorMessage, logError } from '../common.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('features/update');
// Statuses that should trigger syncing to app_spec.txt
const SYNC_TRIGGER_STATUSES: FeatureStatus[] = ['verified', 'completed'];
export function createUpdateHandler(featureLoader: FeatureLoader) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, featureId, updates, descriptionHistorySource, enhancementMode } =
req.body as {
projectPath: string;
featureId: string;
updates: Partial<Feature>;
descriptionHistorySource?: 'enhance' | 'edit';
enhancementMode?: 'improve' | 'technical' | 'simplify' | 'acceptance';
};
const {
projectPath,
featureId,
updates,
descriptionHistorySource,
enhancementMode,
preEnhancementDescription,
} = req.body as {
projectPath: string;
featureId: string;
updates: Partial<Feature>;
descriptionHistorySource?: 'enhance' | 'edit';
enhancementMode?: 'improve' | 'technical' | 'simplify' | 'acceptance' | 'ux-reviewer';
preEnhancementDescription?: string;
};
if (!projectPath || !featureId || !updates) {
res.status(400).json({
@@ -27,13 +40,52 @@ export function createUpdateHandler(featureLoader: FeatureLoader) {
return;
}
// Check for duplicate title if title is being updated
if (updates.title && updates.title.trim()) {
const duplicate = await featureLoader.findDuplicateTitle(
projectPath,
updates.title,
featureId // Exclude the current feature from duplicate check
);
if (duplicate) {
res.status(409).json({
success: false,
error: `A feature with title "${updates.title}" already exists`,
duplicateFeatureId: duplicate.id,
});
return;
}
}
// Get the current feature to detect status changes
const currentFeature = await featureLoader.get(projectPath, featureId);
const previousStatus = currentFeature?.status as FeatureStatus | undefined;
const newStatus = updates.status as FeatureStatus | undefined;
const updated = await featureLoader.update(
projectPath,
featureId,
updates,
descriptionHistorySource,
enhancementMode
enhancementMode,
preEnhancementDescription
);
// Trigger sync to app_spec.txt when status changes to verified or completed
if (newStatus && SYNC_TRIGGER_STATUSES.includes(newStatus) && previousStatus !== newStatus) {
try {
const synced = await featureLoader.syncFeatureToAppSpec(projectPath, updated);
if (synced) {
logger.info(
`Synced feature "${updated.title || updated.id}" to app_spec.txt on status change to ${newStatus}`
);
}
} catch (syncError) {
// Log the sync error but don't fail the update operation
logger.error(`Failed to sync feature to app_spec.txt:`, syncError);
}
}
res.json({ success: true, feature: updated });
} catch (error) {
logError(error, 'Update feature failed');

View File

@@ -1,5 +1,12 @@
/**
* GET /image endpoint - Serve image files
*
* Requires authentication via auth middleware:
* - apiKey query parameter (Electron mode)
* - token query parameter (web mode)
* - session cookie (web mode)
* - X-API-Key header (Electron mode)
* - X-Session-Token header (web mode)
*/
import type { Request, Response } from 'express';

View File

@@ -5,6 +5,43 @@
import type { Request, Response } from 'express';
import { execAsync, execEnv, getErrorMessage, logError } from './common.js';
const GIT_REMOTE_ORIGIN_COMMAND = 'git remote get-url origin';
const GH_REPO_VIEW_COMMAND = 'gh repo view --json name,owner';
const GITHUB_REPO_URL_PREFIX = 'https://github.com/';
const GITHUB_HTTPS_REMOTE_REGEX = /https:\/\/github\.com\/([^/]+)\/([^/.]+)/;
const GITHUB_SSH_REMOTE_REGEX = /git@github\.com:([^/]+)\/([^/.]+)/;
interface GhRepoViewResponse {
name?: string;
owner?: {
login?: string;
};
}
async function resolveRepoFromGh(projectPath: string): Promise<{
owner: string;
repo: string;
} | null> {
try {
const { stdout } = await execAsync(GH_REPO_VIEW_COMMAND, {
cwd: projectPath,
env: execEnv,
});
const data = JSON.parse(stdout) as GhRepoViewResponse;
const owner = typeof data.owner?.login === 'string' ? data.owner.login : null;
const repo = typeof data.name === 'string' ? data.name : null;
if (!owner || !repo) {
return null;
}
return { owner, repo };
} catch {
return null;
}
}
export interface GitHubRemoteStatus {
hasGitHubRemote: boolean;
remoteUrl: string | null;
@@ -21,19 +58,38 @@ export async function checkGitHubRemote(projectPath: string): Promise<GitHubRemo
};
try {
// Get the remote URL (origin by default)
const { stdout } = await execAsync('git remote get-url origin', {
cwd: projectPath,
env: execEnv,
});
let remoteUrl = '';
try {
// Get the remote URL (origin by default)
const { stdout } = await execAsync(GIT_REMOTE_ORIGIN_COMMAND, {
cwd: projectPath,
env: execEnv,
});
remoteUrl = stdout.trim();
status.remoteUrl = remoteUrl || null;
} catch {
// Ignore missing origin remote
}
const remoteUrl = stdout.trim();
status.remoteUrl = remoteUrl;
const ghRepo = await resolveRepoFromGh(projectPath);
if (ghRepo) {
status.hasGitHubRemote = true;
status.owner = ghRepo.owner;
status.repo = ghRepo.repo;
if (!status.remoteUrl) {
status.remoteUrl = `${GITHUB_REPO_URL_PREFIX}${ghRepo.owner}/${ghRepo.repo}`;
}
return status;
}
// Check if it's a GitHub URL
// Formats: https://github.com/owner/repo.git, git@github.com:owner/repo.git
const httpsMatch = remoteUrl.match(/https:\/\/github\.com\/([^/]+)\/([^/.]+)/);
const sshMatch = remoteUrl.match(/git@github\.com:([^/]+)\/([^/.]+)/);
if (!remoteUrl) {
return status;
}
const httpsMatch = remoteUrl.match(GITHUB_HTTPS_REMOTE_REGEX);
const sshMatch = remoteUrl.match(GITHUB_SSH_REMOTE_REGEX);
const match = httpsMatch || sshMatch;
if (match) {

View File

@@ -25,19 +25,24 @@ interface GraphQLComment {
updatedAt: string;
}
interface GraphQLCommentConnection {
totalCount: number;
pageInfo: {
hasNextPage: boolean;
endCursor: string | null;
};
nodes: GraphQLComment[];
}
interface GraphQLIssueOrPullRequest {
__typename: 'Issue' | 'PullRequest';
comments: GraphQLCommentConnection;
}
interface GraphQLResponse {
data?: {
repository?: {
issue?: {
comments: {
totalCount: number;
pageInfo: {
hasNextPage: boolean;
endCursor: string | null;
};
nodes: GraphQLComment[];
};
};
issueOrPullRequest?: GraphQLIssueOrPullRequest | null;
};
};
errors?: Array<{ message: string }>;
@@ -45,6 +50,7 @@ interface GraphQLResponse {
/** Timeout for GitHub API requests in milliseconds */
const GITHUB_API_TIMEOUT_MS = 30000;
const COMMENTS_PAGE_SIZE = 50;
/**
* Validate cursor format (GraphQL cursors are typically base64 strings)
@@ -54,7 +60,7 @@ function isValidCursor(cursor: string): boolean {
}
/**
* Fetch comments for a specific issue using GitHub GraphQL API
* Fetch comments for a specific issue or pull request using GitHub GraphQL API
*/
async function fetchIssueComments(
projectPath: string,
@@ -70,24 +76,52 @@ async function fetchIssueComments(
// Use GraphQL variables instead of string interpolation for safety
const query = `
query GetIssueComments($owner: String!, $repo: String!, $issueNumber: Int!, $cursor: String) {
query GetIssueComments(
$owner: String!
$repo: String!
$issueNumber: Int!
$cursor: String
$pageSize: Int!
) {
repository(owner: $owner, name: $repo) {
issue(number: $issueNumber) {
comments(first: 50, after: $cursor) {
totalCount
pageInfo {
hasNextPage
endCursor
}
nodes {
id
author {
login
avatarUrl
issueOrPullRequest(number: $issueNumber) {
__typename
... on Issue {
comments(first: $pageSize, after: $cursor) {
totalCount
pageInfo {
hasNextPage
endCursor
}
nodes {
id
author {
login
avatarUrl
}
body
createdAt
updatedAt
}
}
}
... on PullRequest {
comments(first: $pageSize, after: $cursor) {
totalCount
pageInfo {
hasNextPage
endCursor
}
nodes {
id
author {
login
avatarUrl
}
body
createdAt
updatedAt
}
body
createdAt
updatedAt
}
}
}
@@ -99,6 +133,7 @@ async function fetchIssueComments(
repo,
issueNumber,
cursor: cursor || null,
pageSize: COMMENTS_PAGE_SIZE,
};
const requestBody = JSON.stringify({ query, variables });
@@ -140,10 +175,10 @@ async function fetchIssueComments(
throw new Error(response.errors[0].message);
}
const commentsData = response.data?.repository?.issue?.comments;
const commentsData = response.data?.repository?.issueOrPullRequest?.comments;
if (!commentsData) {
throw new Error('Issue not found or no comments data available');
throw new Error('Issue or pull request not found or no comments data available');
}
const comments: GitHubComment[] = commentsData.nodes.map((node) => ({

View File

@@ -9,6 +9,17 @@ import { checkGitHubRemote } from './check-github-remote.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('ListIssues');
const OPEN_ISSUES_LIMIT = 100;
const CLOSED_ISSUES_LIMIT = 50;
const ISSUE_LIST_FIELDS = 'number,title,state,author,createdAt,labels,url,body,assignees';
const ISSUE_STATE_OPEN = 'open';
const ISSUE_STATE_CLOSED = 'closed';
const GH_ISSUE_LIST_COMMAND = 'gh issue list';
const GH_STATE_FLAG = '--state';
const GH_JSON_FLAG = '--json';
const GH_LIMIT_FLAG = '--limit';
const LINKED_PRS_BATCH_SIZE = 20;
const LINKED_PRS_TIMELINE_ITEMS = 10;
export interface GitHubLabel {
name: string;
@@ -69,34 +80,68 @@ async function fetchLinkedPRs(
// Build GraphQL query for batch fetching linked PRs
// We fetch up to 20 issues at a time to avoid query limits
const batchSize = 20;
for (let i = 0; i < issueNumbers.length; i += batchSize) {
const batch = issueNumbers.slice(i, i + batchSize);
for (let i = 0; i < issueNumbers.length; i += LINKED_PRS_BATCH_SIZE) {
const batch = issueNumbers.slice(i, i + LINKED_PRS_BATCH_SIZE);
const issueQueries = batch
.map(
(num, idx) => `
issue${idx}: issue(number: ${num}) {
number
timelineItems(first: 10, itemTypes: [CROSS_REFERENCED_EVENT, CONNECTED_EVENT]) {
nodes {
... on CrossReferencedEvent {
source {
... on PullRequest {
number
title
state
url
issue${idx}: issueOrPullRequest(number: ${num}) {
... on Issue {
number
timelineItems(
first: ${LINKED_PRS_TIMELINE_ITEMS}
itemTypes: [CROSS_REFERENCED_EVENT, CONNECTED_EVENT]
) {
nodes {
... on CrossReferencedEvent {
source {
... on PullRequest {
number
title
state
url
}
}
}
... on ConnectedEvent {
subject {
... on PullRequest {
number
title
state
url
}
}
}
}
... on ConnectedEvent {
subject {
... on PullRequest {
number
title
state
url
}
}
... on PullRequest {
number
timelineItems(
first: ${LINKED_PRS_TIMELINE_ITEMS}
itemTypes: [CROSS_REFERENCED_EVENT, CONNECTED_EVENT]
) {
nodes {
... on CrossReferencedEvent {
source {
... on PullRequest {
number
title
state
url
}
}
}
... on ConnectedEvent {
subject {
... on PullRequest {
number
title
state
url
}
}
}
}
@@ -213,16 +258,35 @@ export function createListIssuesHandler() {
}
// Fetch open and closed issues in parallel (now including assignees)
const repoQualifier =
remoteStatus.owner && remoteStatus.repo ? `${remoteStatus.owner}/${remoteStatus.repo}` : '';
const repoFlag = repoQualifier ? `-R ${repoQualifier}` : '';
const [openResult, closedResult] = await Promise.all([
execAsync(
'gh issue list --state open --json number,title,state,author,createdAt,labels,url,body,assignees --limit 100',
[
GH_ISSUE_LIST_COMMAND,
repoFlag,
`${GH_STATE_FLAG} ${ISSUE_STATE_OPEN}`,
`${GH_JSON_FLAG} ${ISSUE_LIST_FIELDS}`,
`${GH_LIMIT_FLAG} ${OPEN_ISSUES_LIMIT}`,
]
.filter(Boolean)
.join(' '),
{
cwd: projectPath,
env: execEnv,
}
),
execAsync(
'gh issue list --state closed --json number,title,state,author,createdAt,labels,url,body,assignees --limit 50',
[
GH_ISSUE_LIST_COMMAND,
repoFlag,
`${GH_STATE_FLAG} ${ISSUE_STATE_CLOSED}`,
`${GH_JSON_FLAG} ${ISSUE_LIST_FIELDS}`,
`${GH_LIMIT_FLAG} ${CLOSED_ISSUES_LIMIT}`,
]
.filter(Boolean)
.join(' '),
{
cwd: projectPath,
env: execEnv,

View File

@@ -6,6 +6,17 @@ import type { Request, Response } from 'express';
import { execAsync, execEnv, getErrorMessage, logError } from './common.js';
import { checkGitHubRemote } from './check-github-remote.js';
const OPEN_PRS_LIMIT = 100;
const MERGED_PRS_LIMIT = 50;
const PR_LIST_FIELDS =
'number,title,state,author,createdAt,labels,url,isDraft,headRefName,reviewDecision,mergeable,body';
const PR_STATE_OPEN = 'open';
const PR_STATE_MERGED = 'merged';
const GH_PR_LIST_COMMAND = 'gh pr list';
const GH_STATE_FLAG = '--state';
const GH_JSON_FLAG = '--json';
const GH_LIMIT_FLAG = '--limit';
export interface GitHubLabel {
name: string;
color: string;
@@ -57,16 +68,36 @@ export function createListPRsHandler() {
return;
}
const repoQualifier =
remoteStatus.owner && remoteStatus.repo ? `${remoteStatus.owner}/${remoteStatus.repo}` : '';
const repoFlag = repoQualifier ? `-R ${repoQualifier}` : '';
const [openResult, mergedResult] = await Promise.all([
execAsync(
'gh pr list --state open --json number,title,state,author,createdAt,labels,url,isDraft,headRefName,reviewDecision,mergeable,body --limit 100',
[
GH_PR_LIST_COMMAND,
repoFlag,
`${GH_STATE_FLAG} ${PR_STATE_OPEN}`,
`${GH_JSON_FLAG} ${PR_LIST_FIELDS}`,
`${GH_LIMIT_FLAG} ${OPEN_PRS_LIMIT}`,
]
.filter(Boolean)
.join(' '),
{
cwd: projectPath,
env: execEnv,
}
),
execAsync(
'gh pr list --state merged --json number,title,state,author,createdAt,labels,url,isDraft,headRefName,reviewDecision,mergeable,body --limit 50',
[
GH_PR_LIST_COMMAND,
repoFlag,
`${GH_STATE_FLAG} ${PR_STATE_MERGED}`,
`${GH_JSON_FLAG} ${PR_LIST_FIELDS}`,
`${GH_LIMIT_FLAG} ${MERGED_PRS_LIMIT}`,
]
.filter(Boolean)
.join(' '),
{
cwd: projectPath,
env: execEnv,

View File

@@ -1,36 +1,44 @@
/**
* POST /validate-issue endpoint - Validate a GitHub issue using Claude SDK or Cursor (async)
* POST /validate-issue endpoint - Validate a GitHub issue using provider abstraction (async)
*
* Scans the codebase to determine if an issue is valid, invalid, or needs clarification.
* Runs asynchronously and emits events for progress and completion.
* Supports both Claude models and Cursor models.
* Supports Claude, Codex, Cursor, and OpenCode models.
*/
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../../lib/events.js';
import type {
IssueValidationResult,
IssueValidationEvent,
ModelAlias,
CursorModelId,
ModelId,
GitHubComment,
LinkedPRInfo,
ThinkingLevel,
ReasoningEffort,
} from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isClaudeModel,
isCodexModel,
isCursorModel,
isOpencodeModel,
} from '@automaker/types';
import { isCursorModel, DEFAULT_PHASE_MODELS, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSuggestionsOptions } from '../../../lib/sdk-options.js';
import { extractJson } from '../../../lib/json-extractor.js';
import { writeValidation } from '../../../lib/validation-storage.js';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import { streamingQuery } from '../../../providers/simple-query-service.js';
import {
issueValidationSchema,
ISSUE_VALIDATION_SYSTEM_PROMPT,
buildValidationPrompt,
ValidationComment,
ValidationLinkedPR,
} from './validation-schema.js';
import {
getPromptCustomization,
getAutoLoadClaudeMdSetting,
getProviderByModelId,
} from '../../../lib/settings-helpers.js';
import {
trySetValidationRunning,
clearValidationStatus,
@@ -39,10 +47,6 @@ import {
logger,
} from './validation-common.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
/** Valid Claude model values for validation */
const VALID_CLAUDE_MODELS: readonly ModelAlias[] = ['opus', 'sonnet', 'haiku'] as const;
/**
* Request body for issue validation
@@ -53,10 +57,12 @@ interface ValidateIssueRequestBody {
issueTitle: string;
issueBody: string;
issueLabels?: string[];
/** Model to use for validation (opus, sonnet, haiku, or cursor model IDs) */
model?: ModelAlias | CursorModelId;
/** Thinking level for Claude models (ignored for Cursor models) */
/** Model to use for validation (Claude alias or provider model ID) */
model?: ModelId;
/** Thinking level for Claude models (ignored for non-Claude models) */
thinkingLevel?: ThinkingLevel;
/** Reasoning effort for Codex models (ignored for non-Codex models) */
reasoningEffort?: ReasoningEffort;
/** Comments to include in validation analysis */
comments?: GitHubComment[];
/** Linked pull requests for this issue */
@@ -68,7 +74,7 @@ interface ValidateIssueRequestBody {
*
* Emits events for start, progress, complete, and error.
* Stores result on completion.
* Supports both Claude models (with structured output) and Cursor models (with JSON parsing).
* Supports Claude/Codex models (structured output) and Cursor/OpenCode models (JSON parsing).
*/
async function runValidation(
projectPath: string,
@@ -76,13 +82,14 @@ async function runValidation(
issueTitle: string,
issueBody: string,
issueLabels: string[] | undefined,
model: ModelAlias | CursorModelId,
model: ModelId,
events: EventEmitter,
abortController: AbortController,
settingsService?: SettingsService,
comments?: ValidationComment[],
linkedPRs?: ValidationLinkedPR[],
thinkingLevel?: ThinkingLevel
thinkingLevel?: ThinkingLevel,
reasoningEffort?: ReasoningEffort
): Promise<void> {
// Emit start event
const startEvent: IssueValidationEvent = {
@@ -102,7 +109,7 @@ async function runValidation(
try {
// Build the prompt (include comments and linked PRs if provided)
const prompt = buildValidationPrompt(
const basePrompt = buildValidationPrompt(
issueNumber,
issueTitle,
issueBody,
@@ -111,20 +118,19 @@ async function runValidation(
linkedPRs
);
let validationResult: IssueValidationResult | null = null;
let responseText = '';
// Route to appropriate provider based on model
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info(`Using Cursor provider for validation with model: ${model}`);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[ValidateIssue]');
const issueValidationSystemPrompt = prompts.issueValidation.systemPrompt;
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Determine if we should use structured output (Claude/Codex support it, Cursor/OpenCode don't)
const useStructuredOutput = isClaudeModel(model) || isCodexModel(model);
// For Cursor, include the system prompt and schema in the user prompt
const cursorPrompt = `${ISSUE_VALIDATION_SYSTEM_PROMPT}
// Build the final prompt - for Cursor, include system prompt and JSON schema instructions
let finalPrompt = basePrompt;
if (!useStructuredOutput) {
finalPrompt = `${issueValidationSystemPrompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
@@ -135,121 +141,101 @@ ${JSON.stringify(issueValidationSchema, null, 2)}
Your entire response should be valid JSON starting with { and ending with }. No text before or after.
${prompt}`;
${basePrompt}`;
}
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd: projectPath,
readOnly: true, // Issue validation only reads code, doesn't write
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
// Load autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[ValidateIssue]'
);
// Emit progress event
const progressEvent: IssueValidationEvent = {
type: 'issue_validation_progress',
issueNumber,
content: block.text,
projectPath,
};
events.emit('issue-validation:event', progressEvent);
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
}
}
// Parse JSON from the response text using shared utility
if (responseText) {
validationResult = extractJson<IssueValidationResult>(responseText, { logger });
}
} else {
// Use Claude SDK for Claude models
logger.info(`Using Claude provider for validation with model: ${model}`);
// Load autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[ValidateIssue]'
);
// Use thinkingLevel from request if provided, otherwise fall back to settings
let effectiveThinkingLevel: ThinkingLevel | undefined = thinkingLevel;
// Use request overrides if provided, otherwise fall back to settings
let effectiveThinkingLevel: ThinkingLevel | undefined = thinkingLevel;
let effectiveReasoningEffort: ReasoningEffort | undefined = reasoningEffort;
if (!effectiveThinkingLevel || !effectiveReasoningEffort) {
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.validationModel || DEFAULT_PHASE_MODELS.validationModel;
const resolved = resolvePhaseModel(phaseModelEntry);
if (!effectiveThinkingLevel) {
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.validationModel || DEFAULT_PHASE_MODELS.validationModel;
const resolved = resolvePhaseModel(phaseModelEntry);
effectiveThinkingLevel = resolved.thinkingLevel;
}
// Create SDK options with structured output and abort controller
const options = createSuggestionsOptions({
cwd: projectPath,
model: model as ModelAlias,
systemPrompt: ISSUE_VALIDATION_SYSTEM_PROMPT,
abortController,
autoLoadClaudeMd,
thinkingLevel: effectiveThinkingLevel,
outputFormat: {
type: 'json_schema',
schema: issueValidationSchema as Record<string, unknown>,
},
});
// Execute the query
const stream = query({ prompt, options });
for await (const msg of stream) {
// Collect assistant text for debugging and emit progress
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
// Emit progress event
const progressEvent: IssueValidationEvent = {
type: 'issue_validation_progress',
issueNumber,
content: block.text,
projectPath,
};
events.emit('issue-validation:event', progressEvent);
}
}
}
// Extract structured output on success
if (msg.type === 'result' && msg.subtype === 'success') {
const resultMsg = msg as { structured_output?: IssueValidationResult };
if (resultMsg.structured_output) {
validationResult = resultMsg.structured_output;
logger.debug('Received structured output:', validationResult);
}
}
// Handle errors
if (msg.type === 'result') {
const resultMsg = msg as { subtype?: string };
if (resultMsg.subtype === 'error_max_structured_output_retries') {
logger.error('Failed to produce valid structured output after retries');
throw new Error('Could not produce valid validation output');
}
}
if (!effectiveReasoningEffort && typeof phaseModelEntry !== 'string') {
effectiveReasoningEffort = phaseModelEntry.reasoningEffort;
}
}
// Check if the model is a provider model (like "GLM-4.5-Air")
// If so, get the provider config and resolved Claude model
let claudeCompatibleProvider: import('@automaker/types').ClaudeCompatibleProvider | undefined;
let providerResolvedModel: string | undefined;
let credentials = await settingsService?.getCredentials();
if (settingsService) {
const providerResult = await getProviderByModelId(model, settingsService, '[ValidateIssue]');
if (providerResult.provider) {
claudeCompatibleProvider = providerResult.provider;
providerResolvedModel = providerResult.resolvedModel;
credentials = providerResult.credentials;
logger.info(
`Using provider "${providerResult.provider.name}" for model "${model}"` +
(providerResolvedModel ? ` -> resolved to "${providerResolvedModel}"` : '')
);
}
}
// Use provider resolved model if available, otherwise use original model
const effectiveModel = providerResolvedModel || (model as string);
logger.info(`Using model: ${effectiveModel}`);
// Use streamingQuery with event callbacks
const result = await streamingQuery({
prompt: finalPrompt,
model: effectiveModel,
cwd: projectPath,
systemPrompt: useStructuredOutput ? issueValidationSystemPrompt : undefined,
abortController,
thinkingLevel: effectiveThinkingLevel,
reasoningEffort: effectiveReasoningEffort,
readOnly: true, // Issue validation only reads code, doesn't write
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
claudeCompatibleProvider, // Pass provider for alternative endpoint configuration
credentials, // Pass credentials for resolving 'credentials' apiKeySource
outputFormat: useStructuredOutput
? {
type: 'json_schema',
schema: issueValidationSchema as Record<string, unknown>,
}
: undefined,
onText: (text) => {
responseText += text;
// Emit progress event
const progressEvent: IssueValidationEvent = {
type: 'issue_validation_progress',
issueNumber,
content: text,
projectPath,
};
events.emit('issue-validation:event', progressEvent);
},
});
// Clear timeout
clearTimeout(timeoutId);
// Get validation result from structured output or parse from text
let validationResult: IssueValidationResult | null = null;
if (result.structured_output) {
validationResult = result.structured_output as unknown as IssueValidationResult;
logger.debug('Received structured output:', validationResult);
} else if (responseText) {
// Parse JSON from response text
validationResult = extractJson<IssueValidationResult>(responseText, { logger });
}
// Require validation result
if (!validationResult) {
logger.error('No validation result received from AI provider');
@@ -299,7 +285,7 @@ ${prompt}`;
/**
* Creates the handler for validating GitHub issues against the codebase.
*
* Uses Claude SDK with:
* Uses the provider abstraction with:
* - Read-only tools (Read, Glob, Grep) for codebase analysis
* - JSON schema structured output for reliable parsing
* - System prompt guiding the validation process
@@ -319,6 +305,7 @@ export function createValidateIssueHandler(
issueLabels,
model = 'opus',
thinkingLevel,
reasoningEffort,
comments: rawComments,
linkedPRs: rawLinkedPRs,
} = req.body as ValidateIssueRequestBody;
@@ -366,14 +353,17 @@ export function createValidateIssueHandler(
return;
}
// Validate model parameter at runtime - accept Claude models or Cursor models
const isValidClaudeModel = VALID_CLAUDE_MODELS.includes(model as ModelAlias);
const isValidCursorModel = isCursorModel(model);
// Validate model parameter at runtime - accept any supported provider model
const isValidModel =
isClaudeModel(model) ||
isCursorModel(model) ||
isCodexModel(model) ||
isOpencodeModel(model);
if (!isValidClaudeModel && !isValidCursorModel) {
if (!isValidModel) {
res.status(400).json({
success: false,
error: `Invalid model. Must be one of: ${VALID_CLAUDE_MODELS.join(', ')}, or a Cursor model ID`,
error: 'Invalid model. Must be a Claude, Cursor, Codex, or OpenCode model ID (or alias).',
});
return;
}
@@ -404,7 +394,8 @@ export function createValidateIssueHandler(
settingsService,
validationComments,
validationLinkedPRs,
thinkingLevel
thinkingLevel,
reasoningEffort
)
.catch(() => {
// Error is already handled inside runValidation (event emitted)

View File

@@ -1,8 +1,11 @@
/**
* Issue Validation Schema and System Prompt
* Issue Validation Schema and Prompt Building
*
* Defines the JSON schema for Claude's structured output and
* the system prompt that guides the validation process.
* helper functions for building validation prompts.
*
* Note: The system prompt is now centralized in @automaker/prompts
* and accessed via getPromptCustomization() in validate-issue.ts
*/
/**
@@ -82,76 +85,6 @@ export const issueValidationSchema = {
additionalProperties: false,
} as const;
/**
* System prompt that guides Claude in validating GitHub issues.
* Instructs the model to use read-only tools to analyze the codebase.
*/
export const ISSUE_VALIDATION_SYSTEM_PROMPT = `You are an expert code analyst validating GitHub issues against a codebase.
Your task is to analyze a GitHub issue and determine if it's valid by scanning the codebase.
## Validation Process
1. **Read the issue carefully** - Understand what is being reported or requested
2. **Search the codebase** - Use Glob to find relevant files by pattern, Grep to search for keywords
3. **Examine the code** - Use Read to look at the actual implementation in relevant files
4. **Check linked PRs** - If there are linked pull requests, use \`gh pr diff <PR_NUMBER>\` to review the changes
5. **Form your verdict** - Based on your analysis, determine if the issue is valid
## Verdicts
- **valid**: The issue describes a real problem that exists in the codebase, or a clear feature request that can be implemented. The referenced files/components exist and the issue is actionable.
- **invalid**: The issue describes behavior that doesn't exist, references non-existent files or components, is based on a misunderstanding of the code, or the described "bug" is actually expected behavior.
- **needs_clarification**: The issue lacks sufficient detail to verify. Specify what additional information is needed in the missingInfo field.
## For Bug Reports, Check:
- Do the referenced files/components exist?
- Does the code match what the issue describes?
- Is the described behavior actually a bug or expected?
- Can you locate the code that would cause the reported issue?
## For Feature Requests, Check:
- Does the feature already exist?
- Is the implementation location clear?
- Is the request technically feasible given the codebase structure?
## Analyzing Linked Pull Requests
When an issue has linked PRs (especially open ones), you MUST analyze them:
1. **Run \`gh pr diff <PR_NUMBER>\`** to see what changes the PR makes
2. **Run \`gh pr view <PR_NUMBER>\`** to see PR description and status
3. **Evaluate if the PR fixes the issue** - Does the diff address the reported problem?
4. **Provide a recommendation**:
- \`wait_for_merge\`: The PR appears to fix the issue correctly. No additional work needed - just wait for it to be merged.
- \`pr_needs_work\`: The PR attempts to fix the issue but is incomplete or has problems.
- \`no_pr\`: No relevant PR exists for this issue.
5. **Include prAnalysis in your response** with:
- hasOpenPR: true/false
- prFixesIssue: true/false (based on diff analysis)
- prNumber: the PR number you analyzed
- prSummary: brief description of what the PR changes
- recommendation: one of the above values
## Response Guidelines
- **Always include relatedFiles** when you find relevant code
- **Set bugConfirmed to true** only if you can definitively confirm a bug exists in the code
- **Provide a suggestedFix** when you have a clear idea of how to address the issue
- **Use missingInfo** when the verdict is needs_clarification to list what's needed
- **Include prAnalysis** when there are linked PRs - this is critical for avoiding duplicate work
- **Set estimatedComplexity** to help prioritize:
- trivial: Simple text changes, one-line fixes
- simple: Small changes to one file
- moderate: Changes to multiple files or moderate logic changes
- complex: Significant refactoring or new feature implementation
- very_complex: Major architectural changes or cross-cutting concerns
Be thorough in your analysis but focus on files that are directly relevant to the issue.`;
/**
* Comment data structure for validation prompt
*/

View File

@@ -9,12 +9,14 @@ import type { Request, Response } from 'express';
export interface EnvironmentResponse {
isContainerized: boolean;
skipSandboxWarning?: boolean;
}
export function createEnvironmentHandler() {
return (_req: Request, res: Response): void => {
res.json({
isContainerized: process.env.IS_CONTAINERIZED === 'true',
skipSandboxWarning: process.env.AUTOMAKER_SKIP_SANDBOX_WARNING === 'true',
} satisfies EnvironmentResponse);
};
}

View File

@@ -0,0 +1,21 @@
/**
* Common utilities for notification routes
*
* Provides logger and error handling utilities shared across all notification endpoints.
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
/** Logger instance for notification-related operations */
export const logger = createLogger('Notifications');
/**
* Extract user-friendly error message from error objects
*/
export { getErrorMessageShared as getErrorMessage };
/**
* Log error with automatic logger binding
*/
export const logError = createLogError(logger);

View File

@@ -0,0 +1,62 @@
/**
* Notifications routes - HTTP API for project-level notifications
*
* Provides endpoints for:
* - Listing notifications
* - Getting unread count
* - Marking notifications as read
* - Dismissing notifications
*
* All endpoints use handler factories that receive the NotificationService instance.
* Mounted at /api/notifications in the main server.
*/
import { Router } from 'express';
import type { NotificationService } from '../../services/notification-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createListHandler } from './routes/list.js';
import { createUnreadCountHandler } from './routes/unread-count.js';
import { createMarkReadHandler } from './routes/mark-read.js';
import { createDismissHandler } from './routes/dismiss.js';
/**
* Create notifications router with all endpoints
*
* Endpoints:
* - POST /list - List all notifications for a project
* - POST /unread-count - Get unread notification count
* - POST /mark-read - Mark notification(s) as read
* - POST /dismiss - Dismiss notification(s)
*
* @param notificationService - Instance of NotificationService
* @returns Express Router configured with all notification endpoints
*/
export function createNotificationsRoutes(notificationService: NotificationService): Router {
const router = Router();
// List notifications
router.post('/list', validatePathParams('projectPath'), createListHandler(notificationService));
// Get unread count
router.post(
'/unread-count',
validatePathParams('projectPath'),
createUnreadCountHandler(notificationService)
);
// Mark as read (single or all)
router.post(
'/mark-read',
validatePathParams('projectPath'),
createMarkReadHandler(notificationService)
);
// Dismiss (single or all)
router.post(
'/dismiss',
validatePathParams('projectPath'),
createDismissHandler(notificationService)
);
return router;
}

View File

@@ -0,0 +1,53 @@
/**
* POST /api/notifications/dismiss - Dismiss notification(s)
*
* Request body: { projectPath: string, notificationId?: string }
* - If notificationId provided: dismisses that notification
* - If notificationId not provided: dismisses all notifications
*
* Response: { success: true, dismissed: boolean | count: number }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/dismiss
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createDismissHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, notificationId } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// If notificationId provided, dismiss single notification
if (notificationId) {
const dismissed = await notificationService.dismissNotification(
projectPath,
notificationId
);
if (!dismissed) {
res.status(404).json({ success: false, error: 'Notification not found' });
return;
}
res.json({ success: true, dismissed: true });
return;
}
// Otherwise dismiss all
const count = await notificationService.dismissAll(projectPath);
res.json({ success: true, count });
} catch (error) {
logError(error, 'Dismiss failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,39 @@
/**
* POST /api/notifications/list - List all notifications for a project
*
* Request body: { projectPath: string }
* Response: { success: true, notifications: Notification[] }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/list
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createListHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const notifications = await notificationService.getNotifications(projectPath);
res.json({
success: true,
notifications,
});
} catch (error) {
logError(error, 'List notifications failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,50 @@
/**
* POST /api/notifications/mark-read - Mark notification(s) as read
*
* Request body: { projectPath: string, notificationId?: string }
* - If notificationId provided: marks that notification as read
* - If notificationId not provided: marks all notifications as read
*
* Response: { success: true, count?: number, notification?: Notification }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/mark-read
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createMarkReadHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, notificationId } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// If notificationId provided, mark single notification
if (notificationId) {
const notification = await notificationService.markAsRead(projectPath, notificationId);
if (!notification) {
res.status(404).json({ success: false, error: 'Notification not found' });
return;
}
res.json({ success: true, notification });
return;
}
// Otherwise mark all as read
const count = await notificationService.markAllAsRead(projectPath);
res.json({ success: true, count });
} catch (error) {
logError(error, 'Mark read failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,39 @@
/**
* POST /api/notifications/unread-count - Get unread notification count
*
* Request body: { projectPath: string }
* Response: { success: true, count: number }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/unread-count
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createUnreadCountHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const count = await notificationService.getUnreadCount(projectPath);
res.json({
success: true,
count,
});
} catch (error) {
logError(error, 'Get unread count failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,143 @@
/**
* Provider Usage Routes
*
* API endpoints for fetching usage data from all AI providers.
*
* Endpoints:
* - GET /api/provider-usage - Get usage for all enabled providers
* - GET /api/provider-usage/:providerId - Get usage for a specific provider
* - GET /api/provider-usage/availability - Check availability of all providers
*/
import { Router, Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import type { UsageProviderId } from '@automaker/types';
import { ProviderUsageTracker } from '../../services/provider-usage-tracker.js';
const logger = createLogger('ProviderUsageRoutes');
// Valid provider IDs
const VALID_PROVIDER_IDS: UsageProviderId[] = [
'claude',
'codex',
'cursor',
'gemini',
'copilot',
'opencode',
'minimax',
'glm',
];
export function createProviderUsageRoutes(tracker: ProviderUsageTracker): Router {
const router = Router();
/**
* GET /api/provider-usage
* Fetch usage for all enabled providers
*/
router.get('/', async (req: Request, res: Response) => {
try {
const forceRefresh = req.query.refresh === 'true';
const usage = await tracker.fetchAllUsage(forceRefresh);
res.json({
success: true,
data: usage,
});
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
logger.error('Error fetching all provider usage:', error);
res.status(500).json({
success: false,
error: message,
});
}
});
/**
* GET /api/provider-usage/availability
* Check which providers are available
*/
router.get('/availability', async (_req: Request, res: Response) => {
try {
const availability = await tracker.checkAvailability();
res.json({
success: true,
data: availability,
});
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
logger.error('Error checking provider availability:', error);
res.status(500).json({
success: false,
error: message,
});
}
});
/**
* GET /api/provider-usage/:providerId
* Fetch usage for a specific provider
*/
router.get('/:providerId', async (req: Request, res: Response) => {
try {
const providerId = req.params.providerId as UsageProviderId;
// Validate provider ID
if (!VALID_PROVIDER_IDS.includes(providerId)) {
res.status(400).json({
success: false,
error: `Invalid provider ID: ${providerId}. Valid providers: ${VALID_PROVIDER_IDS.join(', ')}`,
});
return;
}
// Check if provider is enabled
if (!tracker.isProviderEnabled(providerId)) {
res.status(200).json({
success: true,
data: {
providerId,
providerName: providerId,
available: false,
lastUpdated: new Date().toISOString(),
error: 'Provider is disabled',
},
});
return;
}
const forceRefresh = req.query.refresh === 'true';
const usage = await tracker.fetchProviderUsage(providerId, forceRefresh);
if (!usage) {
res.status(200).json({
success: true,
data: {
providerId,
providerName: providerId,
available: false,
lastUpdated: new Date().toISOString(),
error: 'Failed to fetch usage data',
},
});
return;
}
res.json({
success: true,
data: usage,
});
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
logger.error(`Error fetching usage for ${req.params.providerId}:`, error);
// Return 200 with error in data to avoid triggering logout
res.status(200).json({
success: false,
error: message,
});
}
});
return router;
}

View File

@@ -4,12 +4,58 @@
import type { Request, Response } from 'express';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
import { getBacklogPlanStatus, getRunningDetails } from '../../backlog-plan/common.js';
import { getAllRunningGenerations } from '../../app-spec/common.js';
import path from 'path';
import { getErrorMessage, logError } from '../common.js';
export function createIndexHandler(autoModeService: AutoModeService) {
return async (_req: Request, res: Response): Promise<void> => {
try {
const runningAgents = await autoModeService.getRunningAgents();
const runningAgents = [...(await autoModeService.getRunningAgents())];
const backlogPlanStatus = getBacklogPlanStatus();
const backlogPlanDetails = getRunningDetails();
if (backlogPlanStatus.isRunning && backlogPlanDetails) {
runningAgents.push({
featureId: `backlog-plan:${backlogPlanDetails.projectPath}`,
projectPath: backlogPlanDetails.projectPath,
projectName: path.basename(backlogPlanDetails.projectPath),
isAutoMode: false,
title: 'Backlog plan',
description: backlogPlanDetails.prompt,
});
}
// Add spec/feature generation tasks
const specGenerations = getAllRunningGenerations();
for (const generation of specGenerations) {
let title: string;
let description: string;
switch (generation.type) {
case 'feature_generation':
title = 'Generating features from spec';
description = 'Creating features from the project specification';
break;
case 'sync':
title = 'Syncing spec with code';
description = 'Updating spec from codebase and completed features';
break;
default:
title = 'Regenerating spec';
description = 'Analyzing project and generating specification';
}
runningAgents.push({
featureId: `spec-generation:${generation.projectPath}`,
projectPath: generation.projectPath,
projectName: path.basename(generation.projectPath),
isAutoMode: false,
title,
description,
});
}
res.json({
success: true,

Some files were not shown because too many files have changed in this diff Show More