Compare commits


653 Commits

Author SHA1 Message Date
Shirone
5b620011ad feat: add CodeRabbit integration for AI-powered code reviews
This commit introduces the CodeRabbit service and its associated routes, enabling users to trigger, manage, and check the status of code reviews through a new API. Key features include:

- New routes for triggering code reviews, checking status, and stopping reviews.
- Integration with the CodeRabbit CLI for authentication and status checks.
- UI components for displaying code review results and settings management.
- Unit tests for the new code review functionality to ensure reliability.

This enhancement aims to streamline the code review process and leverage AI capabilities for improved code quality.
2026-01-24 21:10:33 +01:00
Shirone
327aef89a2 Merge pull request #562 from AutoMaker-Org/feature/v0.12.0rc-1768688900786-5ea1
refactor: standardize PR state representation across the application
2026-01-18 10:45:59 +00:00
Shirone
44e665f1bf fix: address PR comments 2026-01-18 00:22:27 +01:00
Shirone
5b1e0105f4 refactor: standardize PR state representation across the application
Updated the PR state handling to use a consistent uppercase format ('OPEN', 'MERGED', 'CLOSED') throughout the codebase. This includes changes to the worktree metadata interface, PR creation logic, and related tests to ensure uniformity and prevent potential mismatches in state representation.

Additionally, modified the GitHub PR fetching logic to retrieve all PR states, allowing for better detection of state changes.

This refactor enhances clarity and consistency in how PR states are managed and displayed.
2026-01-17 23:58:19 +01:00
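A hedged illustration of the uppercase convention described in 5b1e0105f4 above; the type and helper names are illustrative, not the project's actual code.

```ts
// Canonical PR states used throughout the codebase, per the refactor above.
type PullRequestState = 'OPEN' | 'MERGED' | 'CLOSED';

// Normalize whatever casing an upstream API returns into the canonical uppercase form.
function normalizePrState(raw: string): PullRequestState {
  const upper = raw.toUpperCase();
  if (upper === 'OPEN' || upper === 'MERGED' || upper === 'CLOSED') {
    return upper;
  }
  throw new Error(`Unexpected PR state: ${raw}`);
}
```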
webdevcody
832d10e133 refactor: replace Loader2 with Spinner component across the application
This update standardizes the loading indicators by replacing all instances of Loader2 with the new Spinner component. The Spinner component provides a consistent look and feel for loading states throughout the UI, enhancing the user experience.

Changes include:
- Updated loading indicators in various components such as popovers, modals, and views.
- Ensured that the Spinner component is used with appropriate sizes for different contexts.

No functional changes were made; this is purely a visual and structural improvement.
2026-01-17 17:58:16 -05:00
DhanushSantosh
044c3d50d1 fix: mark dmg-license as optional dependency for cross-platform builds
dmg-license is a macOS-only package used for building DMG installers.
Moving it from devDependencies to optionalDependencies allows npm ci
to succeed on Linux and Windows without failing on platform checks.

macOS developers will still get the package when available.
Linux/Windows developers can now run npm ci without errors.

Fixes: npm ci failing on Linux with "EBADPLATFORM" error

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:28:14 +05:30
Shirone
a1de0a78a0 Merge pull request #545 from stefandevo/fix/sandbox-warning-persistence
fix: sandbox warning persistence and add env var option
2026-01-17 18:57:19 +00:00
Shirone
fef9639e01 Merge pull request #539 from stefandevo/fix/light-mode-agent-output
fix: respect theme in agent output modal and log viewer
2026-01-17 18:35:32 +00:00
Stefan de Vogelaere
aef479218d fix: use DEFAULT_FONT_VALUE for initial terminal font
The initial terminalState.fontFamily was set to a raw font string
that didn't match any option in TERMINAL_FONT_OPTIONS, causing the
dropdown to appear empty. Changed to use DEFAULT_FONT_VALUE sentinel.
2026-01-17 19:32:42 +01:00
Stefan de Vogelaere
ded5ecf4e9 refactor: reduce code duplication in font settings and sync logic
Address CodeRabbit review feedback:
- Create getEffectiveFont helper to deduplicate getEffectiveFontSans/Mono
- Extract getSettingsFieldValue and hasSettingsFieldChanged helpers
- Create reusable FontSelector component for font selection UI
- Refactor project-theme-section and appearance-section to use FontSelector
2026-01-17 19:30:00 +01:00
Stefan de Vogelaere
a01f299597 fix: resolve type errors after merging upstream v0.12.0rc
- Fix ThemeMode type casting in __root.tsx
- Use specRegeneration.create() instead of non-existent generateAppSpec
- Add missing keyboard shortcut entries for projectSettings and notifications
- Fix lucide-react type casts with intermediate unknown cast
- Remove unused pipelineConfig prop from ListRow component
- Align SettingsProject interface with Project type
- Fix defaultDeleteBranchWithWorktree property name
2026-01-17 19:20:49 +01:00
Stefan de Vogelaere
21c9e88a86 Merge remote-tracking branch 'upstream/v0.12.0rc' into fix/light-mode-agent-output 2026-01-17 19:10:49 +01:00
Shirone
af17f6e36f Merge pull request #535 from stefandevo/v0.12.0rc
feat: add font customization and 8 new themes
2026-01-17 18:06:04 +00:00
Stefan de Vogelaere
e69a2ad722 docs: add AUTOMAKER_SKIP_SANDBOX_WARNING env var documentation
Document the new environment variable in README.md and .env.example
2026-01-17 18:33:08 +01:00
DhanushSantosh
0480f6ccd6 fix: handle dynamic model IDs with slashes in the model name
isOpencodeModel was rejecting valid dynamic model IDs like
'openrouter/qwen/qwen3-14b:free' because it was splitting on all slashes
and expecting exactly 2 parts. This caused valid OpenCode models to be
treated as unknown, falling back to Claude.

Now correctly splits on the FIRST slash only, allowing model names
like 'qwen/qwen3-14b:free' to be recognized as valid.

Fixes: User selects openrouter/qwen/qwen3-14b:free → server falls back to Claude

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 21:13:47 +05:30
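A minimal sketch of the first-slash split described in 0480f6ccd6 above; the helper name and return shape are assumptions, not the actual isOpencodeModel implementation.

```ts
// Split a dynamic model ID on the FIRST slash only, so the model name may itself contain slashes.
function splitModelId(id: string): { provider: string; model: string } | null {
  const slash = id.indexOf('/');
  if (slash === -1) return null;
  return {
    provider: id.slice(0, slash), // e.g. 'openrouter'
    model: id.slice(slash + 1),   // e.g. 'qwen/qwen3-14b:free'
  };
}

// splitModelId('openrouter/qwen/qwen3-14b:free')
// -> { provider: 'openrouter', model: 'qwen/qwen3-14b:free' }
```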
DhanushSantosh
24042d20c2 fix: filter dynamic OpenCode models by enabled status in model selector
The phase model selector was showing ALL discovered dynamic models regardless
of whether they were enabled in settings. Now it filters dynamic models by
enabledDynamicModelIds, matching the behavior of Cursor models and making
the enable/disable setting meaningful.

Users can now:
- Disable models in settings they don't want to use
- See only enabled dynamic models in the model selector dropdown
- Have the "Select all" checkbox properly control which models appear

This ensures consistency: enabling/disabling models in settings affects
which models are available for feature execution.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 21:00:22 +05:30
DhanushSantosh
9c3b3a4104 fix: make dynamic models select-all checkbox respect search filters
The "Select all" checkbox for dynamic models was using the unfiltered models list,
causing the checkbox state to not reflect what users see when searching. Now it
correctly operates on the filtered models list so:

- Checkbox state matches the visible filtered models
- "Select all" only toggles models the user can see
- Indeterminate state shows if some filtered models are selected

This ensures the checkbox has a meaningful purpose when filtering/searching models.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 20:32:10 +05:30
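A small sketch of deriving the "Select all" checkbox state from the filtered list, as 9c3b3a4104 above describes; names are illustrative.

```ts
// Derive the "Select all" checkbox state from the models the user can currently see.
function selectAllState(filteredIds: string[], enabledIds: Set<string>) {
  const enabledVisible = filteredIds.filter((id) => enabledIds.has(id)).length;
  return {
    checked: filteredIds.length > 0 && enabledVisible === filteredIds.length,
    indeterminate: enabledVisible > 0 && enabledVisible < filteredIds.length,
  };
}
```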
Stefan de Vogelaere
17e2cdfc85 fix: sandbox warning persistence and add env var option
Fix race condition where sandbox warning appeared on every refresh
even after checking "Do not show again". The issue was that the
sandbox check effect ran before settings were hydrated from the
server, so skipSandboxWarning was always false (the default).

Changes:
- Add settingsLoaded to sandbox check dependencies to ensure the
  user's preference is loaded before checking
- Add AUTOMAKER_SKIP_SANDBOX_WARNING env var option to skip the
  warning entirely (useful for dev/CI environments)
2026-01-17 15:33:51 +01:00
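A minimal sketch of the gating described in 17e2cdfc85 above, assuming a React effect and illustrative prop names; the real hook and settings shape may differ.

```tsx
import { useEffect } from 'react';

interface SandboxCheckProps {
  settingsLoaded: boolean;      // true once settings have been hydrated from the server
  skipSandboxWarning: boolean;  // persisted "Do not show again" preference
  envSkip: boolean;             // derived elsewhere from AUTOMAKER_SKIP_SANDBOX_WARNING
  showWarning: () => void;
}

function useSandboxWarning({ settingsLoaded, skipSandboxWarning, envSkip, showWarning }: SandboxCheckProps) {
  useEffect(() => {
    if (!settingsLoaded) return; // don't evaluate the preference before hydration
    if (envSkip || skipSandboxWarning) return;
    showWarning();
  }, [settingsLoaded, skipSandboxWarning, envSkip, showWarning]);
}
```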
DhanushSantosh
466c34afd4 ci: improve release workflow artifact uploads
- Use explicit file patterns to exclude builder config/debug files (builder-*.yml, *.yaml)
- Include blockmap files for efficient delta updates in auto-update scenarios
- Ensure only production-ready artifacts are uploaded to GitHub releases

This prevents accidental inclusion of builder configuration files in the release assets.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 19:18:15 +05:30
Shirone
b9567f5904 Merge pull request #542 from stefandevo/fix/api-key-info-on-dev-restart
docs: add hint about AUTOMAKER_API_KEY env var to API key banner
2026-01-17 13:09:36 +00:00
Shirone
c2cf8ae892 Merge pull request #540 from stefandevo/fix/gh-not-in-git-folder
fix: stop repeated GitHub PR fetch warnings for non-GitHub repos
2026-01-17 13:08:28 +00:00
Stefan de Vogelaere
3aa3c10ea4 docs: add hint about AUTOMAKER_API_KEY env var to API key banner
When the dev server restarts, developers need to re-enter the API key
in the browser. While the key is persisted to ./data/.api-key, this
file may be missing in clean dev scenarios.

This adds a helpful tip to the API key banner informing developers
they can set AUTOMAKER_API_KEY environment variable for a persistent
API key during development, avoiding the need to re-enter it after
server restarts.
2026-01-17 13:53:34 +01:00
Stefan de Vogelaere
5cd4183a7b fix: use fresh timestamp when setting cache entry
Use Date.now() after checkGitHubRemote() completes instead of the
pre-captured timestamp to ensure accurate 5-minute TTL.
2026-01-17 12:36:33 +01:00
Stefan de Vogelaere
2d9e38ad99 fix: stop repeated GitHub PR fetch warnings for non-GitHub repos
When opening a git repository without a GitHub remote, the server logs
were spammed with warnings every 5 seconds during worktree polling:

  WARN [Worktree] Failed to fetch GitHub PRs: Command failed: gh pr list
  ... no git remotes found

This happened because fetchGitHubPRs() ran `gh pr list` without first
checking if the project has a GitHub remote configured.

Changes:
- Add per-project cache for GitHub remote status with 5-minute TTL
- Check cache before attempting to fetch PRs, skip silently if no remote
- Add forceRefreshGitHub parameter to clear cache on manual refresh
- Pass forceRefreshGitHub when user clicks the refresh worktrees button

This allows users to add a GitHub remote and immediately detect it by
clicking the refresh button, while preventing log spam during normal
polling for projects without GitHub remotes.
2026-01-17 12:32:42 +01:00
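A hedged sketch of the per-project cache with a 5-minute TTL described in 2d9e38ad99 above, including the fresh-timestamp detail from 5cd4183a7b; names and shapes are assumptions based on the commit messages.

```ts
// Per-project cache of "does this repo have a GitHub remote?" with a 5-minute TTL.
const TTL_MS = 5 * 60 * 1000;

interface RemoteCacheEntry {
  hasRemote: boolean;
  checkedAt: number;
}

const remoteCache = new Map<string, RemoteCacheEntry>();

async function hasGitHubRemote(
  projectPath: string,
  checkGitHubRemote: (path: string) => Promise<boolean>,
  forceRefresh = false,
): Promise<boolean> {
  const cached = remoteCache.get(projectPath);
  if (!forceRefresh && cached && Date.now() - cached.checkedAt < TTL_MS) {
    return cached.hasRemote;
  }
  const hasRemote = await checkGitHubRemote(projectPath);
  // Take the timestamp *after* the check completes so the TTL is accurate (see 5cd4183a7b above).
  remoteCache.set(projectPath, { hasRemote, checkedAt: Date.now() });
  return hasRemote;
}
```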
Shirone
93d73f6d26 Merge pull request #529 from AutoMaker-Org/feature/v0.12.0rc-1768603410796-o2fn
fix: UUID generation fail in docker env
2026-01-17 11:19:01 +00:00
Stefan de Vogelaere
5209395a74 fix: respect theme in agent output modal and log viewer
The Agent Output modal and LogViewer component had hardcoded dark zinc
colors that didn't adapt to light mode themes. Replaced all hardcoded
colors with semantic Tailwind classes (bg-popover, text-foreground,
text-muted-foreground, bg-muted, border-border) that automatically
respect the active theme.
2026-01-17 11:44:33 +01:00
DhanushSantosh
ef6b9ac2d2 fix: add --force flag to npm ci for platform-specific dependencies
npm ci without --force rejects platform-specific packages like dmg-license
which is macOS-only. The --force flag tells npm to proceed even when
platform constraints are violated.

This allows Linux containers to skip dmg-license and continue with the
install, matching the behavior we want for Docker development.
2026-01-17 15:53:31 +05:30
DhanushSantosh
92afbeb6bd fix: run npm install as root to avoid permission issues
The named Docker volume for node_modules is created with root ownership,
causing EACCES errors when npm tries to write as the automaker user.

Solution:
- Run npm ci as root (installation phase)
- Use --legacy-peer-deps to properly handle optional dependencies
- Fix permissions after install
- Run server process as automaker user for security

This eliminates permission denied errors during npm install in dev containers.
2026-01-17 15:36:50 +05:30
DhanushSantosh
bbdc11ce47 fix: improve docker-compose npm install permissions and use npm ci
Fixes permission denied errors when installing dependencies in Docker containers:

Changes:
- Remove stale node_modules directories before installing (fresh start)
- Use 'npm ci --force' instead of 'npm install --force' for deterministic installs
- Add chmod to ensure writable permissions on node_modules
- Properly fix directory ownership and permissions before install

This prevents EACCES errors when multiple processes try to write to node_modules
and handles lingering permission issues from previous failed container runs.
2026-01-17 15:30:21 +05:30
DhanushSantosh
545bf2045d fix: add --force flag to npm install in docker-compose files
Allow npm to install platform-specific devDependencies (like dmg-license
which is macOS-only) by skipping platform checks in Linux Docker containers.
This matches the behavior already used in CI workflows.

Fixes Docker container startup failure:
- docker-compose.dev.yml (full stack development)
- docker-compose.dev-server.yml (server-only with local Electron)

The --force flag allows npm to proceed with installation even when some
optional/platform-specific dependencies can't be installed on the current
platform.
2026-01-17 15:00:48 +05:30
DhanushSantosh
a0471098fa fix: use specific data-testid selectors in project switcher assertions
Replace generic getByRole('button', { name: /.../ }) selectors with specific
getByTestId('project-switcher-project-') to avoid strict mode
violations where the selector resolves to multiple elements (project switcher
button and sidebar button).

Fixes failing E2E tests:
- feature-manual-review-flow.spec.ts
- new-project-creation.spec.ts
- open-existing-project.spec.ts
2026-01-17 14:53:06 +05:30
Stefan de Vogelaere
3320b40d15 feat: align terminal font settings with appearance fonts
- Terminal font dropdown now uses mono fonts from UI font options
- Unified font list between appearance section and terminal settings
- Terminal font persisted to GlobalSettings for import/export support
- Aligned global terminal settings popover with per-terminal popover:
  - Same settings in same order (Font Size, Run on New Terminal, Font Family, Scrollback, Line Height, Screen Reader)
  - Consistent styling (Radix Select instead of native select)
- Added terminal padding (12px vertical, 16px horizontal) for readability
2026-01-17 10:18:11 +01:00
DhanushSantosh
bac5e1c220 Merge upstream/v0.12.0rc into feature/fedora-rpm-support
Resolved conflict in backlog-plan/common.ts:
- Kept local (stricter) validation: Array.isArray(parsed?.result?.changes)
- This ensures type safety for the changes array
2026-01-17 14:44:37 +05:30
DhanushSantosh
33fa138d21 feat: add docker group support with sg docker command
Improve Docker access handling by detecting and using 'sg docker' command
when the user is in the docker group but hasn't logged out yet. This allows
running docker commands without requiring a full session restart after
`usermod -aG docker $USER`.

Changes:
- Detect docker group access and fall back to sg docker -c when needed
- Export DOCKER_CMD variable for use throughout the script
- Update all docker compose and docker ps commands to use DOCKER_CMD
- Improve error messages to guide users on fixing docker access issues
2026-01-17 14:40:08 +05:30
DhanushSantosh
bc09a22e1f fix: extract app version from apps/ui/package.json instead of monorepo root
The start-automaker.sh script now correctly sources the app version (0.12.0)
from apps/ui/package.json instead of the monorepo version (1.0.0) from the
root package.json. This ensures the launcher displays the correct Automaker
application version.
2026-01-17 14:19:12 +05:30
Stefan de Vogelaere
b771b51842 fix: address code review feedback
- Fix git+ssh URL to git+https for @electron/node-gyp (build compatibility)
- Remove duplicate @fontsource packages from root package.json
- Refactor font state initialization to reduce code duplication
2026-01-17 09:15:35 +01:00
Stefan de Vogelaere
1a7bf27ead feat: add new themes, Zed fonts, and sort theme/font lists
New themes added:
- Dark: Ayu Dark, Ayu Mirage, Ember, Matcha
- Light: Ayu Light, One Light, Bluloco, Feather

Other changes:
- Bundle Zed Sans and Zed Mono fonts from zed-industries/zed-fonts
- Sort font options alphabetically (default first)
- Sort theme options alphabetically (Dark/Light first)
- Improve Ayu Dark text contrast for better readability
- Fix Matcha theme to have green undertone instead of blue
2026-01-17 09:15:35 +01:00
Stefan de Vogelaere
f3b00d0f78 feat: add global font settings with per-project override
- Add fontFamilySans and fontFamilyMono to GlobalSettings type
- Add global font state and actions to app store
- Update getEffectiveFontSans/Mono to fall back to global settings
- Add font selectors to global Settings → Appearance
- Add "Use Global Font" checkboxes in Project Settings → Theme
- Add fonts to settings sync and migration
- Include fonts in import/export JSON
2026-01-17 09:15:34 +01:00
Stefan de Vogelaere
c747baaee2 fix: use sentinel value for default font selection
Radix UI Select doesn't allow empty string values, so use 'default'
as a sentinel value instead.
2026-01-17 09:15:34 +01:00
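A hedged sketch combining the sentinel value and the project → global → built-in fallback described in the two commits above; the names only approximate the real helpers.

```ts
// Radix UI Select rejects empty string values, so 'default' acts as the sentinel.
const DEFAULT_FONT_VALUE = 'default';

function getEffectiveFont(
  projectFont: string | undefined,
  globalFont: string | undefined,
  builtinDefault: string,
): string {
  // Project override wins, then the global setting, then the built-in default.
  if (projectFont && projectFont !== DEFAULT_FONT_VALUE) return projectFont;
  if (globalFont && globalFont !== DEFAULT_FONT_VALUE) return globalFont;
  return builtinDefault;
}
```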
Stefan de Vogelaere
1322722db2 feat: add per-project font override settings
Add font selectors that allow per-project font customization for both
sans and mono fonts, independent of theme selection. Uses system fonts.

- Add fontFamilySans and fontFamilyMono to ProjectSettings and Project types
- Create ui-font-options.ts config with system font options
- Add store actions: setProjectFontSans, setProjectFontMono, getEffectiveFontSans, getEffectiveFontMono
- Apply font CSS variables in root component
- Add font selector UI in project-theme-section (Project Settings → Theme)
2026-01-17 09:15:34 +01:00
webdevcody
aa35eb3d3a feat: implement spec synchronization feature for improved project management
- Added a new `/sync` endpoint to synchronize the project specification with the current codebase and feature state.
- Introduced `syncSpec` function to handle the synchronization logic, updating technology stack, implemented features, and roadmap phases.
- Enhanced the running state management to track synchronization tasks alongside existing generation tasks.
- Updated UI components to support synchronization actions, including loading indicators and status updates.
- Improved logging and error handling for better visibility during sync operations.

These changes enhance project management capabilities by ensuring that the specification remains up-to-date with the latest code and feature developments.
2026-01-17 01:45:45 -05:00
webdevcody
616e2ef75f feat: add HOSTNAME and VITE_HOSTNAME support for improved server URL configuration
- Introduced `HOSTNAME` environment variable for user-facing URLs, defaulting to localhost.
- Updated server and client code to utilize `HOSTNAME` for constructing URLs instead of hardcoded localhost.
- Enhanced documentation in CLAUDE.md to reflect new configuration options.
- Added `VITE_HOSTNAME` for frontend API URLs, ensuring consistent hostname usage across the application.

These changes improve flexibility in server configuration and enhance the user experience by providing accurate URLs.
2026-01-16 22:40:36 -05:00
webdevcody
d98cae124f feat: enhance sidebar functionality for mobile and compact views
- Introduced a floating toggle button for mobile to show/hide the sidebar when collapsed.
- Updated sidebar behavior to completely hide on mobile when the new mobileSidebarHidden state is true.
- Added logic to conditionally render sidebar components based on screen size using the new useIsCompact hook.
- Enhanced SidebarHeader to include close and expand buttons for mobile views.
- Refactored CollapseToggleButton to hide in compact mode.
- Implemented HeaderActionsPanel for mobile actions in various views, improving accessibility and usability on smaller screens.

These changes improve the user experience on mobile devices by providing better navigation options and visibility controls.
2026-01-16 22:27:19 -05:00
Web Dev Cody
26aaef002d Merge pull request #537 from AutoMaker-Org/claude/issue-536-20260117-0132
feat: add configurable host binding for server and Vite dev server
2026-01-16 21:22:34 -05:00
claude[bot]
09bb59d090 feat: add configurable host binding for server and Vite dev server
- Add HOST environment variable (default: 0.0.0.0) to allow binding to specific network interfaces
- Update server to listen on configurable host instead of hardcoded localhost
- Update Vite dev server to respect HOST environment variable
- Enhanced server startup banner to display listening address
- Updated .env.example and CLAUDE.md documentation

Fixes #536

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-17 01:34:06 +00:00
Shirone
2f38ffe2d5 Merge pull request #532 from AutoMaker-Org/feature/v0.12.0rc-1768605251997-8ufb
fix: feature.json corruption/loss on crash
2026-01-17 00:00:18 +00:00
Shirone
12fa9d858d Merge pull request #533 from AutoMaker-Org/feature/v0.12.0rc-1768605477061-fhv5
fix: Codex freezes
2026-01-16 23:59:16 +00:00
Shirone
c4e1a58e0d refactor: update timeout constants in CLI and Codex providers
- Removed redundant definition of CLI base timeout in `cli-provider.ts` and added a detailed comment explaining its purpose.
- Updated `codex-provider.ts` to use the imported `DEFAULT_TIMEOUT_MS` directly instead of an alias.
- Enhanced unit tests to ensure fallback behavior for invalid reasoning effort values in timeout calculations.
2026-01-17 00:52:57 +01:00
Shirone
8661f33c6d feat: implement atomic file writing and recovery utilities
- Introduced atomic write functionality for JSON files to ensure data integrity during writes.
- Added recovery mechanisms to read JSON files with fallback options for corrupted or missing files.
- Enhanced existing services to utilize atomic write and recovery features for improved reliability.
- Updated tests to cover new atomic writing and recovery scenarios, ensuring robust error handling and data consistency.
2026-01-17 00:50:51 +01:00
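A minimal sketch of the write-then-rename pattern behind 8661f33c6d above; the project's actual utilities likely add fsync, retries, and typed recovery options.

```ts
import { promises as fs } from 'node:fs';
import path from 'node:path';

async function writeJsonAtomic(filePath: string, data: unknown): Promise<void> {
  const tmpPath = path.join(
    path.dirname(filePath),
    `.${path.basename(filePath)}.${process.pid}.tmp`,
  );
  await fs.writeFile(tmpPath, JSON.stringify(data, null, 2), 'utf-8');
  // rename() is atomic on the same filesystem, so readers never observe a half-written file.
  await fs.rename(tmpPath, filePath);
}

async function readJsonWithFallback<T>(filePath: string, fallback: T): Promise<T> {
  try {
    return JSON.parse(await fs.readFile(filePath, 'utf-8')) as T;
  } catch {
    // Missing or corrupted file: return the fallback instead of crashing.
    return fallback;
  }
}
```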
Shirone
5c24ca2220 feat: implement dynamic timeout calculation for reasoning efforts in CLI and Codex providers
- Added `calculateReasoningTimeout` function to dynamically adjust timeouts based on reasoning effort levels.
- Updated CLI and Codex providers to utilize the new timeout calculation, addressing potential timeouts for high reasoning efforts.
- Enhanced unit tests to validate timeout behavior for various reasoning efforts, ensuring correct timeout values are applied.
2026-01-17 00:50:06 +01:00
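An illustrative sketch of the dynamic timeout calculation described in 5c24ca2220 above; the base timeout and multipliers are assumptions, not the project's values.

```ts
type ReasoningEffort = 'low' | 'medium' | 'high';

const DEFAULT_TIMEOUT_MS = 10 * 60 * 1000;

const EFFORT_MULTIPLIER: Record<ReasoningEffort, number> = {
  low: 1,
  medium: 2,
  high: 4,
};

function calculateReasoningTimeout(effort: string, baseMs = DEFAULT_TIMEOUT_MS): number {
  const multiplier: number | undefined = EFFORT_MULTIPLIER[effort as ReasoningEffort];
  // Unknown or invalid effort values fall back to the base timeout, matching the tested behavior.
  return baseMs * (multiplier ?? 1);
}
```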
webdevcody
14559354dd refactor: update sidebar navigation sections for clarity
- Added Notifications and Project Settings as standalone sections in the sidebar without labels for visual separation.
- Removed the previous 'Other' label to enhance the organization of navigation items.
2026-01-16 18:49:35 -05:00
webdevcody
3bf9dbd43a Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-16 18:39:31 -05:00
webdevcody
bd3999416b feat: implement notifications and event history features
- Added Notification Service to manage project-level notifications, including creation, listing, marking as read, and dismissing notifications.
- Introduced Event History Service to store and manage historical events, allowing for listing, retrieval, deletion, and replaying of events.
- Integrated notifications into the server and UI, providing real-time updates for feature statuses and operations.
- Enhanced sidebar and project switcher components to display unread notifications count.
- Created dedicated views for managing notifications and event history, improving user experience and accessibility.

These changes enhance the application's ability to inform users about important events and statuses, improving overall usability and responsiveness.
2026-01-16 18:37:11 -05:00
Shirone
cc9f7d48c8 fix: enhance authentication error handling in Claude usage service tests
- Updated test to send a specific authentication error pattern to the data callback.
- Triggered the exit handler to validate the handling of authentication errors.
- Improved error message expectations for better clarity during test failures.
2026-01-16 23:58:48 +01:00
Shirone
6bb0461be7 Merge pull request #527 from AutoMaker-Org/feature/v0.12.0rc-1768598412391-lnp7
feat: implement XML extraction utilities and enhance feature handling
2026-01-16 22:52:55 +00:00
Shirone
16ef026b38 refactor: Centralize UUID generation with fallback support 2026-01-16 23:49:36 +01:00
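A hedged sketch of a centralized ID helper with a fallback, per the commit title above and the Docker-environment failure fixed in PR #529; the real implementation may differ.

```ts
import { randomUUID, randomBytes } from 'node:crypto';

export function generateId(): string {
  try {
    return randomUUID();
  } catch {
    // Fallback: build an RFC 4122 v4 UUID from random bytes when randomUUID is unavailable.
    const b = randomBytes(16);
    b[6] = (b[6] & 0x0f) | 0x40; // version 4
    b[8] = (b[8] & 0x3f) | 0x80; // variant 10xx
    const hex = b.toString('hex');
    return [hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16), hex.slice(16, 20), hex.slice(20)].join('-');
  }
}
```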
Shirone
50ed405c4a fix: address PR comments 2026-01-16 23:41:23 +01:00
Web Dev Cody
5407e1a9ff Merge pull request #525 from stefandevo/feature/project-settings
feat: Separate Project Settings from Global Settings
2026-01-16 17:31:19 -05:00
Stefan de Vogelaere
5436b18f70 refactor: move Project Settings below Tools section in sidebar
- Remove Project Settings from Project section
- Add Project Settings as standalone section below Tools/GitHub
- Use empty label for visual separation without header
- Add horizontal separator line above sections without labels
- Rename to "Project Settings" for clarity
- Keep "Global Settings" at bottom of sidebar
2026-01-16 23:27:53 +01:00
Stefan de Vogelaere
8b7700364d refactor: move project settings to Project section, rename global settings
- Move "Settings" from Tools section to Project section in sidebar
- Rename bottom settings link from "Settings" to "Global Settings"
- Update keyboard shortcut description accordingly
2026-01-16 23:17:50 +01:00
Shirone
3bdf3cbb5c fix: improve branch name generation logic in BoardView and useBoardActions
- Updated the logic for auto-generating branch names to consistently use the primary branch (main/master) and avoid nested feature paths.
- Removed references to currentWorktreeBranch in favor of getPrimaryWorktreeBranch for better clarity and maintainability.
- Enhanced comments to clarify the purpose of branch name generation.
2026-01-16 23:14:22 +01:00
webdevcody
45d9c9a5d8 fix: adjust menu dimensions and formatting in start-automaker.sh
- Increased MENU_BOX_WIDTH and MENU_INNER_WIDTH for better layout.
- Updated printf statements in show_menu() for consistent spacing and alignment of menu options.
- Enhanced exit option formatting for improved readability.
2026-01-16 17:10:20 -05:00
Stefan de Vogelaere
6a23e6ce78 fix: address PR review feedback
- Fix race conditions when rapidly switching projects
  - Added cancellation logic to prevent stale responses from updating state
  - Both project settings and init script loading now properly cancelled on unmount

- Improve error handling in custom icon upload
  - Added toast notifications for validation errors (file type, file size)
  - Added toast notifications for upload success/failure
  - Handle network errors gracefully with user feedback
  - Handle file reader errors
2026-01-16 23:03:21 +01:00
Stefan de Vogelaere
4e53215104 chore: reset package-lock.json to match base branch 2026-01-16 23:03:21 +01:00
Stefan de Vogelaere
2899b6d416 feat: separate project settings from global settings
This PR introduces a new dedicated Project Settings screen accessible from
the sidebar, clearly separating project-specific settings from global
application settings.

- Added new route `/project-settings` with dedicated view
- Sidebar navigation item "Settings" in Tools section (Shift+S shortcut)
- Sidebar-based navigation matching global Settings pattern
- Sections: Identity, Worktrees, Theme, Danger Zone

**Moved to Project Settings:**
- Project name and icon customization
- Project-specific theme override
- Worktree isolation enable/disable (per-project override)
- Init script indicator visibility and auto-dismiss
- Delete branch by default preference
- Initialization script editor
- Delete project (Danger Zone)

**Remains in Global Settings:**
- Global theme (default for all projects)
- Global worktree isolation (default for new projects)
- Feature Defaults, Model Defaults
- API Keys, AI Providers, MCP Servers
- Terminal, Keyboard Shortcuts, Audio
- Account, Security, Developer settings

Both Theme and Worktree Isolation now follow a consistent override pattern:
1. Global Settings defines the default value
2. New projects inherit the global value
3. Project Settings can override for that specific project
4. Changing global setting doesn't affect projects with overrides

- Fixed: Changing global theme was incorrectly overwriting project themes
- Fixed: Project worktree setting not persisting across sessions
- Project settings now properly load from server on component mount

- Shell syntax editor: improved background contrast (bg-background)
- Shell syntax editor: removed distracting active line highlight
- Project Settings header matches Context/Memory views pattern

- `apps/ui/src/routes/project-settings.tsx`
- `apps/ui/src/components/views/project-settings-view/` (9 files)

- Global settings simplified (removed project-specific options)
- Sidebar navigation updated with project settings link
- App store: added project-specific useWorktrees state/actions
- Types: added projectSettings keyboard shortcut
- HTTP client: added missing project settings response fields
2026-01-16 23:03:21 +01:00
Kacper
b263cc615e feat: implement XML extraction utilities and enhance feature handling
- Introduced a new xml-extractor module with functions for XML parsing, including escaping/unescaping XML characters, extracting sections and elements, and managing implemented features.
- Added functionality to add, remove, update, and check for implemented features in the app_spec.txt file.
- Enhanced the create and update feature handlers to check for duplicate titles and trigger synchronization with app_spec.txt on status changes.
- Updated tests to cover new XML extraction utilities and feature handling logic, ensuring robust functionality and reliability.
2026-01-16 22:55:10 +01:00
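Simple escape/unescape helpers in the spirit of the xml-extractor module described in b263cc615e above; the real module also extracts sections/elements and manages implemented features in app_spec.txt.

```ts
function escapeXml(str: string | null | undefined): string {
  if (str == null) return ''; // treat null/undefined as empty, as the escapeXml fix further down this list describes
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}

function unescapeXml(str: string): string {
  return str
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&quot;/g, '"')
    .replace(/&apos;/g, "'")
    .replace(/&amp;/g, '&');
}
```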
webdevcody
97b0028919 chore: update package versions to 0.12.0 and 0.12.0rc
- Updated the version in package.json for the main project to 0.12.0rc.
- Updated the version in apps/server/package.json and apps/ui/package.json to 0.12.0.
- Adjusted the version extraction logic in start-automaker.sh to reference the correct package.json path.
2026-01-16 16:48:43 -05:00
webdevcody
fd1727a443 Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-16 16:11:56 -05:00
webdevcody
597cb9bfae refactor: remove dev.mjs and integrate start-automaker.sh for development mode
- Deleted the dev.mjs script, consolidating development mode functionality into start-automaker.sh.
- Updated package.json to use start-automaker.sh for the "dev" script and added a "start" script for production mode.
- Enhanced start-automaker.sh with production build capabilities and improved argument parsing for better user experience.
- Removed launcher-utils.mjs as its functionality has been integrated into start-automaker.sh.
2026-01-16 16:11:53 -05:00
Kacper
c2430e5bd3 feat: enhance PTY handling for Windows in ClaudeUsageService and TerminalService
- Added detection for Electron environment to improve compatibility with Windows PTY processes.
- Implemented winpty fallback for ConPTY failures, ensuring robust terminal session creation in Electron and other contexts.
- Updated error handling to provide clearer messages for authentication and terminal access issues.
- Refined usage data detection logic to avoid false positives, improving the accuracy of usage reporting.

These changes aim to enhance the reliability and user experience of terminal interactions on Windows, particularly in Electron applications.
2026-01-16 21:53:53 +01:00
Shirone
68df8efd10 Merge pull request #522 from AutoMaker-Org/feature/v0.12.0rc-1768590871767-bl1c
feat: add filters to github issues view
2026-01-16 20:08:05 +00:00
Kacper
c0d64bc994 fix: address PR comments 2026-01-16 21:05:58 +01:00
Kacper
6237f1a0fe feat: add filtering capabilities to GitHub issues view
- Implemented a comprehensive filtering system for GitHub issues, allowing users to filter by state, labels, assignees, and validation status.
- Introduced a new IssuesFilterControls component for managing filter options.
- Updated the GitHubIssuesView to utilize the new filtering logic, enhancing the user experience by providing clearer visibility into matching issues.
- Added hooks for filtering logic and state management, ensuring efficient updates and rendering of filtered issues.

These changes aim to improve the usability of the issues view by enabling users to easily navigate and manage their issues based on specific criteria.
2026-01-16 20:56:23 +01:00
Web Dev Cody
30c50d9b78 Merge pull request #513 from JZilla808/feature/tui-launcher
feat: add TUI launcher script for easy app startup
2026-01-16 14:51:36 -05:00
Web Dev Cody
03516ac09e Merge pull request #519 from WikiRik/WikiRik/audit-fix
chore: run npm audit fix
2026-01-16 14:43:27 -05:00
Shirone
5e5a136f1f Merge pull request #521 from AutoMaker-Org/feature/v0.12.0rc-1768591325146-pye6
fix: Signals not supported on Windows.
2026-01-16 19:39:00 +00:00
Kacper
98c50d44a4 test: mock Unix platform for SIGTERM behavior in ClaudeUsageService tests
Added a mock for the Unix platform in the SIGTERM test case to ensure proper behavior during testing on non-Windows systems. This change enhances the reliability of the tests by simulating the expected environment for process termination.
2026-01-16 20:38:29 +01:00
Kacper
0e9369816f fix: unify PTY process termination handling across platforms
Refactored the process termination logic in both ClaudeUsageService and TerminalService to use a centralized method for killing PTY processes. This ensures consistent handling of process termination across Windows and Unix-like systems, improving reliability and maintainability of the code.
2026-01-16 20:34:12 +01:00
Kacper
be63a59e9c fix: improve process termination handling for Windows
Updated the process termination logic in ClaudeUsageService to handle Windows environments correctly. The code now checks the operating system and calls the appropriate kill method, ensuring consistent behavior across platforms.
2026-01-16 20:27:53 +01:00
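A hedged sketch of a centralized kill helper like the one the two refactors above describe; the real services handle more edge cases (ConPTY/winpty fallbacks, exit listeners).

```ts
import type { IPty } from 'node-pty';

function killPty(pty: IPty): void {
  if (process.platform === 'win32') {
    // Signals are not supported on Windows; a plain kill() terminates the PTY process.
    pty.kill();
  } else {
    pty.kill('SIGTERM');
  }
}
```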
Kacper
dbb84aba23 fix: ensure proper type handling for JSON parsing in loadBacklogPlan function
Updated the JSON parsing in the loadBacklogPlan function to explicitly cast the raw input as a string, improving type safety and preventing potential runtime errors when handling backlog plan data.
2026-01-16 20:09:01 +01:00
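A hedged sketch of the parsing concern above: the project's secureFs.readFile is typed as string | Buffer, hence the explicit cast, and the Array.isArray check mirrors the stricter validation kept in the merge further down this list. Names and shapes here are assumptions.

```ts
import { promises as fs } from 'node:fs';

async function loadBacklogPlan(planPath: string): Promise<unknown[] | null> {
  const raw: string | Buffer = await fs.readFile(planPath, 'utf-8');
  const parsed = JSON.parse(raw as string) as { result?: { changes?: unknown } };
  const changes = parsed?.result?.changes;
  // Ensure changes is actually an array, not just truthy, before handing it downstream.
  if (!Array.isArray(changes)) return null;
  return changes;
}
```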
Shirone
9819d2e91c Merge pull request #514 from Seonfx/fix/510-spec-generation-json-fallback
fix: add JSON fallback for spec generation with custom API endpoints
2026-01-16 19:01:47 +00:00
Kacper
4c24ba5a8b feat: enhance TUI launcher with Docker/Electron process detection
- Add 4 launch options matching dev.mjs (Web, Electron, Docker Dev, Electron+Docker)
- Add arrow key navigation in menu with visual selection indicator
- Add cross-platform port conflict detection and resolution (Windows/Unix)
- Add Docker container detection with Stop/Restart/Attach/Cancel options
- Add Electron process detection when switching between modes
- Add centered, styled output for Docker build progress
- Add HUSKY=0 to docker-compose files to prevent permission errors
- Fix Windows/Git Bash compatibility (platform detection, netstat/taskkill)
- Fix bash arithmetic issue with set -e causing script to hang

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 19:58:32 +01:00
Rik Smale
e67cab1e07 chore: fix lockfile 2026-01-16 19:23:18 +01:00
Rik Smale
132b8f7529 chore: run npm audit fix 2026-01-16 19:18:16 +01:00
Seonfx
d651e9d8d6 fix: address PR review feedback for JSON fallback
- Simplify escapeXml() using 'str == null' check (type narrowing)
- Add validation for extracted JSON before passing to specToXml()
- Prevents runtime errors when JSON doesn't match SpecOutput schema

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-16 13:43:56 -04:00
webdevcody
92f14508aa chore: update environment variable documentation for Anthropic API key
- Changed comments in docker-compose files to clarify that the ANTHROPIC_API_KEY is optional.
- Updated README to reflect changes in authentication setup, emphasizing integration with Claude Code CLI and removing outdated API key instructions.
- Improved clarity on authentication methods and streamlined the setup process for users.
2026-01-16 11:23:45 -05:00
DhanushSantosh
842b059fac fix: remove invalid local keyword in main script body
The 'local' keyword can only be used inside functions. Line 423 had
'local timeout_count=0' in the main script body which caused a bash error.
Removed the unused variable declaration.

Fixes: bash error 'local: can only be used in a function'

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 20:44:17 +05:30
DhanushSantosh
49f9ecc168 feat: enhance TUI launcher with production-ready features and documentation
Major improvements to start-automaker.sh launcher script:

**Architecture & Code Quality:**
- Organized into logical sections with clear separators (8 sections)
- Extracted all magic numbers into named constants at top
- Added comprehensive comments throughout

**Functionality:**
- Dynamic version extraction from package.json (no manual updates)
- Pre-flight checks: validates Node.js, npm, tput installed
- Platform detection: warns on Windows/unsupported systems
- Terminal size validation: checks min 70x20, displays warning if too small
- Input timeout: 30-second auto-timeout for hands-free operation
- History tracking: remembers last selected mode in ~/.automaker_launcher_history

**User Experience:**
- Added --help flag with comprehensive usage documentation
- Added --version flag showing version, Node.js, Bash info
- Added --check-deps flag to verify project dependencies
- Added --no-colors flag for terminals without color support
- Added --no-history flag to disable history tracking
- Enhanced cleanup function: restores cursor + echo, better signal handling
- Better error messages with actionable remediation steps
- Improved exit experience: "Goodbye! See you soon." message

**Robustness:**
- Real initialization checks (validates node_modules, build artifacts)
- Spinner uses frame counting instead of infinite loop (max 1.6s)
- Proper signal trap handling (EXIT, INT, TERM)
- Error recovery: respects --no-colors in pre-flight checks

**File Management:**
- Renamed from "start automaker.sh" to "start-automaker.sh" for consistency
- Made script more portable with SCRIPT_DIR detection

**Documentation:**
- Added section to README.md: "Interactive TUI Launcher"
- Documented all launch modes and options with examples
- Added feature list, history file location, usage tips
- Updated table of contents with TUI launcher section

Fixes: #511 (CI test failures resolved)
Improvements: Better UX for new users, production-ready error handling

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 20:27:53 +05:30
DhanushSantosh
e02fd889c2 fix: add --force flag to npm install in format-check workflow
Ensures dmg-license can be installed on Linux CI runners even though it's
a darwin-only package. The --force flag allows npm to skip platform mismatches.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:51:23 +05:30
DhanushSantosh
52a821d3bb fix: add --force flag to npm install in CI to allow platform-specific devDependencies
dmg-license is a darwin-only package required for macOS DMG building. The CI runs on
Linux, so npm install fails when trying to install a platform-specific devDependency.

Using --force allows npm to skip platform mismatches instead of erroring out, allowing
the build to proceed on non-darwin platforms where the darwin-only dependency will simply
be skipped.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:43:09 +05:30
DhanushSantosh
becd79f1e3 fix: add missing dmg-license dependency to fix release builds
The release workflow was failing for all platforms because macOS DMG
builder requires dmg-license. This single dependency was preventing
AppImage, DEB, RPM, DMG, and EXE artifacts from being built and
uploaded to any release since v0.7.3.

Includes lockfile updates and conversion of git+ssh:// URLs to https://
to prevent SSH key requirement issues in CI/CD and across environments.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:38:10 +05:30
DhanushSantosh
883ad2a04b fix(backlog-plan): clear running details in generate-plan finally block
Ensure running details are cleared when generation completes or fails, preventing state leaks.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
bf93cdf0c4 fix(backlog-plan): clear running details when stopping generation
Add setRunningDetails(null) to stop handler to prevent state leaks when aborting operation.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
c0ea1c736a fix(backlog-plan): clear running details and handle plan cleanup safely
- Add setRunningDetails(null) in finally block of generate handler to prevent state leaks
- Move clearBacklogPlan before response in apply handler and wrap in try-catch to prevent errors after headers sent

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
8b448b9481 fix: address CodeRabbit security and validation issues in Fedora docs and backlog plan
Documentation improvements:
- Fix GitHub URL placeholder issues in install-fedora.md - GitHub /latest/download/ endpoint
  doesn't support version substitution, use explicit download URL pattern instead
- Improve security in network troubleshooting section:
  - Change ping target from claude.ai (marketing site) to api.anthropic.com (actual API)
  - Remove unsafe 'echo \$ANTHROPIC_API_KEY' command that exposes secrets in shell history
  - Use safe API key check with conditional output instead

Code improvements:
- apps/server/src/routes/backlog-plan/common.ts: Add Array.isArray() validation
  for stored plan shape before returning it. Ensures changes is actually an array,
  not just truthy, preventing downstream runtime errors.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
12f2b9f2b3 fix: remove invalid license property from RPM configuration
The 'license' property is not supported by electron-builder's RPM schema.
Valid RPM properties are: afterInstall, afterRemove, appArmorProfile,
artifactName, category, compression, depends, description, desktop,
executableArgs, fpm, icon, maintainer, mimeTypes, packageCategory,
packageName, publish, synopsis, vendor.

This fix allows electron-builder to proceed to the RPM build stage.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
017ff3ca0a fix: resolve TypeScript error in backlog plan loading
Fix type mismatch in loadBacklogPlan where secureFs.readFile with 'utf-8'
encoding returns union type string | Buffer, causing JSON.parse to fail type checking.
Cast raw to string to satisfy TypeScript strict mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
Seonfx
bcec178bbe fix: add JSON fallback for spec generation with custom API endpoints
Fixes spec generation failure when using custom API endpoints (e.g., GLM proxy)
that don't support structured output. The AI returns JSON instead of XML, but
the fallback parser only looked for XML tags.

Changes:
- escapeXml: Handle undefined/null values gracefully (converts to empty string)
- generate-spec: Add JSON extraction fallback when XML tags aren't found
  - Reuses existing extractJson() utility (already used for Cursor models)
  - Converts extracted JSON to XML using specToXml()

Closes #510

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-16 08:37:53 -04:00
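A sketch of the fallback flow described in bcec178bbe above. extractJson() and specToXml() exist in the project per the commit message, but their signatures and the way XML is detected here are assumptions.

```ts
function parseSpecOutput(
  raw: string,
  extractJson: (text: string) => unknown | null,
  specToXml: (spec: unknown) => string,
): string | null {
  // Normal path: the response already contains the expected XML.
  if (raw.trimStart().startsWith('<')) return raw;
  // Fallback: some custom endpoints (e.g. GLM proxies) return JSON; extract and convert it.
  const json = extractJson(raw);
  if (json == null) return null;
  return specToXml(json);
}
```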
Jay Zhou
e3347c7b9c feat: add TUI launcher script for easy app startup
Add a beautiful terminal user interface (TUI) script that provides an
interactive menu for launching Automaker in different modes:

- [1] Web Browser mode (localhost:3007)
- [2] Desktop App (Electron)
- [3] Desktop + Debug (Electron with DevTools)
- [Q] Exit

Features:
- ASCII art logo with gradient colors
- Centered, responsive layout that adapts to terminal size
- Animated spinner during launch sequence
- Cross-shell compatibility (bash/zsh)
- Clean exit handling with cursor restoration

This provides a more user-friendly alternative to remembering
npm commands, especially for new users getting started with
the project.
2026-01-16 03:34:47 -08:00
DhanushSantosh
6529446281 feat: add Fedora/RHEL RPM package support with comprehensive documentation
Add native RPM package building for Fedora-based distributions:
- Extend electron-builder configuration to include RPM target
- Add rpm-build installation to GitHub Actions CI/CD workflow
- Update artifact upload patterns to include .rpm files
- Declare proper RPM dependencies (gtk3, libnotify, nss, etc.)
- Use xz compression for optimal package size

Documentation:
- Update README.md with Fedora/RHEL installation instructions
- Create comprehensive docs/install-fedora.md guide covering:
  - Installation methods (dnf/yum, direct URL)
  - System requirements and capabilities
  - Configuration and troubleshooting
  - SELinux handling and firewall rules
  - Performance tips and security considerations
  - Building from source
- Support for Fedora 39+, RHEL 9+, Rocky Linux, AlmaLinux

End-to-end support enables Fedora users to install Automaker via:
  sudo dnf install ./Automaker-<version>-x86_64.rpm

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 12:31:30 +05:30
webdevcody
379551c40e feat: add JSON import/export functionality in settings view
- Introduced a new ImportExportDialog component for managing settings import and export via JSON.
- Integrated JsonSyntaxEditor for editing JSON settings with syntax highlighting.
- Updated SettingsView to include the import/export dialog and associated state management.
- Enhanced SettingsHeader with an import/export button for easy access.

These changes aim to improve user experience by allowing seamless transfer of settings between installations.
2026-01-16 00:34:59 -05:00
webdevcody
7465017600 feat: implement server logging and event hook features
- Introduced server log level configuration and HTTP request logging settings, allowing users to control the verbosity of server logs and enable or disable request logging at runtime.
- Added an Event Hook Service to execute custom actions based on system events, supporting shell commands and HTTP webhooks.
- Enhanced the UI with new sections for managing server logging preferences and event hooks, including a dialog for creating and editing hooks.
- Updated global settings to include server log level and request logging options, ensuring persistence across sessions.

These changes aim to improve debugging capabilities and provide users with customizable event-driven actions within the application.
2026-01-16 00:21:49 -05:00
webdevcody
874c5a36de feat: enable auto loading of ClaudeMd by default
- Updated the default setting for autoLoadClaudeMd from false to true in the global settings. This change aims to enhance user experience by automatically loading ClaudeMd, streamlining the workflow for users.
2026-01-15 23:54:08 -05:00
webdevcody
03436103d1 feat: implement backlog plan management and UI enhancements
- Added functionality to save, clear, and load backlog plans within the application.
- Introduced a new API endpoint for clearing saved backlog plans.
- Enhanced the backlog plan dialog to allow users to review and apply changes to their features.
- Integrated dependency management features in the UI, allowing users to select parent and child dependencies for features.
- Improved the graph view with options to manage plans and visualize dependencies effectively.
- Updated the sidebar and settings to include provider visibility toggles for better user control over model selection.

These changes aim to enhance the user experience by providing robust backlog management capabilities and improving the overall UI for feature planning.
2026-01-15 22:21:46 -05:00
Shirone
cb544e0011 Merge pull request #505 from AutoMaker-Org/feature/v0.12.0rc-1768509532254-tt6z
fix: "Remove Project" button not working on right click of the project
2026-01-15 22:05:24 +00:00
Shirone
df23c9e6ab Merge pull request #507 from AutoMaker-Org/feat/improve-ideation-view
feat: improve ideation view
2026-01-15 22:03:50 +00:00
Shirone
52cc82fb3f feat: enhance ideation dashboard and prompt components
- Added a helper function to map priority levels to badge variants in the IdeationDashboard.
- Improved UI elements in SuggestionCard for better spacing and visual hierarchy.
- Updated PromptCategoryGrid and PromptList components with enhanced hover effects and layout adjustments for a more responsive design.
- Refined button styles and interactions for better user experience across components.

These changes aim to improve the overall usability and aesthetics of the ideation view.
2026-01-15 23:01:12 +01:00
Shirone
d9571bfb8d Merge pull request #506 from AutoMaker-Org/feature/v0.12.0rc-1768509904121-pjft
feat: add discard all functionality to ideation view
2026-01-15 21:39:54 +00:00
Shirone
07d800b589 feat: add discard all functionality to ideation view
- Introduced a new button in the IdeationHeader for discarding all ideas when in dashboard mode.
- Implemented state management for discard readiness and count in IdeationView.
- Added confirmation dialog for discarding ideas in IdeationDashboard.
- Enhanced bulk action readiness checks to include discard operations.

This update improves user experience by allowing bulk discarding of ideas with confirmation, ensuring actions are intentional.
2026-01-15 22:37:26 +01:00
Shirone
ec042de69c fix: streamline context menu behavior for project removal dialog
- Ensure the context menu closes consistently after the confirmation dialog, regardless of user action.
- Reset confirmation state upon dialog closure to prevent unintended interactions.
2026-01-15 22:20:30 +01:00
Shirone
585ae32c32 fix: Prevent race condition in project removal dialog cleanup 2026-01-15 22:15:16 +01:00
Shirone
a89ba04109 fix: project removal not being executed
- Prevent context menu from closing when a confirmation dialog is open.
- Add success toast notification upon project removal.
- Refactor event handlers to account for dialog state, improving user experience.
2026-01-15 22:06:35 +01:00
Shirone
05a3b95d75 Merge pull request #501 from AutoMaker-Org/feature/v0.11.0rc-1768426435282-1ogl
feat: centralize prompts and add customization UI for App Spec, Context, Suggestions, Tasks
2026-01-15 20:20:56 +00:00
Shirone
0e269ca15d fix: update outdated server unit tests
- auto-mode-service-planning.test.ts: Add taskExecutionPrompts argument
  to buildFeaturePrompt calls, update test for implementation instructions
- claude-usage-service.test.ts: Skip deprecated Mac tests (service now
  uses PTY for all platforms), rename Windows tests to PTY tests, update
  to use process.cwd() instead of home directory
- claude-provider.test.ts: Add missing model parameter to environment
  variable passthrough tests

All tests now pass (1093 passed, 23 skipped).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:16:46 +01:00
Shirone
fd03cb4afa refactor: split prompt customization into multiple files
Split prompt-customization-section.tsx into focused modules:
- types.ts (51 lines) - Type definitions
- tab-configs.ts (448 lines) - Configuration data for all tabs
- components.tsx (159 lines) - Reusable Banner, PromptField, PromptFieldList
- prompt-customization-section.tsx (176 lines) - Main component

Benefits:
- Main component reduced from ~810 to 176 lines
- Clear separation of concerns
- Easier to find and modify specific parts
- Configuration data isolated for easy updates

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:07:38 +01:00
Shirone
d6c5c93fe5 refactor: use data-driven configuration for prompt customization UI
- Replace repetitive JSX with TAB_CONFIGS array defining all tabs and fields
- Create reusable Banner component for info/warning banners
- Create PromptFieldList component for rendering fields from config
- Support nested sections (like Auto Mode's Template Prompts section)
- Reduce file from ~950 lines to ~810 lines (-15% code)

Benefits:
- Adding new prompt tabs/fields is now declarative (just add to config)
- Consistent structure enforced by TypeScript interfaces
- Much easier to maintain and extend

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:03:51 +01:00
Shirone
1abf219230 refactor: create reusable PromptTabContent component and add {{count}} placeholder
- Create PromptTabContent reusable component in prompt-customization-section.tsx
- Update all tabs (Agent, Commit Message, Title Generation, Ideation, App Spec,
  Context Description, Suggestions, Task Execution) to use the new component
- Add {{count}} placeholder to DEFAULT_SUGGESTIONS_SYSTEM_PROMPT for dynamic
  suggestion count

Addresses PR review comments from Gemini.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:00:32 +01:00
Shirone
3a2ba6dbfe feat: connect Task Execution prompts to auto-mode-service
Update auto-mode-service.ts to use centralized Task Execution prompts
from settings, making all 9 task execution prompts customizable via UI:

- buildFeaturePrompt: uses implementationInstructions and
  playwrightVerificationInstructions from settings
- buildTaskPrompt: uses taskPromptTemplate with variable substitution
- buildPipelineStepPrompt: updated to pass prompts through
- executeFeatureWithContext: uses resumeFeatureTemplate
- resolvePlanApproval recovery: uses continuationAfterApprovalTemplate
- Multi-agent continuation: uses continuationAfterApprovalTemplate
- recordLearningsFromFeature: uses learningExtractionSystemPrompt
  and learningExtractionUserPromptTemplate

All 12 prompt categories are now fully customizable from the UI.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:54:26 +01:00
Shirone
8fa8ba0a16 fix: address PR comments and complete prompt centralization
- Fix inline type imports in defaults.ts (move to top-level imports)
- Update ideation-service.ts to use centralized prompts from settings
- Update generate-title.ts to use centralized prompts
- Update validate-issue.ts to use centralized prompts
- Clean up validation-schema.ts (prompts already centralized)
- Minor server index cleanup

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:31:19 +01:00
Shirone
285f526e0c feat: centralize prompts and add customization UI for App Spec, Context, Suggestions, Tasks
- Add 4 new prompt type interfaces (AppSpecPrompts, ContextDescriptionPrompts,
  SuggestionsPrompts, TaskExecutionPrompts) with resolved types
- Add default prompts for all new categories to @automaker/prompts/defaults.ts
- Add merge functions for new prompt categories in merge.ts
- Update settings-helpers.ts getPromptCustomization() to return all 12 categories
- Update server routes (generate-spec, generate-features-from-spec, describe-file,
  describe-image, generate-suggestions) to use centralized prompts
- Add 4 new tabs in prompt customization UI (App Spec, Context, Suggestions, Tasks)
- Fix Ideation tab layout using grid-cols-4 for even distribution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:13:14 +01:00
webdevcody
bd68b497ac Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-15 13:14:19 -05:00
webdevcody
06b047cfcb feat: implement bulk feature verification and enhance selection mode
- Added functionality for bulk verifying features in the BoardView, allowing users to mark multiple features as verified at once.
- Introduced a selection target mechanism to differentiate between 'backlog' and 'waiting_approval' features during selection mode.
- Updated the KanbanCard and SelectionActionBar components to support the new selection target logic, improving user experience for bulk actions.
- Enhanced the UI to provide appropriate actions based on the current selection target, including verification options for waiting approval features.
2026-01-15 13:14:15 -05:00
DhanushSantosh
c585cee12f feat: add dynamic usage status icon and tab-aware updates to usage button
- Add provider icon (Anthropic/OpenAI) that displays based on active tab
- Icon color reflects usage status (green/orange/red)
- Progress bar and stale indicator update dynamically when switching tabs
- Shows Claude metrics when Claude tab is active, Codex metrics when Codex tab is active

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-15 21:05:35 +05:30
DhanushSantosh
241fd0b252 Merge remote-tracking branch 'upstream/v0.12.0rc' into patchcraft 2026-01-15 19:38:37 +05:30
DhanushSantosh
164acc1b4e chore: update package-lock.json 2026-01-15 19:38:18 +05:30
webdevcody
78e5ddb4a8 chore: release v0.11.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 20:34:37 -05:00
Web Dev Cody
43904cdb02 Merge pull request #475 from AutoMaker-Org/v0.11.0rc
V0.11.0rc
2026-01-14 20:31:11 -05:00
Shirone
7ea1383e10 Merge pull request #492 from AutoMaker-Org/feature/v0.11.0rc-1768413895104-31pa
feat: merge worktree to main in dropdown menu
2026-01-14 22:10:02 +00:00
Shirone
425e38811f Merge pull request #493 from AutoMaker-Org/feature/v0.11.0rc-1768413909856-a0al
fix: agent output modal ui/ux and task list with spec/full plan mode
2026-01-14 21:49:47 +00:00
Shirone
f6bda66ed4 feat: enhance agent info panel with real-time task status updates and fresh planSpec integration
- Added support for real-time task status updates using WebSocket events, allowing the Kanban card to reflect current task progress accurately.
- Introduced a new state for fresh planSpec data fetched from the API to ensure the agent info panel displays up-to-date task information.
- Updated the effectiveTodos calculation to prioritize fresh planSpec data and incorporate real-time status, improving task display accuracy.
- Enhanced the logic to listen for relevant WebSocket events and update task statuses accordingly, ensuring synchronization with the agent output modal.
2026-01-14 22:39:47 +01:00
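A rough sketch of the effectiveTodos idea described above, assuming illustrative type and field names: prefer freshly fetched planSpec tasks, overlay real-time statuses received over WebSocket, and fall back to the todos already present on the agent info.

```ts
// Sketch only: all names here are assumptions, not the project's actual types.
interface Todo {
  id: string;
  content: string;
  status: 'pending' | 'in_progress' | 'completed';
}

function computeEffectiveTodos(
  agentTodos: Todo[],
  freshPlanSpecTodos: Todo[] | null,
  liveStatusById: Map<string, Todo['status']>
): Todo[] {
  // Prefer the freshly fetched planSpec tasks when available.
  const base =
    freshPlanSpecTodos && freshPlanSpecTodos.length > 0 ? freshPlanSpecTodos : agentTodos;
  // Overlay any real-time status updates received via WebSocket events.
  return base.map((todo) => ({
    ...todo,
    status: liveStatusById.get(todo.id) ?? todo.status,
  }));
}
```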
Web Dev Cody
0df7e4a33d Merge pull request #490 from thesobercoder/fix/openrouter-models-kanban
fix: load OpenCode models on Kanban
2026-01-14 16:12:45 -05:00
Web Dev Cody
41ad717b8e Merge pull request #494 from mcbodge/feature/add-copilot-support
Fix OpenCode GitHub Copilot authentication detection
2026-01-14 16:11:38 -05:00
Manuel Grillo
fec5f88d91 feat: add GitHub Copilot support for OAuth token validation 2026-01-14 21:44:33 +01:00
Shirone
724858d215 fix: adjust more 2026-01-14 21:12:48 +01:00
Shirone
2b93afbd43 Changes from feature/v0.11.0rc-1768413909856-a0al 2026-01-14 21:03:54 +01:00
Shirone
ca0f3ecedf fix: adjust task progress panel height and improve effective todos handling in agent info panel
- Reduced the maximum height of the task progress panel from 300px to 200px for better UI consistency.
- Introduced a new `effectiveTodos` calculation in the agent info panel to correctly display tasks from `planSpec` when available, ensuring accurate task counts and statuses.
- Updated references to use `effectiveTodos` instead of the original `agentInfo.todos` for task display logic in the agent info panel.
- Adjusted the height of various modal components to align with the new task progress panel height.
2026-01-14 21:02:24 +01:00
Shirone
ee0d0c6c59 fix: merge worktree handler now uses correct branch name and path
The merge handler previously hardcoded branch names as `feature/${featureId}`
and worktree paths as `.worktrees/${featureId}`, which failed for auto-generated
branches (e.g., `feature/v0.11.0rc-1768413895104-31pa`) and custom worktrees.

Changes:
- Server handler now accepts branchName and worktreePath directly from the UI
- Added branch existence validation before attempting merge
- Updated merge dialog with 2-step confirmation (type "merge" to confirm)
- Removed feature branch naming restriction - any branch can now be merged
- Updated API types and client to pass correct parameters

Closes #408

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 20:49:17 +01:00
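A loose sketch of the validation described above, assuming plain git invocations via execFile; the project's actual handler, helpers, and merge flags may differ.

```ts
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Verify the branch exists before attempting the merge (assumption: local branches).
async function branchExists(repoPath: string, branchName: string): Promise<boolean> {
  try {
    await execFileAsync(
      'git',
      ['rev-parse', '--verify', '--quiet', `refs/heads/${branchName}`],
      { cwd: repoPath }
    );
    return true;
  } catch {
    return false;
  }
}

// Accept branchName directly from the caller instead of deriving `feature/${featureId}`.
async function mergeWorktreeBranch(repoPath: string, branchName: string): Promise<void> {
  if (!(await branchExists(repoPath, branchName))) {
    throw new Error(`Branch "${branchName}" does not exist`);
  }
  await execFileAsync('git', ['merge', '--no-ff', branchName], { cwd: repoPath });
}
```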
Soham Dasgupta
ac38e85f3c Merge branch 'v0.11.0rc' into fix/openrouter-models-kanban 2026-01-15 00:01:00 +05:30
Shirone
ca3286a374 Merge pull request #489 from AutoMaker-Org/feature/v0.11.0rc-1768410827235-36uf
feat: enhance pr dialog base branch selection
2026-01-14 17:39:27 +00:00
Shirone
0898578c11 fix: Include remote branches in PR base selection even when local branch exists
The branch listing logic now correctly shows remote branches (e.g., "origin/main") even if a local branch with the same base name exists, since users need remote branches as PR base targets. Also extracted duplicate state reset logic in create-pr-dialog into a reusable function.
2026-01-14 18:36:14 +01:00
Shirone
07593f8704 feat: enhance list-branches endpoint to support fetching remote branches
- Updated the list-branches endpoint to accept an optional parameter for including remote branches.
- Implemented logic to fetch and deduplicate remote branches alongside local branches.
- Modified the CreatePRDialog component to utilize the updated API for branch selection, allowing users to select from both local and remote branches.
2026-01-14 18:25:31 +01:00
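A minimal sketch of the deduplication behavior described in the two commits above, assuming remote branches arrive prefixed with `origin/`; the real endpoint logic may differ.

```ts
// Keep remote refs (e.g. "origin/main") selectable as PR bases even when a local
// branch with the same base name exists; only drop exact duplicates.
function dedupeBranches(localBranches: string[], remoteBranches: string[]): string[] {
  const seen = new Set(localBranches);
  const result = [...localBranches];
  for (const remote of remoteBranches) {
    if (!seen.has(remote)) {
      seen.add(remote);
      result.push(remote);
    }
  }
  return result;
}

// Example: dedupeBranches(['main', 'feature/x'], ['origin/main', 'origin/feature/x'])
// returns ['main', 'feature/x', 'origin/main', 'origin/feature/x'].
```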
Shirone
3f8a8db7a5 Merge pull request #486 from AutoMaker-Org/fix/claude-usage-parsing
fix: Claude usage parsing for CLI v2.x and trust prompt handling
2026-01-14 16:56:26 +00:00
Shirone
13eead3855 fix: use process.cwd() consistently across all platforms
Address PR review comment - use process.cwd() for Windows too instead of
USERPROFILE/homedir fallback for consistency.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 17:53:13 +01:00
Shirone
cb910feae9 fix: Claude usage parsing for CLI v2.x and trust prompt handling
- Use node-pty on all platforms instead of expect on macOS for more reliable PTY handling
- Use process.cwd() as working directory (project dir is likely already trusted)
- Add detection for new trust prompt text variants ("Ready to code here", "permission to work")
- Add specific error handling for trust prompt pending state
- Show helpful UI message when trust prompt needs manual approval

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 17:48:44 +01:00
Shirone
c75f9a29cb Merge pull request #485 from AutoMaker-Org/feature/v0.11.0rc-1768406316676-kza4
feat: add feature request github template
2026-01-14 16:18:44 +00:00
Shirone
3c5e453b01 fix: address pr comments 2026-01-14 17:13:41 +01:00
Shirone
63e0ffac42 Merge pull request #484 from AutoMaker-Org/feature/v0.11.0rc-1768405788678-28bn
fix: address pr comments task not respecting worktrees
2026-01-14 16:10:15 +00:00
Shirone
d0155f28c8 feat: add feature request github template 2026-01-14 17:07:01 +01:00
Shirone
27ca08d98a fix: Set workMode to custom for PR and conflict flows 2026-01-14 17:02:52 +01:00
Shirone
df99950475 Merge pull request #481 from AutoMaker-Org/feature/v0.11.0rc-1768383713091-hnir
feat(ui): Add project theme selection to context menu
2026-01-14 15:28:18 +00:00
Web Dev Cody
6a85073d94 Merge pull request #339 from ramarivera/feat/custom-anthropic-endpoint
feat: support ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN for custom endpoints
2026-01-14 10:09:56 -05:00
Shirone
7b73ff34f1 fix: address pr comments 2026-01-14 15:43:42 +01:00
Shirone
8419b12f3f feat(ui): Add project theme selection to context menu with clean code refactoring
Implement per-project theme override capability in the Discord-like layout:
- Add theme submenu to project context menu with live preview
- Reuse existing theme constants and useThemePreview hook from sidebar
- Extract reusable ThemeButton and ThemeColumn components (DRY principle)
- Replace magic z-index values with named constants

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 15:33:51 +01:00
Soham Dasgupta
f1a5bcd17a fix: load OpenRouter models on kanban board without visiting settings
Problem:
- OpenRouter dynamic models only appeared after visiting settings page
- PhaseModelSelector (used in Add/Edit Feature dialogs) only fetched Codex models
- dynamicOpencodeModels remained empty until OpencodeSettingsTab mounted

Solution:
- Add fetchOpencodeModels() action to app-store mirroring fetchCodexModels pattern
- Add state tracking: opencodeModelsLoading, opencodeModelsError, timestamps
- Call fetchOpencodeModels() in PhaseModelSelector useEffect on mount
- Use same caching strategy: 5min success cache, 30sec failure cooldown

Files changed:
- apps/ui/src/store/app-store.ts
  - Add OpenCode model loading state properties
  - Add fetchOpencodeModels action with error handling & caching
- apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx
  - Add opencodeModelsLoading, fetchOpencodeModels to store hook
  - Add useEffect to fetch OpenCode models on mount

Result:
- OpenRouter models now appear in Add/Edit Feature dialogs immediately
- No need to visit settings page first
- Consistent with Codex model loading behavior
2026-01-14 19:46:43 +05:30
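A sketch of the caching strategy described above (5-minute success cache, 30-second failure cooldown). The state shape and function name are assumptions; the real action lives in apps/ui/src/store/app-store.ts.

```ts
const SUCCESS_CACHE_MS = 5 * 60 * 1000; // 5 minutes
const FAILURE_COOLDOWN_MS = 30 * 1000; // 30 seconds

interface ModelFetchState {
  models: string[];
  lastFetchedAt: number | null;
  lastFailedAt: number | null;
}

async function fetchModelsWithCooldown(
  state: ModelFetchState,
  fetchModels: () => Promise<string[]>
): Promise<ModelFetchState> {
  const now = Date.now();
  if (state.lastFetchedAt !== null && now - state.lastFetchedAt < SUCCESS_CACHE_MS) {
    return state; // recent success: reuse cached models
  }
  if (state.lastFailedAt !== null && now - state.lastFailedAt < FAILURE_COOLDOWN_MS) {
    return state; // recent failure: back off before retrying
  }
  try {
    const models = await fetchModels();
    return { models, lastFetchedAt: now, lastFailedAt: null };
  } catch {
    return { ...state, lastFailedAt: now };
  }
}
```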
Web Dev Cody
28d8a4cc9e Merge pull request #473 from thesobercoder/fix/npm-cache-permissions
fix: ensure npm cache directory has correct permissions
2026-01-14 09:16:30 -05:00
Web Dev Cody
7108cdd2ca Merge pull request #480 from thesobercoder/fix/cli-provider-system-prompt
fix: embed systemPrompt into prompt for CLI-based providers
2026-01-14 09:16:01 -05:00
Soham Dasgupta
e7bfb19203 fix: embed systemPrompt into prompt for CLI-based providers
CLI-based providers (OpenCode, etc.) only accept a single prompt via
stdin/args and don't support separate system/user message channels like
Claude SDK. When systemPrompt is passed to these providers, it was
silently dropped, causing:

- BacklogPlan JSON parsing failures with OpenCode/GPT-5.2 (missing
  "output ONLY JSON" formatting instruction)
- Loss of critical formatting/schema instructions for structured outputs

This fix adds embedSystemPromptIntoPrompt() method to CliProvider base
class that:
- Prepends systemPrompt to the user prompt before CLI execution
- Handles both string and array prompts (vision support)
- Handles both string systemPrompt and SystemPromptPreset objects
- Uses standard \n\n---\n\n separator (consistent with codebase)
- Sets systemPrompt to undefined to prevent double-injection

Benefits OpencodeProvider immediately (uses base executeQuery).
CursorProvider still uses manual workarounds (overrides executeQuery).

Fixes the immediate BacklogPlan + OpenCode bug while maintaining
backward compatibility with existing Cursor workarounds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 19:20:47 +05:30
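A minimal sketch of the embedding described above, assuming string prompts and the `\n\n---\n\n` separator; the real method also handles array prompts (vision) and SystemPromptPreset objects.

```ts
interface CliQueryOptions {
  prompt: string;
  systemPrompt?: string;
}

function embedSystemPromptIntoPrompt(options: CliQueryOptions): CliQueryOptions {
  if (!options.systemPrompt) {
    return options;
  }
  return {
    // Prepend the system prompt so CLI providers that only accept a single prompt
    // still receive the formatting/schema instructions.
    prompt: `${options.systemPrompt}\n\n---\n\n${options.prompt}`,
    // Clear systemPrompt so it is not injected a second time downstream.
    systemPrompt: undefined,
  };
}
```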
Shirone
beac823472 Merge pull request #479 from AutoMaker-Org/fix/followup-run
fix: follow up prompts
2026-01-14 08:43:39 +00:00
Shirone
c7fac3d9e6 refactor: Replace worktreePath param with useWorktrees flag 2026-01-14 09:29:38 +01:00
Shirone
3689eb969d Merge pull request #478 from thesobercoder/chore/playwright-docker-prereqs
chore(docker): add Playwright Chromium prerequisites
2026-01-14 07:30:24 +00:00
Soham Dasgupta
5e330b7691 chore(docker): add Playwright Chromium deps
- Add missing system libraries required by Playwright/Chromium in server and dev images
- Document optional Playwright browser cache volume in docker-compose.override.yml.example
2026-01-14 12:47:46 +05:30
webdevcody
5ec5fe82e6 refactor: Enhance project management features and UI components
- Updated create-pr.ts to improve commit error handling and logging.
- Enhanced project-switcher.tsx with new folder opening functionality and state management for project setup.
- Expanded icon-picker.tsx to include a comprehensive list of icons organized by category.
- Replaced dialog components with popover components for auto mode and plan settings, improving UI responsiveness.
- Refactored board-view components to streamline feature management and enhance user experience.
- Removed outdated dialog components and replaced them with popover alternatives for better accessibility.

These changes aim to improve the overall usability and functionality of the project management interface.
2026-01-13 22:35:45 -05:00
Shirone
ee13bf9a8f Merge pull request #476 from DenyCZ/fix/windows-npx
fix: Resolve windows npx spawn errors
2026-01-14 00:00:32 +00:00
Shirone
219af28afc Merge pull request #477 from AutoMaker-Org/fix/dynamic-branch-references
fix: use dynamic branch references instead of hardcoded origin/main
2026-01-13 23:59:49 +00:00
Shirone
b64025b134 fix: address pr comments 2026-01-14 00:57:30 +01:00
Shirone
51e4e8489a fix: use dynamic branch references instead of hardcoded origin/main
- Fix handleResolveConflicts to use origin/${worktree.branch} instead of
  hardcoded origin/main for pull and resolve conflicts
- Add defaultBaseBranch prop to CreatePRDialog to use selected branch
- Fix branchCardCounts to use primary worktree branch as default
- Enable PR status and Address PR Comments for main branch tab
- Add automatic PR detection from GitHub for branches without stored metadata

This allows users working on release branches (like v0.11.0rc) to properly
pull from their branch's remote and see PR status for any branch.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-14 00:48:09 +01:00
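An illustrative (assumed) version of the first bullet: pull from the worktree's own branch rather than a hardcoded origin/main.

```ts
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Previously this was effectively `git pull origin main` for every worktree.
async function pullWorktreeBranch(worktreePath: string, branch: string): Promise<void> {
  await execFileAsync('git', ['pull', 'origin', branch], { cwd: worktreePath });
}
```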
DenyCZ
bb70d04b88 fix: Resolve windows npx spawn errors 2026-01-14 00:31:04 +01:00
Shirone
32f6c6d6eb Merge pull request #474 from AutoMaker-Org/feat/dev-server-log-panel
feat: add dev server log panel with real-time streaming
2026-01-13 21:04:58 +00:00
Shirone
b6688e630e Merge branch 'v0.11.0rc' into feat/dev-server-log-panel
Resolved conflict in worktree-panel.tsx by combining imports:
- DevServerLogsPanel from this branch
- WorktreeMobileDropdown, WorktreeActionsDropdown, BranchSwitchDropdown from v0.11.0rc

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 22:01:44 +01:00
Shirone
073f6d5793 feat: add dev server log panel with real-time streaming
Add the ability to view dev server logs in a dedicated panel with:
- Real-time log streaming via WebSocket events
- ANSI color support using xterm.js
- Scrollback buffer (50KB) for log history on reconnect
- Output throttling to prevent UI flooding
- "View Logs" option in worktree dropdown menu

Server changes:
- Add scrollback buffer and event emission to DevServerService
- Add GET /api/worktree/dev-server-logs endpoint
- Add dev-server:started, dev-server:output, dev-server:stopped events

UI changes:
- Add reusable XtermLogViewer component
- Add DevServerLogsPanel dialog component
- Add useDevServerLogs hook for WebSocket subscription

Closes #462

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:56:35 +01:00
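A sketch of a bounded scrollback buffer like the one described above (50 KB cap), so reconnecting clients can be sent recent output; the names and exact eviction policy are assumptions.

```ts
const MAX_SCROLLBACK_BYTES = 50 * 1024;

class ScrollbackBuffer {
  private chunks: string[] = [];
  private totalBytes = 0;

  append(chunk: string): void {
    this.chunks.push(chunk);
    this.totalBytes += Buffer.byteLength(chunk, 'utf8');
    // Drop the oldest chunks until the buffer is back under the cap.
    while (this.totalBytes > MAX_SCROLLBACK_BYTES && this.chunks.length > 1) {
      const removed = this.chunks.shift()!;
      this.totalBytes -= Buffer.byteLength(removed, 'utf8');
    }
  }

  // Replayed to clients that reconnect to the log stream.
  snapshot(): string {
    return this.chunks.join('');
  }
}
```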
Shirone
9153b06f09 Merge pull request #449 from AutoMaker-Org/feat/mobile-improvements-contributor
feat: Mobile responsiveness improvements from community contributor
2026-01-13 20:48:47 +00:00
Shirone
6cb2af8757 test: update claude-usage-service tests for improved error handling and timeout management
- Modified command arguments in tests to include '--add-dir' for better context.
- Updated error messages for authentication and timeout scenarios to provide clearer guidance.
- Adjusted timer values in tests to align with implementation delays, ensuring accurate simulation of usage data retrieval.
2026-01-13 21:37:16 +01:00
Shirone
ca3b013a7b Merge v0.11.0rc into feat/mobile-improvements-contributor
Resolves merge conflicts by keeping both features:
- enableAiCommitMessages (from our branch)
- defaultFeatureModel (from v0.11.0rc)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 21:31:44 +01:00
Web Dev Cody
abde1ba40a Merge pull request #472 from AutoMaker-Org/claude/issue-469-20260113-1744
feat: Add Discord-like project switcher sidebar with icon support
2026-01-13 14:59:32 -05:00
webdevcody
b04659fb56 Merge branch 'v0.11.0rc' into claude/issue-469-20260113-1744 2026-01-13 14:59:14 -05:00
webdevcody
74ee30d5db refactor: improve code readability and maintainability in SDK options and icon picker
- Reformatted the fullAccess and chat tool presets in sdk-options.ts for better readability.
- Simplified the return statement in icon-picker.tsx for cleaner code.
- Removed the board-background-persistence.spec.ts test file as it is no longer needed.
2026-01-13 14:56:44 -05:00
webdevcody
a300466ca9 feat: enhance project management with custom icon support and UI improvements
- Introduced custom icon functionality for projects, allowing users to upload and manage their own icons.
- Updated Project and ProjectRef types to include customIconPath.
- Enhanced the ProjectSwitcher component to display custom icons alongside preset icons.
- Added EditProjectDialog for inline editing of project details, including icon uploads.
- Improved AppearanceSection to support custom icon uploads and display.
- Updated sidebar and project switcher UI for better user experience and accessibility.

Implements #469
2026-01-13 14:39:19 -05:00
DhanushSantosh
9311f2e62a Merge remote-tracking branch 'upstream/v0.11.0rc' into patchcraft 2026-01-14 00:55:46 +05:30
DhanushSantosh
67245158ea chore: ignore .codex directory 2026-01-14 00:55:22 +05:30
DhanushSantosh
520d9a945c chore: add workflow files to git tracking 2026-01-14 00:54:46 +05:30
DhanushSantosh
fa3ead0e8d feat(auto-mode): skip memory extraction when Claude not configured and add reasoning effort support
- Skip learning extraction when ANTHROPIC_API_KEY is not available
- Add reasoningEffort parameter to simpleQuery for Codex model configuration
- Add stdinData support to spawnProcess for CLI stdin input
- Update UI API types for model override with reasoning support
2026-01-14 00:50:33 +05:30
DhanushSantosh
253ab94646 feat(github): add Codex/OpenCode model support for issue validation
- Support Codex and OpenCode models in issue validation
- Add reasoningEffort parameter for Codex model configuration
- Update validation logic to use structured output for Claude/Codex
- Update UI hooks and types for multi-provider model selection
2026-01-14 00:50:02 +05:30
DhanushSantosh
fbb3f697e1 feat(settings): add OpenAI/Google API key support and unified ModelId type
- Add OpenAI API key storage to store-api-key handler
- Include Google/OpenAI key status in credentials API responses
- Add unified ModelId type for Claude, Codex, Cursor, OpenCode, and dynamic providers
- Update PhaseModelEntry to support all provider model types
2026-01-14 00:49:35 +05:30
Soham Dasgupta
1a1517dffb fix: ensure npm cache directory has correct permissions
Fix EACCES permission error when running npx commands (e.g., MCP servers)
inside the Docker container.

Error that was occurring:
  npm error code EACCES
  npm error syscall mkdir
  npm error path /home/automaker/.npm/_cacache/index-v5/1f/fc
  npm error errno EACCES
  npm error Your cache folder contains root-owned files, due to a bug in
  npm error previous versions of npm which has since been addressed.

The fix ensures the /home/automaker/.npm directory exists and has correct
ownership before switching to the automaker user in the entrypoint script.
2026-01-14 00:49:28 +05:30
DhanushSantosh
690cf1f281 fix(codex-provider): use SDK mode when API key is present to avoid OAuth failures
When an OpenAI API key is stored in settings or environment, use SDK mode
instead of CLI mode. This bypasses the MCP transport layer which was
failing with 'TokenRefreshFailed' errors due to OAuth token issues.

The SDK uses the API key directly via @openai/codex-sdk, avoiding the
OAuth token refresh mechanism that was causing mid-execution failures.
2026-01-14 00:45:01 +05:30
Shirone
6f55da46ac Merge pull request #471 from AutoMaker-Org/fix/dev-server-url
fix: use browser hostname for dev server URLs instead of localhost
2026-01-13 18:46:17 +00:00
webdevcody
57453966ac Merge branch 'v0.11.0rc' of github.com:AutoMaker-Org/automaker into v0.11.0rc 2026-01-13 13:45:35 -05:00
webdevcody
298acc9f89 style: add overflow-y-auto class to dialog content containers
- Updated the BacklogPlanDialog, AddEditServerDialog, CreateSpecDialog, and RegenerateSpecDialog components to include the overflow-y-auto class for improved scrolling behavior in dialog content.
2026-01-13 13:45:29 -05:00
Shirone
f4390bc82f security: add noopener,noreferrer to window.open calls
Add 'noopener,noreferrer' parameter to all window.open() calls with
target='_blank' to prevent tabnabbing attacks. This prevents the newly
opened page from accessing window.opener, protecting against potential
security vulnerabilities.

Affected files:
- use-dev-servers.ts: Dev server URL links
- worktree-actions-dropdown.tsx: PR URL links
- create-pr-dialog.tsx: PR creation and browser fallback links

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 19:43:20 +01:00
Shirone
62af2031f6 feat: enhance dev server URL handling and improve accessibility
- Added URL and URLSearchParams as readonly globals in ESLint configuration.
- Updated WorktreeActionsDropdown and WorktreeTab components to include aria-labels for better accessibility.
- Implemented error handling for dev server URL opening, ensuring only valid HTTP/HTTPS protocols are used and providing user feedback for errors.

These changes improve user experience and accessibility when interacting with the dev server functionality.
2026-01-13 19:33:09 +01:00
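A rough sketch combining this commit's protocol check with the noopener,noreferrer hardening from the security commit above; error reporting via console.error is a stand-in for the project's user feedback.

```ts
function openDevServerUrl(rawUrl: string): void {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    console.error(`Invalid dev server URL: ${rawUrl}`);
    return;
  }
  // Only allow HTTP/HTTPS targets.
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    console.error(`Refusing to open non-HTTP(S) URL: ${rawUrl}`);
    return;
  }
  // 'noopener,noreferrer' severs window.opener to prevent tabnabbing.
  window.open(url.toString(), '_blank', 'noopener,noreferrer');
}
```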
claude[bot]
0ddd672e0e feat: Add Discord-like project switcher sidebar with icon support
- Add project icon field to ProjectRef and Project types
- Create vertical project switcher sidebar component
  - Project icons with hover tooltips
  - Active project highlighting
  - Plus button to create new projects
  - Right-click context menu for edit/delete
- Add IconPicker component with 35+ Lucide icons
- Add EditProjectDialog for inline project editing
- Update settings appearance section with project details editor
- Add setProjectIcon and setProjectName actions to app store
- Integrate ProjectSwitcher in root layout (shows on app pages only)

Implements #469

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-13 17:53:15 +00:00
Shirone
7ef525effa fix: clarify comments on branch name handling in BoardView
Updated comments in BoardView to better explain the behavior of the 'current' work mode. The changes specify that an empty string clears the branch assignment, allowing work to proceed on the main/current branch. This enhances code readability and understanding of branch management logic.
2026-01-13 18:51:20 +01:00
Shirone
2303dcd133 Merge pull request #468 from AutoMaker-Org/feat/add-branch-name-to-mass-edit
feat: add branch/worktree support to mass edit dialog
2026-01-13 17:43:25 +00:00
Shirone
cc4f39a6ab chore: fix formatting issues for CI
Fix Prettier formatting in two files:
- apps/server/src/lib/sdk-options.ts: Split long arrays to one item per line
- docs/docker-isolation.md: Align markdown table columns

Resolves CI format check failures.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 18:38:09 +01:00
Shirone
d4076ad0ce refactor: address CodeRabbit PR feedback
Improvements based on CodeRabbit review comments:

1. Use getPrimaryWorktreeBranch for consistent branch detection
   - Replace hardcoded 'main' fallback with getPrimaryWorktreeBranch()
   - Ensures auto-generated branch names respect the repo's actual primary branch
   - Handles repos using 'master' or other primary branch names

2. Extract worktree auto-selection logic to helper function
   - Create addAndSelectWorktree helper to eliminate code duplication
   - Use helper in both onWorktreeAutoSelect and handleBulkUpdate
   - Reduces maintenance burden and ensures consistent behavior

These changes improve code consistency and maintainability without affecting functionality.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 18:37:26 +01:00
Shirone
3bd8626d48 feat: add branch/worktree support to mass edit dialog
Implement worktree creation and branch assignment in the mass edit dialog to match the functionality of the add-feature and edit-feature dialogs.

Changes:
- Add WorkModeSelector to mass-edit-dialog.tsx with three modes:
  - 'Current Branch': Work on current branch (no worktree)
  - 'Auto Worktree': Auto-generate branch name and create worktree
  - 'Custom Branch': Use specified branch name and create worktree
- Update handleBulkUpdate in board-view.tsx to:
  - Accept workMode parameter
  - Create worktrees for 'auto' and 'custom' modes
  - Auto-select created worktrees in the board header
  - Handle branch name generation for 'auto' mode
- Add necessary props to MassEditDialog (branchSuggestions, branchCardCounts, currentBranch)

Users can now bulk-assign features to a branch and automatically create/select worktrees, enabling efficient project setup with many features.

Fixes #459

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-13 18:27:22 +01:00
webdevcody
ff5915dd20 refactor: update terminology in board view components
- Renamed "Worktrees" to "Worktree Bar" in the BoardHeader component for clarity.
- Updated comments and labels in AddFeatureDialog, PlanSettingsDialog, and WorktreeSettingsDialog to reflect the new terminology and improve user understanding of worktree mode functionality.
2026-01-13 12:19:24 -05:00
webdevcody
8500f71565 Merge branches 'v0.11.0rc' and 'v0.11.0rc' of github.com:AutoMaker-Org/automaker into v0.11.0rc 2026-01-13 12:05:57 -05:00
Shirone
81bab1d8ab Merge pull request #465 from thesobercoder/fix/opencode-cache-volume
fix: add OpenCode cache volume for version file persistence
2026-01-13 15:22:52 +00:00
Soham Dasgupta
24a6633322 fix: add OpenCode cache volume for version file persistence
OpenCode stores a version file in ~/.cache/opencode/ which was causing
EACCES permission errors. This adds:

- Volume mount for ~/.cache/opencode
- Entrypoint script to set correct ownership/permissions on the cache directory
2026-01-13 20:45:33 +05:30
DhanushSantosh
f073f6ecc3 Merge remote-tracking branch 'upstream/v0.11.0rc' into patchcraft 2026-01-13 20:28:49 +05:30
Web Dev Cody
2870ddb223 Merge pull request #460 from thesobercoder/feature/opencode-docker-support
feat: add OpenCode CLI support in Docker
2026-01-13 09:35:28 -05:00
Web Dev Cody
1578d02e70 Merge branch 'v0.11.0rc' into feature/opencode-docker-support 2026-01-13 09:35:18 -05:00
webdevcody
bb710ada1a feat: enhance settings view and feature defaults management
- Introduced default feature model settings in the settings view, allowing users to specify the default AI model for new feature cards.
- Updated navigation to include a direct link to model defaults in the settings menu.
- Enhanced the Add Feature dialog to utilize the default feature model from the app store.
- Implemented synchronization of the default feature model in settings migration and sync hooks.
- Improved UI components to reflect changes in default settings, ensuring a cohesive user experience.
2026-01-13 20:04:36 +05:30
Soham Dasgupta
33ae860059 feat: update Docker volumes for OpenCode CLI data and user configuration 2026-01-13 20:01:22 +05:30
webdevcody
3de6d58af3 Merge branch 'v0.11.0rc' of github.com:AutoMaker-Org/automaker into v0.11.0rc 2026-01-13 09:30:20 -05:00
webdevcody
c8e66a866e feat: enhance settings view and feature defaults management
- Introduced default feature model settings in the settings view, allowing users to specify the default AI model for new feature cards.
- Updated navigation to include a direct link to model defaults in the settings menu.
- Enhanced the Add Feature dialog to utilize the default feature model from the app store.
- Implemented synchronization of the default feature model in settings migration and sync hooks.
- Improved UI components to reflect changes in default settings, ensuring a cohesive user experience.
2026-01-13 09:30:15 -05:00
DhanushSantosh
c25efdc0d8 Revert "Wire provider model enablement into selectors"
This reverts commit 8f1740c0f5.
2026-01-13 19:54:20 +05:30
DhanushSantosh
bde82492ae Revert "Sync branch state"
This reverts commit 6704293cb1.
2026-01-13 19:54:05 +05:30
Soham Dasgupta
67f18021c3 feat: add OpenCode CLI config path to Docker example 2026-01-13 19:50:54 +05:30
DhanushSantosh
6704293cb1 Sync branch state 2026-01-13 16:58:20 +05:30
DhanushSantosh
8f1740c0f5 Wire provider model enablement into selectors 2026-01-13 16:53:17 +05:30
Soham Dasgupta
62019d5916 feat: add OpenCode CLI support in Docker
- Install OpenCode CLI in Dockerfile alongside Claude and Cursor
- Add automaker-opencode-config volume for persisting auth
- Add OpenCode directory setup in docker-entrypoint.sh
- Update docker-isolation.md with OpenCode documentation
- Add OpenCode bind mount example to docker-compose.override.yml.example
2026-01-13 14:14:56 +05:30
DhanushSantosh
e66283b1d6 Merge remote-tracking branch 'upstream/main' into patchcraft 2026-01-13 13:45:17 +05:30
Web Dev Cody
a0d6d76626 Merge pull request #448 from comzine/fix/docker-uid-gid-configurable
fix: make Docker container UID/GID configurable
2026-01-13 00:00:47 -05:00
Web Dev Cody
c2f5c07038 Merge pull request #344 from casiusss/fix/pipeline-resume-edge-cases
fix: handle pipeline resume edge cases and improve robustness
2026-01-12 23:57:27 -05:00
webdevcody
419abf88dd Merge branch 'v0.11.0rc' into fix/pipeline-resume-edge-cases 2026-01-12 23:49:33 -05:00
Web Dev Cody
b7596617ed Merge pull request #455 from stefandevo/fix/codex-infinite-loop
fix(codex): prevent infinite loop when fetching models on settings screen
2026-01-12 23:46:02 -05:00
Web Dev Cody
26da99e834 Merge pull request #454 from AutoMaker-Org/abstract-anthropic-sdk
feat: implement simple query service and enhance provider abstraction
2026-01-12 23:40:42 -05:00
webdevcody
2b33a0d322 refactor: integrate simple query service into auto mode
- Replaced dynamic import of the query function with a call to the new Simple Query Service for improved clarity and maintainability.
- Streamlined the response handling by directly utilizing the result from the simple query, enhancing code readability.
- Updated the prompt and options structure to align with the new service's requirements, ensuring consistent behavior in learning extraction.
2026-01-12 21:39:39 -05:00
webdevcody
c796adbae8 test: update project view tests for dashboard integration
- Modified tests to navigate directly to the dashboard instead of the welcome view, ensuring a smoother project selection process.
- Updated project name verification to check against the sidebar button instead of multiple elements.
- Added logic to expand the sidebar if collapsed, improving visibility for project names during tests.
- Adjusted test assertions to reflect changes in the UI structure, including the introduction of the dashboard view.
2026-01-12 21:23:33 -05:00
Stefan de Vogelaere
18d82b1bb1 refactor: improve time constant readability
- Rename FAILURE_COOLDOWN to FAILURE_COOLDOWN_MS with explicit calculation
- Add SUCCESS_CACHE_MS constant to replace magic number 300000
- Use multiplication (30 * 1000, 5 * 60 * 1000) to make units explicit

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 00:53:39 +01:00
webdevcody
0c68fcc8c8 chore: update node-gyp dependency URL in package-lock.json
- Changed the resolved URL for the @electron/node-gyp module from SSH to HTTPS for improved accessibility and compatibility.
2026-01-12 18:51:03 -05:00
Stefan de Vogelaere
e4458b8222 fix(codex): prevent infinite loop when fetching models on settings screen
When Codex is not connected/authenticated, the /api/codex/models endpoint
returns 503. The fetchCodexModels function had no cooldown after failures,
causing infinite retries when navigating to the Settings screen.

Added codexModelsLastFailedAt state to track failed fetch attempts and
skip retries for 30 seconds after a failure. This prevents the infinite
loop while still allowing periodic retry attempts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 00:47:51 +01:00
webdevcody
eb8ebe3ce0 feat: implement simple query service and enhance provider abstraction
- Introduced a new Simple Query Service to streamline basic AI queries, allowing for structured JSON outputs.
- Updated existing routes to utilize the new service, replacing direct SDK calls with a unified interface for querying.
- Enhanced provider handling in various routes, including generate-spec, generate-features-from-spec, and validate-issue, to support both Claude and Cursor models seamlessly.
- Added structured output support for improved response handling and error management across the application.
2026-01-12 17:33:54 -05:00
Web Dev Cody
0dc70addb6 Merge pull request #424 from comzine/fix/add-todowrite-to-allowed-tools
fix: add TodoWrite to allowed tools in SDK presets
2026-01-12 16:30:56 -05:00
webdevcody
f3f5d05349 chore: release v0.10.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 16:16:44 -05:00
Web Dev Cody
0c4b833b07 Merge pull request #405 from AutoMaker-Org/v0.10.0rc
V0.10.0rc
2026-01-12 16:11:56 -05:00
Shirone
029c5ca855 fix: address pr comments 2026-01-12 22:03:14 +01:00
Shirone
1f270edbe1 refactor(list-row): remove priority display logic and related components
- Eliminated the getPriorityDisplay function and the PriorityBadge component from the ListRow implementation.
- Removed the pipelineConfig prop from ListRowProps interface.
- Cleaned up the code to streamline the ListRow component, focusing on essential features.
2026-01-12 21:46:04 +01:00
Shirone
47c188d8f9 fix: adress pr comments
- Added validation to check if the specified worktree path exists before generating commit messages.
- Implemented a check to ensure the worktree path is a valid git repository by verifying the presence of the .git directory.
- Improved error handling by returning appropriate responses for invalid paths and non-git repositories.
2026-01-12 21:41:55 +01:00
Shirone
cca4638b71 fix: adjust pr comments 2026-01-12 21:21:24 +01:00
Shirone
19c12b7813 refactor(settings): remove deprecated notification settings from GlobalSettings interface
- Eliminated unused properties related to notification commands and ntfy.sh integration from the GlobalSettings interface.
- Updated default global settings to reflect the removal of these properties.
2026-01-12 20:58:58 +01:00
Shirone
0261ec2892 feat(prompts): implement customizable commit message prompts
- Added a new section in the UI for customizing commit message prompts.
- Integrated a system prompt for AI-generated commit messages, allowing users to define their own instructions.
- Updated the backend to merge custom prompts with default settings for commit message generation.
- Enhanced the commit message generation logic to utilize the effective system prompt based on user settings.
2026-01-12 20:55:01 +01:00
Shirone
5e4f5f86cd feat(worktree): add AI commit message generation feature
- Implemented a new endpoint to generate commit messages based on git diffs.
- Updated worktree routes to include the AI commit message generation functionality.
- Enhanced the UI to support automatic generation of commit messages when the commit dialog opens, based on user settings.
- Added settings for enabling/disabling AI-generated commit messages and configuring the model used for generation.
2026-01-12 20:55:01 +01:00
DhanushSantosh
fbab1d323f test: align app-spec and enhancement mode tests 2026-01-13 00:11:11 +05:30
eclipxe
8b19266c9a feat: Add secondary inline actions for waiting_approval status 2026-01-12 19:39:12 +01:00
anonymous
1b9d194dd1 feat: Improve mobile scrolling experience in autocomplete and dropdown components 2026-01-12 19:39:12 +01:00
anonymous
74c793b6c6 Add branch switch to mobile worktree panel 2026-01-12 19:39:12 +01:00
anonymous
d1222268c3 feat: Improve Claude CLI usage detection, mobile usage view, and add provider auth initialization 2026-01-12 19:39:12 +01:00
anonymous
df7a0f8687 feat: Make input controls and settings responsive for mobile devices 2026-01-12 19:37:36 +01:00
anonymous
c7def000df feat: Handle backlog feature editing on row click in board view 2026-01-12 19:37:35 +01:00
anonymous
e2394244f6 feat: Implement responsive mobile header layout with menu consolidation 2026-01-12 19:37:35 +01:00
anonymous
007830ec74 feat: Add responsive session manager with mobile backdrop overlay 2026-01-12 19:33:33 +01:00
anonymous
f721eb7152 List View Features 2026-01-12 19:33:33 +01:00
anonymous
e56db2362c feat: Add AI-generated commit messages
Integrate Claude Haiku to automatically generate commit messages when
committing worktree changes. Shows a sparkle animation while generating
and auto-populates the commit message field.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 19:32:20 +01:00
anonymous
d2c7a9e05d Fix model selector on mobile 2026-01-12 19:32:20 +01:00
anonymous
acce06b304 Fix sidebar labels not showing up 2026-01-12 19:32:20 +01:00
anonymous
4ab54270db fix: enable sidebar expand and project switching on mobile
- Sidebar now uses overlay pattern on mobile (fixed position when open)
- Added backdrop overlay that dismisses sidebar on tap
- Made collapse toggle button visible on all screen sizes
- Made project options menu visible on all screen sizes

Previously the sidebar was forced to collapsed width (w-16) on mobile
even when sidebarOpen was true, and the toggle/options buttons were
hidden with `hidden lg:flex`.
2026-01-12 19:31:38 +01:00
Shirone
f50520c93f feat(delete): enhance branch deletion handling and validation
- Introduced a flag to track if a branch was successfully deleted, improving response clarity.
- Updated the response structure to include the new branchDeleted flag.
- Enhanced projectPath validation in init-script to ensure it is a non-empty string before processing.
2026-01-12 19:21:37 +01:00
Dhanush Santosh
cebf57ffd3 Merge pull request #426 from stefandevo/opencode-dynamic-providers
feat: add dynamic model discovery and routing for OpenCode provider
2026-01-12 23:51:06 +05:30
DhanushSantosh
6020219fda fix(opencode): address review feedback 2026-01-12 23:44:21 +05:30
DhanushSantosh
8094941385 feat(opencode): persist dynamic model selection 2026-01-12 23:44:21 +05:30
DhanushSantosh
9ce3cfee7d feat(opencode): drop bedrock defaults 2026-01-12 23:44:05 +05:30
DhanushSantosh
6184440441 fix(ui): tie dynamic models to connected providers 2026-01-12 23:42:38 +05:30
DhanushSantosh
0cff4cf510 feat(ui): add OpenRouter icon 2026-01-12 23:42:28 +05:30
DhanushSantosh
b152f119c5 fix(ui): refresh OpenCode models on new providers 2026-01-12 23:42:27 +05:30
DhanushSantosh
9f936c6968 fix(opencode): parse api-key provider models 2026-01-12 23:42:12 +05:30
Stefan de Vogelaere
b8531cf7e8 fix: add OpenCode settings to migration for persistence
Add enabledOpencodeModels and opencodeDefaultModel to the settings
migration to ensure they are properly persisted like Cursor settings.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:41:40 +05:30
Stefan de Vogelaere
edcc4e789b fix: address CodeRabbitAI review feedback
- Replace busy-wait loop in refreshModels with Promise-based approach
- Remove duplicate error logging in opencode-models.ts handlers
- Fix multi-slash parsing in provider-icon.tsx (only handle exactly one slash)
- Use dynamic icon resolution for selected OpenCode model in trigger
- Fix misleading comment about merge precedence (static takes precedence)
- Add enabledOpencodeModels and opencodeDefaultModel to settings sync
- Add clarifying comments about session-only dynamic model settings

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:41:26 +05:30
Stefan de Vogelaere
20cc401238 fix: update enhancement test to include ux-reviewer mode
Test expected 4 enhancement modes but there are now 5 after adding
the ux-reviewer mode.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:41:14 +05:30
Stefan de Vogelaere
70204a2d36 fix: address code review feedback from gemini-code-assist
- Convert execFileSync to async execFile in fetchModelsFromCli and
  fetchAuthenticatedProviders to avoid blocking the event loop
- Remove unused opencode-dynamic-providers.tsx component
- Use regex for more robust model ID validation in parseModelsOutput

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:40:57 +05:30
Stefan de Vogelaere
e38325c27f fix: improve dynamic model icons and fix React reference
- Add icon detection for dynamic OpenCode provider models (provider/model format)
- Support zai-coding-plan, github-copilot, google, xai, and other providers
- Detect model type from name (glm, claude, gpt, gemini, grok, etc.)
- Fix React.useMemo → useMemo to resolve "React is not defined" error

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:40:28 +05:30
Stefan de Vogelaere
5e4b422315 fix: improve OpenCode error handling and message extraction
- Update error event interface to handle nested error objects with
  name/data/message structure from OpenCode CLI
- Extract meaningful error messages from provider errors in normalizeEvent
- Add error type handling in executeWithProvider to throw errors with
  actual provider messages instead of returning empty response

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:40:13 +05:30
Stefan de Vogelaere
6c5206daf4 feat: add dynamic model discovery and routing for OpenCode provider
- Update isOpencodeModel() to detect dynamic models with provider/model format
  (e.g., github-copilot/gpt-4o, google/gemini-2.5-pro, zai-coding-plan/glm-4.7)
- Update resolveModelString() to recognize and pass through OpenCode models
- Update enhance route to route OpenCode models to OpenCode provider
- Fix OpenCode CLI command format: use --format json (not stream-json)
- Remove unsupported -q and - flags from CLI arguments
- Update normalizeEvent() to handle actual OpenCode JSON event format
- Add dynamic model configuration UI with provider grouping
- Cache providers and models in app store for snappier navigation
- Show authenticated providers in OpenCode CLI status card

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 23:39:38 +05:30
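A hedged sketch of recognizing dynamic OpenCode model ids in provider/model form (e.g. github-copilot/gpt-4o, google/gemini-2.5-pro); the actual rule in isOpencodeModel() may be stricter.

```ts
// Treat ids with exactly one slash as dynamic provider/model pairs.
function isDynamicOpencodeModel(modelId: string): boolean {
  const parts = modelId.split('/');
  return parts.length === 2 && parts[0].length > 0 && parts[1].length > 0;
}

// isDynamicOpencodeModel('github-copilot/gpt-4o')   -> true
// isDynamicOpencodeModel('zai-coding-plan/glm-4.7') -> true
// isDynamicOpencodeModel('claude-sonnet-4-5')       -> false
```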
Shirone
ed65f70315 Merge pull request #409 from AutoMaker-Org/feat/worktrees-init-script
feat: worktrees init script
2026-01-12 17:52:31 +00:00
Shirone
f41a42010c fix: address pr comments 2026-01-12 18:41:56 +01:00
Tobias Weber
aa8caeaeb0 fix: make Docker container UID/GID configurable
Add UID and GID build arguments to Dockerfiles to allow matching the
container user to the host user. This fixes file permission issues when
mounting host directories as volumes.

Default remains 1001 for backward compatibility. To match host user:
  UID=$(id -u) GID=$(id -g) docker-compose build

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 16:14:56 +01:00
Kacper
a0669d4262 feat(board-view): enhance feature and plan dialogs with worktree branch settings
- Added WorktreeSettingsDialog and PlanSettingsDialog components to manage worktree branch settings.
- Integrated new settings into BoardHeader for toggling worktree branch usage in feature creation.
- Updated AddFeatureDialog to utilize selected worktree branch for custom mode.
- Introduced new state management in app-store for handling worktree branch preferences.

These changes improve user control over feature creation workflows by allowing branch selection based on the current worktree context.
2026-01-11 23:05:32 +01:00
Shirone
a4a792c6b1 Merge pull request #416 from AutoMaker-Org/feat/emtpy-columns-enhancments
feat: add empty state card component and integrate AI suggestion func…
2026-01-11 21:37:17 +00:00
Shirone
6842e4c7f7 refactor: simplify EmptyStateCard and update empty state configurations
- Removed unused properties and state management from the EmptyStateCard component for cleaner code.
- Updated the EMPTY_STATE_CONFIGS to remove exampleCard entries, streamlining the empty state configuration.
- Enhanced the primary action handling in the EmptyStateCard for improved functionality.
2026-01-11 22:35:25 +01:00
webdevcody
6638c35945 refactor(sidebar): enhance sidebar responsiveness and improve layout
- Updated sidebar component to include a mobile overlay backdrop when open.
- Adjusted visibility of logo and footer elements based on sidebar state.
- Improved layout and spacing for various components within the sidebar for better usability on different screen sizes.
- Refined styles for buttons and project selectors to enhance visual consistency and responsiveness.
2026-01-11 16:02:25 -05:00
Kacper
53f5c2b2bb feat(backlog): add branchName support to apply handler and UI components
- Updated apply handler to accept an optional branchName from the request body.
- Modified BoardView and BacklogPlanDialog components to pass currentBranch to the apply API.
- Enhanced ElectronAPI and HttpApiClient to include branchName in the apply method.

This change allows users to specify a branch when applying backlog plans, improving flexibility in feature management.
2026-01-11 20:52:07 +01:00
Kacper
6e13cdd516 Merge branch: resolve conflict in worktree-actions-dropdown.tsx 2026-01-11 20:08:19 +01:00
Kacper
a48c67d271 refactor: update EmptyStateCard component for improved layout and functionality
- Removed unused props and adjusted styles for a more compact and centered design.
- Enhanced the display of the icon, title, and description for better visibility.
- Updated keyboard shortcut hint and AI suggestion action for improved user interaction.
- Refined dismiss/minimize controls to appear on hover, enhancing the user experience.
2026-01-11 19:59:01 +01:00
Shirone
43fc3de2e1 Merge pull request #423 from stefandevo/main
feat: add default IDE setting and multi-editor support with icons
2026-01-11 18:36:12 +00:00
Kacper
80081b60bf fix(platform): remove logger import to avoid circular dependency
Replace createLogger with console.warn to prevent circular import
between @automaker/platform and @automaker/utils.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 19:34:29 +01:00
Kacper
cbca9b68e6 fix: correct Kiro CLI command typo (kido -> kiro)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 19:25:26 +01:00
Shirone
b9b3695497 feat(platform): add VS Code Insiders and Kiro editor support
Added support for two new editors:
- VS Code Insiders (code-insiders command)
- Kiro (kido command) - VS Code fork

Changes:
- Added editor definitions to SUPPORTED_EDITORS list
- Added VSCodeInsidersIcon (reuses VS Code icon)
- Added KiroIcon with custom SVG logo
- Updated getEditorIcon() to handle both new commands
- Fixed logger initialization to be lazy-loaded, preventing circular
  dependency error with isBrowser variable during module initialization

Both editors were tested and successfully open directories on macOS.
2026-01-11 19:14:44 +01:00
Shirone
1b9acb1395 fix(platform): verify full Xcode installation for xed command
The xed command requires full Xcode.app, not just Command Line Tools.
This fix adds validation to ensure Xcode is properly configured before
offering it as an editor option.

Changes:
- Added isXcodeFullyInstalled() to check xcode-select points to Xcode.app
- Added helpful warning when Xcode is installed but xcode-select points to CLT
- Users see clear instructions on how to fix the configuration

Fixes issue where xed would fail with "tool 'xed' requires Xcode" error
when only Command Line Tools are configured via xcode-select.
2026-01-11 19:04:39 +01:00
DhanushSantosh
01cf81a105 fix(platform): detect Antigravity CLI aliases 2026-01-11 23:22:13 +05:30
Tobias Weber
6381ecaa37 fix: add TodoWrite to allowed tools in SDK presets
The TodoWrite tool was missing from the fullAccess and chat tool
presets, causing the Claude Agent SDK to crash with exit code 1
when the agent attempted to use it for task tracking.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 18:33:11 +01:00
Kacper
6d267ce0fa feat(platform): add cross-platform editor utilities and refresh functionality
- Add libs/platform/src/editor.ts with cross-platform editor detection and launching
  - Handles Windows .cmd batch scripts (cursor.cmd, code.cmd, etc.)
  - Supports macOS app bundles in /Applications and ~/Applications
  - Includes caching with 5-minute TTL for performance
- Refactor open-in-editor.ts to use @automaker/platform utilities
- Add POST /api/worktree/refresh-editors endpoint to clear cache
- Add refresh button to Settings > Account for IDE selection
- Update useAvailableEditors hook with refresh() and isRefreshing

Fixes Windows issue where "Open in Editor" was falling back to Explorer
because execFile cannot run .cmd scripts without shell:true.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 18:08:09 +01:00
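A loose sketch of the 5-minute detection cache mentioned above; EditorInfo and detectAllEditors stand in for the real implementations in @automaker/platform.

```ts
interface EditorInfo {
  command: string;
  displayName: string;
}

const CACHE_TTL_MS = 5 * 60 * 1000;
let cachedEditors: EditorInfo[] | null = null;
let cachedAt = 0;

async function getAvailableEditors(
  detectAllEditors: () => Promise<EditorInfo[]>
): Promise<EditorInfo[]> {
  const now = Date.now();
  if (cachedEditors && now - cachedAt < CACHE_TTL_MS) {
    return cachedEditors; // reuse detection results within the TTL
  }
  cachedEditors = await detectAllEditors();
  cachedAt = now;
  return cachedEditors;
}

// The refresh endpoint described above would simply reset the cache:
function clearEditorCache(): void {
  cachedEditors = null;
  cachedAt = 0;
}
```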
Kacper
8b0b565282 Merge remote-tracking branch 'origin/v0.10.0rc' into stefandevo/main 2026-01-11 17:34:19 +01:00
DhanushSantosh
a046d1232e Merge remote-tracking branch 'upstream/v0.10.0rc' into feature/codex-cli 2026-01-11 21:59:04 +05:30
DhanushSantosh
d724e782dd fix(ui): restore startup project context 2026-01-11 21:58:36 +05:30
Shirone
a266d85ecd Merge pull request #421 from AutoMaker-Org/refactor/extract-enhance-with-ai-shared-components
refactor: extract Enhance with AI into shared components
2026-01-11 16:22:18 +00:00
Kacper
a4a111fad0 feat: add pre-enhancement description tracking for feature updates
- Introduced a new parameter `preEnhancementDescription` to capture the original description before enhancements.
- Updated the `update` method in `FeatureLoader` to handle the new parameter and maintain a history of original descriptions.
- Enhanced UI components to support tracking and restoring pre-enhancement descriptions across various dialogs.
- Improved history management in `AddFeatureDialog`, `EditFeatureDialog`, and `FollowUpDialog` to include original text for better user experience.

This change enhances the ability to revert to previous descriptions, improving the overall functionality of the feature enhancement process.
2026-01-11 17:19:39 +01:00
Stefan de Vogelaere
2a98de85a8 fix: improve cache management and editor fallback handling
Cache management improvements:
- Remove separate cachedEditor variable; derive default from cachedEditors
- Update isCacheValid() to check cachedEditors existence
- detectDefaultEditor() now always goes through detectAllEditors()
  to ensure cache TTL is respected consistently

Editor fallback improvements:
- Log warning when requested editorCommand is not found in available editors
- Include list of available editor commands in warning message
- Make fallback to default editor explicit rather than silent

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 17:08:10 +01:00
Stefan de Vogelaere
fb3a8499f3 fix: address CodeRabbitAI security and UX review comments
Security improvements in open-in-editor.ts:
- Use execFile with argument arrays instead of shell interpolation
  in commandExists() to prevent command injection
- Replace shell `test -d` commands with Node.js fs/promises access()
  in findMacApp() for safer file system checks
- Add cache TTL (5 minutes) for editor detection to prevent stale data

UX improvements in worktree-actions-dropdown.tsx:
- Add error handling for clipboard copy operation
- Show success toast when path is copied
- Show error toast if clipboard write fails

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:55:25 +01:00
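A sketch of the safer checks described above: execFile with an argument array (no shell interpolation) and fs/promises access() instead of shelling out to `test -d`. The function bodies are assumptions; `which` is used here for brevity and is not cross-platform.

```ts
import { execFile } from 'node:child_process';
import { access } from 'node:fs/promises';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// No string interpolation into a shell: the command name is passed as an argument.
async function commandExists(command: string): Promise<boolean> {
  try {
    await execFileAsync('which', [command]);
    return true;
  } catch {
    return false;
  }
}

// Check for a macOS app bundle without spawning `test -d`.
async function macAppExists(appPath: string): Promise<boolean> {
  try {
    await access(appPath);
    return true;
  } catch {
    return false;
  }
}
```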
Stefan de Vogelaere
33dd9ae347 fix: address nitpick feedback from PR #423
## Security Fix (Command Injection)
- Use `execFile` with argument arrays instead of string interpolation
- Add `safeOpenInEditor` helper that properly handles `open -a` commands
- Validate that worktreePath is an absolute path before execution
- Prevents shell metacharacter injection attacks

## Shared Type Definition
- Move `EditorInfo` interface to `@automaker/types` package
- Server and UI now import from shared package to prevent drift
- Re-export from use-available-editors.ts for convenience

## Remove Unused Code
- Remove unused `defaultEditorName` prop from WorktreeActionsDropdown
- Remove prop from WorktreeTab component interface
- Remove useDefaultEditor hook usage from WorktreePanel
- Export new hooks from hooks/index.ts

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:37:05 +01:00
Stefan de Vogelaere
ac87594b5d fix: address code review feedback from PR #423
Addresses feedback from gemini-code-assist and coderabbitai reviewers:

## Duplicate Code (High Priority)
- Extract `getEffectiveDefaultEditor` logic into shared `useEffectiveDefaultEditor` hook
- Both account-section.tsx and worktree-actions-dropdown.tsx now use the shared hook

## Performance (Medium Priority)
- Refactor `detectAllEditors` to use `Promise.all` for parallel editor detection
- Replace sequential `await tryAddEditor()` calls with parallel `findEditor()` checks

## Code Quality (Medium Priority)
- Remove verbose IIFE pattern for editor icon rendering
- Pre-compute icon components before JSX return statement

## Bug Fixes
- Use `os.homedir()` instead of `~` fallback which doesn't expand in shell
- Normalize Select value to 'auto' when saved editor command not found in editors
- Add defensive check for empty editors array in useEffectiveDefaultEditor
- Improve mock openInEditor to correctly map all editor commands to display names

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:28:31 +01:00
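A small sketch of the Promise.all refactor noted above, where findEditor is a hypothetical per-editor check that resolves to null when the editor is not installed.

```ts
interface DetectedEditor {
  command: string;
  displayName: string;
}

async function detectAllEditorsParallel(
  candidates: string[],
  findEditor: (command: string) => Promise<DetectedEditor | null>
): Promise<DetectedEditor[]> {
  // Run every per-editor check concurrently instead of awaiting them one by one.
  const results = await Promise.all(candidates.map((command) => findEditor(command)));
  return results.filter((editor): editor is DetectedEditor => editor !== null);
}
```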
Stefan de Vogelaere
32656a9662 feat: add default IDE setting and multi-editor support with icons
Add comprehensive editor detection and selection system that allows users
to configure their preferred IDE for opening branches and worktrees.

## Server-side Changes

- Add `/api/worktree/available-editors` endpoint to detect installed editors
- Support detection via CLI commands (cursor, code, zed, subl, etc.)
- Support detection via macOS app bundles in /Applications and ~/Applications
- Detect editors: Cursor, VS Code, Zed, Sublime Text, Windsurf, Trae,
  Rider, WebStorm, Xcode, Android Studio, Antigravity, and file managers

## UI Changes

### Editor Icons
- Add new `editor-icons.tsx` with SVG icons for all supported editors
- Icons: Cursor, VS Code, Zed, Sublime Text, Windsurf, Trae, Rider,
  WebStorm, Xcode, Android Studio, Antigravity, Finder
- `getEditorIcon()` helper maps editor commands to appropriate icons

### Default IDE Setting
- Add "Default IDE" selector in Settings > Account section
- Options: Auto-detect (Cursor > VS Code > first available) or explicit choice
- Setting persists via `defaultEditorCommand` in global settings

### Worktree Dropdown Improvements
- Implement split-button UX for "Open In" action
- Click main area: opens directly in default IDE (single click)
- Click chevron: shows submenu with other editors + Copy Path
- Each editor shows with its branded icon

## Type & Store Changes

- Add `defaultEditorCommand: string | null` to GlobalSettings
- Add to app-store with `setDefaultEditorCommand` action
- Add to SETTINGS_FIELDS_TO_SYNC for persistence
- Add `useAvailableEditors` hook for fetching detected editors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 16:17:05 +01:00
DhanushSantosh
785a4d2c3b fix: restore auth and auto-open last project 2026-01-11 20:43:55 +05:30
Shirone
41a6c7f712 fix: address second round of PR review feedback
- Add fallback for unknown enhancement modes in history button to prevent "Enhanced (undefined)" UI bug
- Move DescriptionHistoryEntry interface to top level in add-feature-dialog
- Import and use EnhancementMode type in edit-feature-dialog to eliminate hardcoded types
- Make FollowUpHistoryEntry extend BaseHistoryEntry for consistency

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 15:27:17 +01:00
Shirone
7e5d915b60 fix: address PR review feedback from Gemini Code Assist
Address three issues identified in code review:

1. Fix missing thinkingLevel parameter (Critical)
   - Added thinkingLevel parameter to enhance API call
   - Updated electron.ts type definition to match http-api-client
   - Fixes functional regression in Claude model enhancement

2. Refactor dropdown menu to use constants dynamically
   - Changed hardcoded DropdownMenuItem components to dynamic generation
   - Now iterates over ENHANCEMENT_MODE_LABELS object
   - Ensures automatic sync when new modes are added
   - Eliminates manual UI updates for new enhancement modes

3. Optimize array reversal performance
   - Added useMemo hook to memoize reversed history array
   - Prevents creating new array on every render
   - Improves performance with lengthy histories

All TypeScript errors resolved. Build verified.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 15:17:46 +01:00
Shirone
8321c06e16 refactor: extract Enhance with AI into shared components
Extract all "Enhance with AI" functionality into reusable shared components
following DRY principles and clean code guidelines.

Changes:
- Create shared/enhancement/ folder for related functionality
- Extract EnhanceWithAI component (AI enhancement with model override)
- Extract EnhancementHistoryButton component (version history UI)
- Extract enhancement constants and types
- Refactor add-feature-dialog.tsx to use shared components
- Refactor edit-feature-dialog.tsx to use shared components
- Refactor follow-up-dialog.tsx to use shared components
- Add history tracking to add-feature-dialog for consistency

Benefits:
- Eliminated ~527 lines of duplicated code
- Single source of truth for enhancement logic
- Consistent UX across all dialogs
- Easier maintenance and extensibility
- Better code organization

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 15:10:54 +01:00
Shirone
f60c18d31a Update apps/ui/src/components/views/board-view/constants.ts
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-11 12:08:33 +01:00
Shirone
e171b6a049 feat: add empty state card component and integrate AI suggestion functionality
- Introduced the EmptyStateCard component to display contextual guidance when columns are empty.
- Enhanced the KanbanBoard and BoardView components to utilize the new EmptyStateCard for improved user experience.
- Added AI suggestion functionality to the empty state configuration, allowing users to generate ideas directly from the backlog column.
- Updated constants to define empty state configurations for various column types.
2026-01-11 12:03:52 +01:00
Shirone
6e4b611662 refactor: optimize bulk delete handler and UI feedback
- Refactored the bulk delete handler to utilize Promise.all for concurrent deletion of features, improving performance and error handling.
- Updated the BoardView component to handle deletion results more effectively, providing user feedback for both successful and failed deletions.
- Enhanced local state management to avoid redundant API calls during feature deletion.
2026-01-11 11:17:28 +01:00
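A hedged sketch of the concurrent-deletion idea from the commit above: each deletion resolves to a per-id result so partial failures can still drive user feedback. The `deleteFeature` callback stands in for the real API client method.

```ts
// Sketch only; deleteFeature is a placeholder for the real API call.
async function bulkDeleteFeatures(
  featureIds: string[],
  deleteFeature: (id: string) => Promise<void>
): Promise<{ deleted: string[]; failed: string[] }> {
  const results = await Promise.all(
    featureIds.map(async (id) => {
      try {
        await deleteFeature(id);
        return { id, ok: true };
      } catch {
        return { id, ok: false };
      }
    })
  );
  return {
    deleted: results.filter((r) => r.ok).map((r) => r.id),
    failed: results.filter((r) => !r.ok).map((r) => r.id),
  };
}
```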
DhanushSantosh
7522e58fee Merge remote-tracking branch 'upstream/v0.10.0rc' into feature/codex-cli 2026-01-11 14:51:06 +05:30
Shirone
317c21ffc0 Merge pull request #413 from AutoMaker-Org/feat/bulk-delete-features-in-backlog
feat: bulk delete in backlog mass select
2026-01-11 09:19:15 +00:00
Shirone
9c5fe44617 feat: add bulk delete functionality for features
- Introduced a new endpoint `/bulk-delete` to allow deletion of multiple features at once.
- Implemented `createBulkDeleteHandler` to process bulk delete requests and handle success/failure responses.
- Updated the UI to include a bulk delete option in the BoardView component, with confirmation dialog for user actions.
- Enhanced the HTTP API client to support bulk delete requests.
- Improved the selection action bar to trigger bulk delete functionality and provide user feedback.
2026-01-11 10:17:35 +01:00
DhanushSantosh
7f79d9692c feat: Add official icons for MiniMax, GLM (Z.ai), and BigPickle models
- Add official MiniMax logo SVG from LobeHub icons library
- Add official Z.ai logo SVG for GLM models from LobeHub icons library
- Add BigPickle icon with custom green color (#4ADE80)
- Fix icon detection logic to properly handle amazon-bedrock/ and opencode/ prefixes
- Update phase-model-selector and opencode-model-configuration to use
  getProviderIconForModel() for consistent icon display across the app

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 14:44:32 +05:30
Shirone
2d4ffc7514 feat: add accept all functionality in ideation view
- Introduced a new "Accept All" button in the IdeationHeader component, allowing users to accept all filtered suggestions at once.
- Implemented state management for accept all readiness and count in the IdeationView component.
- Enhanced the IdeationDashboard to notify the parent component about the readiness of the accept all feature.
- Added logic to handle the acceptance of all suggestions, including success and failure notifications.
- Updated UI components to reflect the new functionality and improve user experience.
2026-01-11 10:01:01 +01:00
webdevcody
5f3db1f25e feat: enhance spec regeneration management by project
- Refactored spec regeneration status tracking to support multiple projects using a Map for running states and abort controllers.
- Updated `getSpecRegenerationStatus` to accept a project path, allowing retrieval of status specific to a project.
- Modified `setRunningState` to manage running states and abort controllers per project.
- Adjusted related route handlers to utilize project-specific status checks and updates.
- Introduced a new Graph View page and integrated it into the routing structure.
- Enhanced UI components to reflect the current project’s spec generation state.
2026-01-11 01:37:26 -05:00
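A minimal sketch of the per-project tracking described above, keyed by project path; the real service likely stores richer state than a boolean.

```ts
// Sketch: per-project running state and abort controllers, keyed by project path.
const runningByProject = new Map<string, boolean>();
const abortControllersByProject = new Map<string, AbortController>();

export function setRunningState(projectPath: string, running: boolean): void {
  runningByProject.set(projectPath, running);
  if (running) {
    abortControllersByProject.set(projectPath, new AbortController());
  } else {
    abortControllersByProject.get(projectPath)?.abort();
    abortControllersByProject.delete(projectPath);
  }
}

export function getSpecRegenerationStatus(projectPath: string): { isRunning: boolean } {
  return { isRunning: runningByProject.get(projectPath) ?? false };
}
```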
webdevcody
7115460804 feat: add resume interrupted features endpoint and handler
- Introduced a new endpoint `/resume-interrupted` to handle resuming features that were interrupted during server restarts.
- Implemented the `createResumeInterruptedHandler` to check for and resume interrupted features based on the project path.
- Enhanced the `AutoModeService` to track and manage the execution state of features, ensuring they can be resumed correctly.
- Updated relevant types and prompts to include the new 'ux-reviewer' enhancement mode for better user experience handling.
- Added new templates for UX review and other enhancement modes to improve task descriptions from a user experience perspective.
2026-01-11 01:37:13 -05:00
webdevcody
0db8808b2a Merge branch 'memory-ui' into v0.10.0rc 2026-01-10 20:13:07 -05:00
webdevcody
cf3ed1dd8f Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 20:08:02 -05:00
webdevcody
da682e3993 feat: add memory management feature with UI components
- Introduced a new MemoryView component for viewing and editing AI memory files.
- Updated navigation hooks and keyboard shortcuts to include memory functionality.
- Added memory file creation, deletion, and renaming capabilities.
- Enhanced the sidebar navigation to support memory as a new section.
- Implemented loading and saving of memory files with a markdown editor.
- Integrated dialogs for creating, deleting, and renaming memory files.
2026-01-10 20:07:50 -05:00
Shirone
4a59e901e6 chore: format 2026-01-11 01:15:27 +01:00
Shirone
8ed2fa07a0 security: Fix critical vulnerabilities in worktree init script feature
Fix multiple command injection and security vulnerabilities in the worktree
initialization script system:

**Critical Fixes:**
- Add branch name validation to prevent command injection in create/delete endpoints
- Replace string interpolation with array-based command execution using spawnProcess
- Implement safe environment variable allowlist to prevent credential exposure
- Add script content validation with 1MB size limit and dangerous pattern detection

**Code Quality:**
- Centralize execGitCommand helper in common.ts using @automaker/platform's spawnProcess
- Remove duplicate isGitRepo implementation, standardize imports to @automaker/git-utils
- Follow DRY principle by reusing existing platform utilities
- Add comprehensive JSDoc documentation with security examples

This addresses 6 critical/high severity vulnerabilities identified in security audit:
1. Command injection via unsanitized branch names (delete.ts)
2. Command injection via unsanitized branch names (create.ts)
3. Missing branch validation in init script execution
4. Environment variable exposure (ANTHROPIC_API_KEY and other secrets)
5. Path injection via command substitution
6. Arbitrary script execution without content limits

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-11 01:14:07 +01:00
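A hedged sketch of two of the fixes listed above: validating branch names before use, and executing git with array-based arguments plus an environment allowlist. The validation pattern and the allowlisted variables are assumptions, not the audited implementation.

```ts
// Sketch only; the real validation rules and env allowlist may differ.
import { spawn } from 'node:child_process';

const BRANCH_NAME_PATTERN = /^[A-Za-z0-9._/-]+$/;

export function assertValidBranchName(branch: string): void {
  if (!BRANCH_NAME_PATTERN.test(branch) || branch.includes('..')) {
    throw new Error(`Invalid branch name: ${branch}`);
  }
}

// Array-based execution: the branch name is never interpolated into a shell string.
export function deleteWorktreeBranch(repoPath: string, branch: string): Promise<void> {
  assertValidBranchName(branch);
  return new Promise((resolve, reject) => {
    const child = spawn('git', ['-C', repoPath, 'branch', '-D', branch], {
      // Forward only explicitly allowlisted environment variables.
      env: { PATH: process.env.PATH ?? '', HOME: process.env.HOME ?? '' },
    });
    child.on('error', reject);
    child.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`git exited with code ${code}`))
    );
  });
}
```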
Kacper
385e7f5c1e fix: address pr comments 2026-01-11 00:01:23 +01:00
Kacper
861fff1aae fix: broken lock file 2026-01-10 23:48:33 +01:00
Kacper
09527b3b67 feat: Add auto-dismiss functionality for Init Script Indicator
This commit introduces an auto-dismiss feature for the Init Script Indicator, enhancing user experience by automatically hiding the indicator 5 seconds after the script completes. Key changes include:

1. **State Management**: Added `autoDismissInitScriptIndicatorByProject` to manage the auto-dismiss setting per project.
2. **UI Components**: Updated the WorktreesSection to include a toggle for enabling or disabling the auto-dismiss feature, allowing users to customize their experience.
3. **Indicator Logic**: Implemented logic in the SingleIndicator component to handle auto-dismiss based on the new setting.

These enhancements provide users with more control over the visibility of the Init Script Indicator, streamlining project management workflows.
2026-01-10 23:43:52 +01:00
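A minimal sketch of the 5-second auto-dismiss timer described above, written as a hook; the actual SingleIndicator wiring is an assumption.

```ts
// Sketch: hide the indicator 5 seconds after the init script completes.
import { useEffect } from 'react';

export function useAutoDismiss(
  scriptCompleted: boolean,
  autoDismissEnabled: boolean,
  dismiss: () => void
): void {
  useEffect(() => {
    if (!scriptCompleted || !autoDismissEnabled) return;
    const timer = setTimeout(dismiss, 5000);
    return () => clearTimeout(timer);
  }, [scriptCompleted, autoDismissEnabled, dismiss]);
}
```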
Kacper
d98ff16c8f feat: Enhance CreateWorktreeDialog with user-friendly error handling
This commit introduces a new error parsing function to provide clearer, user-friendly error messages in the CreateWorktreeDialog component. Key changes include:

1. **Error Parsing**: Added `parseWorktreeError` function to interpret various git-related error messages and return structured titles and descriptions for better user feedback.
2. **State Management**: Updated the error state to store structured error objects instead of strings, allowing for more detailed error display.
3. **UI Updates**: Enhanced the error display in the dialog to show both title and description, improving clarity for users encountering issues during worktree creation.

These improvements enhance the user experience by providing more informative error messages, helping users troubleshoot issues effectively.
2026-01-10 23:37:39 +01:00
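A sketch of the structured error mapping described above; the matched patterns and wording are illustrative, not the shipped `parseWorktreeError`.

```ts
// Illustrative patterns only; the real parser likely covers more git error cases.
interface WorktreeError {
  title: string;
  description: string;
}

export function parseWorktreeError(raw: string): WorktreeError {
  if (/already exists/i.test(raw)) {
    return {
      title: 'Branch already exists',
      description: 'Choose a different branch name or delete the existing worktree first.',
    };
  }
  if (/not a valid (branch|ref) name/i.test(raw)) {
    return {
      title: 'Invalid branch name',
      description: 'Branch names cannot contain spaces or sequences like "..".',
    };
  }
  return { title: 'Failed to create worktree', description: raw };
}
```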
Kacper
e902e8ea4c feat: Introduce default delete branch option for worktrees
This commit adds a new feature allowing users to set a default value for the "delete branch" checkbox when deleting a worktree. Key changes include:

1. **State Management**: Introduced `defaultDeleteBranchByProject` to manage the default delete branch setting per project.
2. **UI Components**: Updated the WorktreesSection to include a toggle for the default delete branch option, enhancing user control during worktree deletion.
3. **Dialog Updates**: Modified the DeleteWorktreeDialog to respect the default delete branch setting, improving the user experience by streamlining the deletion process.

These enhancements provide users with more flexibility and control over worktree management, improving overall project workflows.
2026-01-10 23:18:39 +01:00
Kacper
aeb5bd829f feat: Add Init Script Indicator visibility feature for worktrees
This commit introduces a new feature allowing users to toggle the visibility of the Init Script Indicator for each project. Key changes include:

1. **State Management**: Added `showInitScriptIndicatorByProject` to manage the visibility state per project.
2. **UI Components**: Implemented a checkbox in the WorktreesSection to enable or disable the Init Script Indicator, enhancing user control over the UI.
3. **BoardView Updates**: Modified the BoardView to conditionally render the Init Script Indicator based on the new visibility state.

These enhancements improve the user experience by providing customizable visibility options for the Init Script Indicator, streamlining project management workflows.
2026-01-10 23:03:29 +01:00
DhanushSantosh
a92457b871 fix: Handle Claude CLI unavailability gracefully in CI
- Add try-catch around pty.spawn() to prevent crashes when PTY unavailable
- Add unhandledRejection/uncaughtException handlers for graceful degradation
- Add checkBackendHealth/waitForBackendHealth utilities for tests
- Add data/.api-key and data/credentials.json to .gitignore
2026-01-11 03:22:43 +05:30
Kacper
c24e6207d0 feat: Enhance ShellSyntaxEditor and WorktreesSection with new features
This commit introduces several improvements to the ShellSyntaxEditor and WorktreesSection components:

1. **ShellSyntaxEditor**: Added a `maxHeight` prop to allow for customizable maximum height, enhancing layout flexibility.
2. **WorktreesSection**:
   - Introduced state management for original script content and existence checks for scripts.
   - Implemented save, reset, and delete functionalities for initialization scripts, providing users with better control over their scripts.
   - Added action buttons for saving, resetting, and deleting scripts, along with loading indicators for improved user feedback.
   - Enhanced UI to indicate unsaved changes, improving user awareness of script modifications.

These changes improve the user experience by providing more robust script management capabilities and a more responsive UI.
2026-01-10 22:46:06 +01:00
Kacper
6c412cd367 feat: Add run init script functionality for worktrees
This commit introduces the ability to run initialization scripts for worktrees, enhancing the setup process. Key changes include:

1. **New API Endpoint**: Added a POST endpoint to run the init script for a specified worktree.
2. **Worktree Routes**: Updated worktree routes to include the new run init script handler.
3. **Init Script Service**: Enhanced the Init Script Service to support running scripts asynchronously and handling errors.
4. **UI Updates**: Added UI components to check for the existence of init scripts and trigger their execution, providing user feedback through toast notifications.
5. **Event Handling**: Implemented event handling for init script execution status, allowing real-time updates in the UI.

This feature streamlines the workflow for users by automating the execution of setup scripts, improving overall project management.
2026-01-10 22:36:50 +01:00
DhanushSantosh
89a960629a fix: Improve E2E test workflow for better backend debugging
Enhanced backend server startup in CI:
- Track server PID and process status
- Save logs to backend.log for debugging
- Better error detection with process monitoring
- Added cleanup step to kill server process
- Print backend logs on test failure

Improves reliability of E2E tests by providing better diagnostics when backend fails to start
2026-01-11 02:58:56 +05:30
Kacper
05d96a7d6e feat: Implement worktree initialization script functionality
This commit introduces a new feature for managing worktree initialization scripts, allowing users to configure and execute scripts upon worktree creation. Key changes include:

1. **New API Endpoints**: Added endpoints for getting, setting, and deleting init scripts.
2. **Worktree Routes**: Updated worktree routes to include init script handling.
3. **Init Script Service**: Created a service to execute the init scripts asynchronously, with support for cross-platform compatibility.
4. **UI Components**: Added UI components for displaying and editing init scripts, including a dedicated section in the settings view.
5. **Event Handling**: Implemented event handling for init script execution status, providing real-time feedback in the UI.

This enhancement improves the user experience by allowing automated setup processes for new worktrees, streamlining project workflows.
2026-01-10 22:19:34 +01:00
DhanushSantosh
41144ff1fa Merge recovered upstream commits including worktree enhancement
This brings back commits that were accidentally overwritten during a force push:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Plus any other changes from upstream/v0.10.0rc at that time

The merge ensures all changes are preserved while keeping the history intact.
2026-01-11 02:27:21 +05:30
DhanushSantosh
360cddcb91 Merge upstream commits including worktree enhancement
This recovers the commits that were accidentally overwritten during force push.

Included:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Any other commits from upstream/v0.10.0rc at that point
2026-01-11 02:27:08 +05:30
DhanushSantosh
427832e72e fix: Display correct provider icons for all OpenCode/Bedrock models
The issue was that ALL OpenCode models were showing the OpenCode icon, regardless
of their actual underlying provider. This fix ensures each model shows its
authentic brand icon.

Changes:

1. **model-constants.ts** - Fixed provider field assignment
   - Changed provider from hardcoded 'opencode' to actual config.provider
   - Now correctly maps: opencode/big-pickle, amazon-bedrock/anthropic.*, etc.

2. **phase-model-selector.tsx** - Added provider-specific icon logic
   - Added imports for DeepSeekIcon, NovaIcon, QwenIcon, MistralIcon, MetaIcon
   - Added ProviderIcon selector based on model.provider field
   - Each model type now displays its correct provider icon

3. **provider-icon.tsx** - Updated icon detection and mapping
   - Enhanced getUnderlyingModelIcon() to detect specific Bedrock providers:
     * amazon-bedrock/anthropic.* → anthropic icon
     * amazon-bedrock/deepseek.* → deepseek icon
     * amazon-bedrock/nova.* → nova icon
     * amazon-bedrock/meta.* or llama → meta icon
     * amazon-bedrock/mistral.* → mistral icon
     * amazon-bedrock/qwen.* → qwen icon
     * opencode/* → opencode icon
   - Added meta and mistral to PROVIDER_ICON_KEYS
   - Added placeholder definitions for meta/mistral in PROVIDER_ICON_DEFINITIONS
   - Updated iconMap to include all provider icons
   - Set OpenCode icon to official brand color (#6366F1 indigo)

Result: All model selectors and kanban cards now show correct brand icons
for each OpenCode model (DeepSeek whale, Amazon Nova sparkle, Qwen star, etc.)
2026-01-11 02:18:55 +05:30
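A condensed sketch of the prefix-based detection described in point 3 above, returning an icon key rather than a component for brevity; the prefixes follow the commit, everything else is an assumption.

```ts
type ProviderIconKey =
  | 'anthropic' | 'deepseek' | 'nova' | 'meta' | 'mistral' | 'qwen' | 'opencode';

export function getUnderlyingModelIconKey(modelId: string): ProviderIconKey | undefined {
  if (modelId.startsWith('amazon-bedrock/')) {
    const rest = modelId.slice('amazon-bedrock/'.length);
    if (rest.startsWith('anthropic.')) return 'anthropic';
    if (rest.startsWith('deepseek.')) return 'deepseek';
    if (rest.startsWith('nova.')) return 'nova';
    if (rest.startsWith('meta.') || rest.includes('llama')) return 'meta';
    if (rest.startsWith('mistral.')) return 'mistral';
    if (rest.startsWith('qwen.')) return 'qwen';
  }
  if (modelId.startsWith('opencode/')) return 'opencode';
  return undefined;
}
```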
webdevcody
27c60658f7 Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 15:41:50 -05:00
webdevcody
fa8ae149d3 feat: enhance worktree listing by scanning external directories
- Implemented a new function to scan the .worktrees directory for worktrees that may exist outside of git's management, allowing for better detection of externally created or corrupted worktrees.
- Updated the /list endpoint to include discovered worktrees in the response, improving the accuracy of the worktree listing.
- Added logging for discovered worktrees to aid in debugging and tracking.
- Cleaned up and organized imports in the list.ts file for better maintainability.
2026-01-10 15:41:35 -05:00
DhanushSantosh
0c19beb11c fix: Set OpenCode icon to official brand color (#6366F1 indigo)
The OpenCode icon now uses the official indigo brand color (#6366F1)
from opencode.ai instead of white, making it visible in both light
and dark themes.
2026-01-11 01:54:17 +05:30
DhanushSantosh
e34e4a59e9 fix: Resolve TypeScript error assigning part.result to string field
Fix TS2322 error where finishEvent.part?.result (typed as {}) was being
assigned to result.result (typed as string).

Solution: Safely handle arbitrary result payloads by:
1. Reading raw value as unknown from Record<string, unknown>
2. Checking if it's a string, otherwise JSON.stringify()

This ensures type safety while supporting both string and object results
from the OpenCode CLI.
2026-01-11 01:35:32 +05:30
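The described fix, as a tiny hedged sketch:

```ts
// Read the raw value as unknown, then normalize to a string.
function normalizeResult(part: Record<string, unknown> | undefined): string {
  const raw: unknown = part?.result;
  return typeof raw === 'string' ? raw : JSON.stringify(raw ?? null);
}
```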
DhanushSantosh
7cc092cd59 test: Fix remaining OpenCode provider test failures
Fix all 8 remaining test failures:

1. Update executeQuery integration tests to use new OpenCode event format:
   - text events use type='text' with part.text
   - tool_call events use type='tool_call' with part containing call_id, name, args
   - tool_result events use type='tool_result' with part
   - step_finish events use type='step_finish' with part
   - Use sessionID field instead of session_id

2. Fix step_finish event handling:
   - Include result field in successful completion response
   - Check for reason === 'error' to detect failed steps
   - Provide default error message when error field is missing

3. Update model test expectations:
   - Model 'opencode/big-pickle' stays as-is (not stripped to 'big-pickle')
   - PROVIDER_PREFIXES only strips 'opencode-' prefix, not 'opencode/'

All 84 tests now pass successfully!
2026-01-11 01:23:42 +05:30
DhanushSantosh
51cd7156d2 test: Update OpenCode provider tests to use new event format
Update normalizeEvent tests to match new OpenCode API:
- text events use type='text' with part.text instead of text-delta
- tool_call events use type='tool_call' with part containing call_id, name, args
- tool_result events use type='tool_result' with part
- tool_error events use type='tool_error' with part
- step_finish events use type='step_finish' with part

Update buildCliArgs tests:
- Remove expectations for -q flag (no longer used)
- Remove expectations for -c flag (cwd set at subprocess level)
- Remove expectations for - final arg (prompt via stdin)
- Update format to 'json' instead of 'stream-json'

Remaining 8 test failures are in integration tests that use executeQuery
and require more extensive mock data updates.
2026-01-11 01:07:55 +05:30
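A sketch of the event normalization implied by the formats listed above; field names (`type`, `part`, `sessionID`, `call_id`) follow the commit, while the normalized output shape is an assumption.

```ts
interface RawOpencodeEvent {
  type: string;
  sessionID?: string;
  part?: { text?: string; call_id?: string; name?: string; args?: unknown };
}

type NormalizedEvent =
  | { kind: 'text'; sessionId?: string; text: string }
  | { kind: 'tool_call'; sessionId?: string; callId?: string; name?: string; args?: unknown }
  | { kind: 'other'; sessionId?: string; type: string };

export function normalizeEvent(event: RawOpencodeEvent): NormalizedEvent {
  switch (event.type) {
    case 'text':
      return { kind: 'text', sessionId: event.sessionID, text: event.part?.text ?? '' };
    case 'tool_call':
      return {
        kind: 'tool_call',
        sessionId: event.sessionID,
        callId: event.part?.call_id,
        name: event.part?.name,
        args: event.part?.args,
      };
    default:
      return { kind: 'other', sessionId: event.sessionID, type: event.type };
  }
}
```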
DhanushSantosh
1dc843d2d0 Merge upstream/v0.10.0rc into feature/codex-cli
Sync with latest upstream changes:
- feat: enhance feature dialogs with planning mode tooltips
- refactor: remove kanbanCardDetailLevel from settings and UI components
- refactor: streamline feature addition in BoardView and KanbanBoard
- feat: implement dashboard view and enhance sidebar navigation
2026-01-11 00:59:36 +05:30
DhanushSantosh
4040bef4b8 feat: Add OpenCode provider integration with official brand icons
This commit integrates OpenCode as a new AI provider and updates all provider
icons with their official brand colors for better visual recognition.

**OpenCode Provider Integration:**
- Add OpencodeProvider class with CLI-based execution
- Support for OpenCode native models (opencode/) and Bedrock models
- Proper event normalization for OpenCode streaming format
- Correct CLI arguments: --format json (not stream-json)
- Event structure: type, part.text, sessionID fields

**Provider Icons:**
- Add official OpenCode icon (white square frame from opencode.ai)
- Add DeepSeek icon (blue whale #4D6BFE)
- Add Qwen icon (purple gradient #6336E7 → #6F69F7)
- Add Amazon Nova icon (AWS orange #FF9900)
- Add Mistral icon (rainbow gradient gold→red)
- Add Meta icon (blue #1877F2)
- Update existing icons with brand colors:
  * Claude: #d97757 (terra cotta)
  * OpenAI/Codex: #74aa9c (teal-green)
  * Cursor: #5E9EFF (bright blue)

**Settings UI Updates:**
- Update settings navigation to show OpenCode icon
- Update model configuration to use provider-specific icons
- Differentiate between OpenCode free models and Bedrock-hosted models
- All AI models now display their official brand logos

**Model Resolution:**
- Add isOpencodeModel() function to detect OpenCode models
- Support patterns: opencode/, opencode-*, amazon-bedrock/*
- Update getProviderFromModel to recognize opencode provider

Note: Some unit tests in opencode-provider.test.ts need updating to match
the new event structure and CLI argument format.
2026-01-11 00:56:25 +05:30
Kacper
e64a850f57 feat: enhance feature dialogs with planning mode tooltips
- Integrated Tooltip components into AddFeatureDialog, EditFeatureDialog, and MassEditDialog to provide user guidance on planning mode availability.
- Updated the rendering logic for planning mode selection to conditionally display tooltips when planning modes are not supported.
- Improved user experience by clarifying the conditions under which planning modes can be utilized.
2026-01-10 20:08:17 +01:00
webdevcody
555523df38 refactor: remove kanbanCardDetailLevel from settings and UI components
- Eliminated kanbanCardDetailLevel from the SettingsService, app state, and various UI components including BoardView and BoardControls.
- Updated related hooks and API client to reflect the removal of kanbanCardDetailLevel.
- Cleaned up imports and props associated with kanbanCardDetailLevel across the codebase for improved clarity and maintainability.
2026-01-10 13:39:45 -05:00
webdevcody
dd882139f3 refactor: streamline feature addition in BoardView and KanbanBoard
- Removed the add feature shortcut from BoardHeader and integrated the add feature functionality directly into the KanbanBoard and BoardView components.
- Added a floating action button for adding features in the KanbanBoard's backlog column.
- Updated KanbanColumn to support a footer action for enhanced UI consistency.
- Cleaned up unused imports and props related to feature addition.
2026-01-10 13:19:21 -05:00
webdevcody
a67b8c6109 feat: implement dashboard view and enhance sidebar navigation
- Added a new DashboardView component for improved project management.
- Updated sidebar navigation to redirect to the dashboard instead of the home page.
- Removed ProjectActions from the sidebar for a cleaner interface.
- Enhanced BoardView to conditionally render the WorktreePanel based on visibility settings.
- Introduced worktree panel visibility management per project in the app store.
- Updated project settings to include worktree panel visibility and favorite status.
- Adjusted navigation logic to ensure users are directed to the appropriate view based on project state.
2026-01-10 13:08:59 -05:00
webdevcody
134208dab6 Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 12:27:33 -05:00
webdevcody
887343d232 Merge branch 'main' into v0.10.0rc 2026-01-10 12:27:29 -05:00
Web Dev Cody
299b838400 Merge pull request #404 from AutoMaker-Org/fix/security-vulnerability
chore: update @modelcontextprotocol/sdk in server dep
2026-01-10 12:25:29 -05:00
Kacper
c5d0a8be7d chore: update @modelcontextprotocol/sdk in server dep 2026-01-10 17:18:08 +01:00
Shirone
fe433a84c9 Merge pull request #402 from AutoMaker-Org/fix/security-vulnerability-in-dep
fix: security vulnerability in server
2026-01-10 15:52:00 +00:00
Kacper
543aa7a27b chore: update @modelcontextprotocol/sdk in server dep 2026-01-10 16:42:26 +01:00
Shirone
36ddf0513b Merge pull request #400 from AutoMaker-Org/feat/codex-usage
feat: improve codex plan and usage detection
2026-01-10 15:29:33 +00:00
Shirone
c99883e634 fix: address PR review comments
- Add error logging to CodexProvider auth check instead of silent failure
- Fix cachedAt timestamp to return actual cache time instead of request time
- Replace misleading hardcoded rate limit values (100) with sentinel value (-1)
- Fix unused parameter warning in codex routes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-10 16:26:12 +01:00
Shirone
604f98b08f chore: ignore .codex/config.toml to prevent API key exposure
Move .codex/config.toml to .gitignore to prevent accidental commits of
API keys. The file will remain local to each user's setup.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-10 16:18:19 +01:00
Shirone
c5009a0333 refactor: remove Codex credits handling from services and UI components
- Eliminated CodexCreditsSnapshot interface and related logic from CodexUsageService and UI components.
- Updated CodexUsageSection to display only plan type, removing credits information for a cleaner interface.
- Streamlined Codex usage formatting functions by removing unused credit formatting logic.

These changes simplify the Codex usage management by focusing on plan types, enhancing clarity and maintainability.
2026-01-10 16:18:19 +01:00
Shirone
99b05d35a2 feat: update Codex services and UI components for enhanced model management
- Bumped version numbers for @automaker/server and @automaker/ui to 0.9.0 in package-lock.json.
- Introduced CodexAppServerService and CodexModelCacheService to manage communication with the Codex CLI's app-server and cache model data.
- Updated CodexUsageService to utilize app-server for fetching usage data.
- Enhanced Codex routes to support fetching available models and integrated model caching.
- Improved UI components to dynamically load and display Codex models, including error handling and loading states.
- Added new API methods for fetching Codex models and integrated them into the app store for state management.

These changes improve the overall functionality and user experience of the Codex integration, ensuring efficient model management and data retrieval.
2026-01-10 16:18:08 +01:00
Web Dev Cody
a3ecc6fe02 Merge pull request #394 from AutoMaker-Org/remove-profiles
refactor: remove AI profile functionality and related components
2026-01-09 19:25:15 -05:00
webdevcody
fc20dd5ad4 refactor: remove AI profile functionality and related components
- Deleted the AI profile management feature, including all associated views, hooks, and types.
- Updated settings and navigation components to remove references to AI profiles.
- Adjusted local storage and settings synchronization logic to reflect the removal of AI profiles.
- Cleaned up tests and utility functions that were dependent on the AI profile feature.

These changes streamline the application by eliminating unused functionality, improving maintainability and reducing complexity.
2026-01-09 19:21:30 -05:00
webdevcody
3f2707404c chore: release v0.9.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 18:40:15 -05:00
Web Dev Cody
fdd3a28be7 Merge pull request #385 from AutoMaker-Org/v0.9.0rc
V0.9.0rc
2026-01-09 18:37:32 -05:00
Kacper
eb94e4de72 feat: enhance CodexUsageService to fetch usage data from app-server JSON-RPC API
- Implemented a new method to retrieve usage data from the Codex app-server, providing real-time data and improving reliability.
- Updated the fetchUsageData method to prioritize app-server data over fallback methods.
- Added detailed logging for better traceability and debugging.
- Removed unused methods related to OpenAI API usage and Codex CLI requests, streamlining the service.

These changes enhance the functionality and robustness of the CodexUsageService, ensuring accurate usage statistics retrieval.
2026-01-10 00:11:42 +01:00
webdevcody
21d275c984 increase panel width 2026-01-09 17:10:49 -05:00
webdevcody
cadb19d7ed feat: reorganize global settings navigation into groups
- Refactored the global navigation structure to group settings items into distinct categories for improved organization and usability.
- Updated the settings navigation component to render these groups dynamically, enhancing the user experience.
- Changed the default initial view in the settings hook to 'model-defaults' for better alignment with the new navigation structure.

These changes streamline navigation and make it easier for users to find relevant settings.
2026-01-09 17:09:08 -05:00
webdevcody
a01241fabe Merge branch 'v0.9.0rc' of github.com:AutoMaker-Org/automaker into v0.9.0rc 2026-01-09 16:40:17 -05:00
webdevcody
89a0877bcc feat: enhance settings navigation with collapsible sub-items and local storage state
- Implemented a collapsible dropdown for navigation items in the settings view, allowing users to expand or collapse sub-items.
- Added local storage functionality to remember the open/closed state of the dropdown across sessions.
- Updated the styling and interaction logic for improved user experience and accessibility.

These changes improve the organization of navigation items and enhance user interaction within the settings view.
2026-01-09 16:40:07 -05:00
Shirone
950a97d72b Merge pull request #392 from AutoMaker-Org/fix/codex-usage-plan-detection
fix: correct Codex plan type detection from JWT auth
2026-01-09 22:34:12 +01:00
Kacper
639c1de2e8 merge: sync with latest v0.9.0rc changes 2026-01-09 22:27:06 +01:00
Kacper
254e4f630c fix: address PR review comments for type safety
- Add typeof checks for fallback claim values to prevent runtime errors
- Make openaiAuth parsing more robust with proper type validation
- Add isNaN check for date parsing to handle invalid dates
- Refactor fetchFromAuthFile to reuse getPlanTypeFromAuthFile (DRY)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 22:25:06 +01:00
Kacper
5d0fb08651 fix: correct Codex plan type detection from JWT auth
- Fix hardcoded 'plus' planType that was returned as default
- Read plan type from correct JWT path: https://api.openai.com/auth.chatgpt_plan_type
- Add subscription expiry check - override to 'free' if expired
- Use getCodexAuthPath() from @automaker/platform instead of manual path
- Remove unused imports (os, fs, path) and class properties
- Clean up code and add minimal essential logging

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 22:18:41 +01:00
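A hedged sketch of reading the plan type from the JWT claim path named above. The auth-file layout and the use of the token's `exp` claim as the expiry check are assumptions; the real code uses `getCodexAuthPath()` and may read a different expiry field.

```ts
import { readFile } from 'node:fs/promises';

const CHATGPT_CLAIM_NAMESPACE = 'https://api.openai.com/auth';

export async function getPlanTypeFromAuthFile(authPath: string): Promise<string> {
  const auth = JSON.parse(await readFile(authPath, 'utf8')) as {
    tokens?: { id_token?: string };
  };
  const idToken = auth.tokens?.id_token;
  if (!idToken) return 'free';

  // A JWT is three base64url segments; the middle one is the JSON payload.
  const payloadSegment = idToken.split('.')[1] ?? '';
  const payload = JSON.parse(
    Buffer.from(payloadSegment, 'base64url').toString('utf8')
  ) as Record<string, unknown>;

  // Expiry check (assumption: the token's exp claim stands in for subscription expiry).
  const expiresAt = typeof payload.exp === 'number' ? payload.exp * 1000 : NaN;
  if (!Number.isNaN(expiresAt) && expiresAt < Date.now()) return 'free';

  const claims = payload[CHATGPT_CLAIM_NAMESPACE];
  const planType =
    claims && typeof claims === 'object'
      ? (claims as Record<string, unknown>).chatgpt_plan_type
      : undefined;
  return typeof planType === 'string' ? planType : 'free';
}
```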
webdevcody
7ea64b32f3 feat: implement work mode selection in feature dialogs
- Added WorkModeSelector component to allow users to choose between 'current', 'auto', and 'custom' work modes for feature management.
- Updated AddFeatureDialog and EditFeatureDialog to utilize the new work mode functionality, replacing the previous branch selector logic.
- Enhanced useBoardActions hook to handle branch name generation based on the selected work mode.
- Adjusted settings to default to using worktrees, improving the overall feature creation experience.

These changes streamline the feature management process by providing clearer options for branch handling and worktree isolation.
2026-01-09 16:15:09 -05:00
webdevcody
93807c22c1 feat: add VITE_APP_MODE environment variable support
- Introduced VITE_APP_MODE variable in multiple files to manage application modes.
- Updated dev.mjs and docker-compose.dev.yml to set different modes for development.
- Enhanced type definitions in vite-env.d.ts to include VITE_APP_MODE options.
- Modified AutomakerLogo component to display version suffix based on the current app mode.
- Improved OS detection logic in use-os-detection.ts to utilize Electron's platform information.
- Updated ElectronAPI interface to expose platform information.

These changes provide better control over application behavior based on the mode, enhancing the development experience.
2026-01-09 15:30:49 -05:00
SuperComboGamer
b2cf17b53b feat: add project-scoped agent memory system (#351)
* memory

* feat: add smart memory selection with task context

- Add taskContext parameter to loadContextFiles for intelligent file selection
- Memory files are scored based on tag matching with task keywords
- Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
- Usage statistics influence scoring (files that helped before rank higher)
- Limit to top 5 files + always include gotchas.md
- Auto-mode passes feature title/description as context
- Chat sessions pass user message as context

This prevents loading 40+ memory files and killing context limits.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: enhance auto-mode service and context loader

- Improved context loading by adding task context for better memory selection.
- Updated JSON parsing logic to handle various formats and ensure robust error handling.
- Introduced file locking mechanisms to prevent race conditions during memory file updates.
- Enhanced metadata handling in memory files, including validation and sanitization.
- Refactored scoring logic for context files to improve selection accuracy based on task relevance.

These changes optimize memory file management and enhance the overall performance of the auto-mode service.

* refactor: enhance learning extraction and formatting in auto-mode service

- Improved the learning extraction process by refining the user prompt to focus on meaningful insights and structured JSON output.
- Updated the LearningEntry interface to include additional context fields for better documentation of decisions and patterns.
- Enhanced the formatLearning function to adopt an Architecture Decision Record (ADR) style, providing richer context for recorded learnings.
- Added detailed logging for better traceability during the learning extraction and appending processes.

These changes aim to improve the quality and clarity of learnings captured during the auto-mode service's operation.

* feat: integrate stripProviderPrefix utility for model ID handling

- Added stripProviderPrefix utility to various routes to ensure providers receive bare model IDs.
- Updated model references in executeQuery calls across multiple files, enhancing consistency in model ID handling.
- Introduced memoryExtractionModel in settings for improved learning extraction tasks.

These changes streamline the model ID processing and enhance the overall functionality of the provider interactions.

* feat: enhance error handling and server offline management in board actions

- Improved error handling in the handleRunFeature and handleStartImplementation functions to throw errors for better caller management.
- Integrated connection error detection and server offline handling, redirecting users to the login page when the server is unreachable.
- Updated follow-up feature logic to include rollback mechanisms and improved user feedback for error scenarios.

These changes enhance the robustness of the board actions by ensuring proper error management and user experience during server connectivity issues.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
2026-01-09 15:11:59 -05:00
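A compact sketch of the scoring rules listed in the smart-memory-selection commit above (tag matches, 4x category weight, usage statistics, top 5 plus gotchas.md); the field names and any weights beyond those stated are assumptions.

```ts
interface MemoryFile {
  name: string; // e.g. "terminals.md"
  tags: string[];
  timesHelpful: number;
}

export function selectMemoryFiles(files: MemoryFile[], taskContext: string): MemoryFile[] {
  const keywords = new Set(taskContext.toLowerCase().split(/\W+/).filter(Boolean));

  const scored = files.map((file) => {
    const category = file.name.replace(/\.md$/, '').toLowerCase();
    let score = 0;
    if (keywords.has(category)) score += 4; // category name match, 4x weight
    score += file.tags.filter((t) => keywords.has(t.toLowerCase())).length;
    score += Math.min(file.timesHelpful, 3) * 0.5; // files that helped before rank higher
    return { file, score };
  });

  const top = scored
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, 5)
    .map((s) => s.file);

  const gotchas = files.find((f) => f.name === 'gotchas.md');
  if (gotchas && !top.includes(gotchas)) top.push(gotchas); // always include gotchas.md
  return top;
}
```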
DhanushSantosh
7e768b6290 fix: restore complete E2E workflow structure
- Restore missing workflow metadata (name, on, jobs)
- Fix YAML structure that got corrupted during edits
- Ensure E2E tests will run on PRs and pushes to main/master
2026-01-09 22:44:57 +05:30
DhanushSantosh
7bdf5e4261 fix: resolve CI E2E test failures
- Fix disposed response object in Playwright route handler
- Add git user config to prevent 'empty ident' errors
- Increase server startup timeout and improve debugging
- Fix YAML indentation in E2E workflow

Resolves:
- 'Response has been disposed' error in open-existing-project test
- Git identity configuration issues in CI
- Backend server startup timing issues
2026-01-09 22:40:33 +05:30
DhanushSantosh
f6fed612df merge: resolve conflicts with upstream OpenCode support
- Combined CLI disconnection markers with OpenCode support
- Added OpenCode auth/deauth routes and API methods
- Resolved merge conflicts between feature branch and upstream v0.9.0rc
2026-01-09 22:25:46 +05:30
DhanushSantosh
3b3e282df7 merge: sync with upstream v0.9.0rc branch 2026-01-09 22:10:51 +05:30
Web Dev Cody
7f69f652fb Merge pull request #390 from AutoMaker-Org/opencode-support
Opencode support
2026-01-09 11:11:07 -05:00
DhanushSantosh
1452232409 feat: fix CLI authentication detection to prevent unnecessary browser prompts
- Fix Claude, Codex, and Cursor auth handlers to check if CLI is already authenticated
- Use same detection logic as each provider's internal checkAuth/codexAuthIndicators()
- For Codex: Check for API keys and auth files before requiring manual login
- For Cursor: Check for env var and credentials files before requiring manual auth
- For Claude: Check for cached auth tokens, settings, and credentials files
- If CLI is already authenticated: Just reconnect by removing disconnected marker
- If CLI needs auth: Tell user to manually run login command
- This prevents timeout errors when login commands can't run in non-interactive mode
2026-01-09 21:34:14 +05:30
webdevcody
4f0f56a7ba feat: enhance provider setup and authentication flow
- Refactored ProvidersSetupStep component to improve the UI and streamline provider status checks for Claude, Cursor, Codex, and OpenCode.
- Introduced auto-verification for CLI authentication and improved error handling for authentication states.
- Added loading indicators for provider status checks and enhanced user feedback for installation and authentication processes.
- Updated setup store to manage verification states and ensure accurate representation of provider statuses.

These changes enhance the user experience by providing clearer feedback and a more efficient setup process for AI providers.
2026-01-09 11:03:01 -05:00
webdevcody
a695d0db7b feat: enhance OpenCode provider tests and UI setup
- Updated unit tests for OpenCode provider to include new authentication indicators.
- Refactored ProvidersSetupStep component by removing unnecessary UI elements for better clarity.
- Improved board background persistence tests by utilizing a setup function for initializing app state.
- Enhanced settings synchronization tests to ensure proper handling of login and app state.

These changes improve the testing framework and user interface for OpenCode integration, ensuring a smoother setup and authentication process.
2026-01-09 10:08:38 -05:00
webdevcody
87c3d766c9 Merge branch 'v0.9.0rc' into opencode-support 2026-01-09 09:40:16 -05:00
webdevcody
89248001e4 feat: implement OpenCode authentication and provider setup
- Added OpenCode authentication status check to the OpencodeProvider class.
- Introduced OpenCodeAuthStatus interface to manage authentication states.
- Updated detectInstallation method to include authentication status in the response.
- Created ProvidersSetupStep component to consolidate provider setup UI, including Claude, Cursor, Codex, and OpenCode.
- Refactored setup view to streamline navigation and improve user experience.
- Enhanced OpenCode CLI integration with updated installation paths and authentication checks.

This commit enhances the setup process by allowing users to configure and authenticate multiple AI providers, improving overall functionality and user experience.
2026-01-09 09:39:46 -05:00
webdevcody
41b4869068 feat: enhance feature dialogs with OpenCode model support
- Added OpenCode model selection to AddFeatureDialog and EditFeatureDialog.
- Introduced ProfileTypeahead component for improved profile selection.
- Updated model constants to include OpenCode models and integrated them into the PhaseModelSelector.
- Enhanced planning mode options with new UI elements for OpenCode.
- Refactored existing components to streamline model handling and improve user experience.

This commit expands the functionality of the feature dialogs, allowing users to select and manage OpenCode models effectively.
2026-01-09 09:02:30 -05:00
webdevcody
be88a07329 feat: add OpenCode CLI support with status endpoint
- Implemented OpenCode CLI installation and authentication status check.
- Added new route for OpenCode status in setup routes.
- Updated HttpApiClient to include method for fetching OpenCode status.
- Enhanced system paths to include OpenCode's default installation directories.

This commit introduces functionality to check the installation and authentication status of the OpenCode CLI, improving integration with the overall system.
2026-01-08 23:15:35 -05:00
Kacper
f6738ff26c fix: update getDefaultDocumentsPath to use window.electronAPI for Electron mode
- Removed dependency on getElectronAPI and directly accessed window.electronAPI for retrieving the documents path in Electron mode.
- Added handling for web mode to return null when the documents path cannot be accessed.
- Included logging for the resolved documents path to aid in debugging.
2026-01-09 00:04:28 +01:00
Shirone
ae22f781d8 Merge pull request #370 from AutoMaker-Org/feat/subagents-skills
feat: add skills and subagents configuration support
2026-01-08 23:33:52 +01:00
Kacper
e649c4ced5 refactor: reduce code duplication in agent-discovery.ts
Addresses PR feedback to reduce duplicated code in scanAgentsDirectory
by introducing an FsAdapter interface that abstracts the differences
between systemPaths (user directory) and secureFs (project directory).

Changes:
- Extract parseAgentContent helper for parsing agent file content
- Add FsAdapter interface with exists, readdir, and readFile methods
- Create createSystemPathAdapter for user-level paths
- Create createSecureFsAdapter for project-level paths
- Refactor scanAgentsDirectory to use a single loop with the adapter

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 23:02:41 +01:00
Kacper
50da1b401c Merge branch 'v0.9.0rc' into feat/subagents-skills
Resolved conflict in agent-service.ts by keeping both:
- agents parameter for custom subagents (from our branch)
- thinkingLevel and reasoningEffort parameters (from v0.9.0rc)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 22:57:09 +01:00
DhanushSantosh
33d02d1df8 Merge branch 'v0.9.0rc' of https://github.com/AutoMaker-Org/automaker into feature/codex-cli 2026-01-09 03:14:06 +05:30
Shirone
f3041190fa Merge pull request #383 from AutoMaker-Org/fix/electron-initial-auth
fix: add API key header to verifySession for Electron auth
2026-01-08 22:39:19 +01:00
webdevcody
55c2530d5a Merge branch 'v0.9.0rc' into opencode-support 2026-01-08 15:40:06 -05:00
webdevcody
5fbc7dd13e opencode support 2026-01-08 15:30:20 -05:00
DhanushSantosh
b2e5ff1460 fix: centralize model prefix handling to prevent provider errors
Moves prefix stripping from individual providers to AgentService/IdeationService
and adds validation to ensure providers receive bare model IDs. This prevents
bugs like the Codex CLI receiving "codex-gpt-5.1-codex-max" instead of the
expected "gpt-5.1-codex-max".

- Add validateBareModelId() helper with fail-fast validation
- Add originalModel field to ExecuteOptions for logging
- Update all providers to validate model has no prefix
- Centralize prefix stripping in service layer
- Remove redundant prefix stripping from individual providers

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-09 01:40:29 +05:30
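A sketch of the fail-fast validation described above; the prefix list is an assumption beyond the `codex-` example given in the commit.

```ts
const KNOWN_PREFIXES = ['codex-', 'cursor-', 'opencode-'];

// Service layer: strip a known prefix before handing the model to a provider.
export function stripProviderPrefix(modelId: string): string {
  const prefix = KNOWN_PREFIXES.find((p) => modelId.startsWith(p));
  return prefix ? modelId.slice(prefix.length) : modelId;
}

// Provider side: fail fast if a prefixed ID slips through.
export function validateBareModelId(modelId: string): string {
  const prefix = KNOWN_PREFIXES.find((p) => modelId.startsWith(p));
  if (prefix) {
    throw new Error(
      `Expected a bare model ID but got '${modelId}'; strip the '${prefix}' prefix in the service layer.`
    );
  }
  return modelId;
}
```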
Shirone
0f9232ea33 fix: add API key header to verifySession for Electron auth
The verifySession() function was not including the X-API-Key header
when making requests to /api/settings/status, causing Electron mode
to fail authentication on app startup despite having a valid API key.

This resulted in users seeing "You've been logged out" screen
immediately after launching the Electron app.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 21:00:22 +01:00
DhanushSantosh
a815be6a20 fix: update model resolver for codex- prefix and fix thinking/reasoning separation
- Add codex- prefix support in model resolver
- Fix modelSupportsThinking() to properly detect provider types
- Update CODEX_MODEL_PREFIXES to include codex- prefix
2026-01-09 00:45:31 +05:30
DhanushSantosh
7b4667eba9 fix: add provider prefixes to CLI models for clear separation
- Add 'codex-' prefix to all Codex CLI model IDs
- Add 'cursor-' prefix to Cursor CLI GPT model IDs
- Update provider-utils.ts to use prefix-based matching
- Update UI components to use prefixed model IDs
- Fix model routing to prevent Cursor picking up Codex models
2026-01-09 00:03:49 +05:30
DhanushSantosh
08ccf2632a fix: expand Codex model routing to cover ALL Codex models
- Update isCursorModel to exclude ALL Codex models from Cursor routing
- Check against CODEX_MODEL_CONFIG_MAP for comprehensive exclusion
- Includes: gpt-5.2-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2, gpt-5.1
- Also excludes overlapping models like gpt-5.2 and gpt-5.1 that exist in both maps
- Update test to expect CodexProvider for gpt-5.2 (correct behavior)

This ensures ALL Codex CLI models route to Codex provider, not Cursor.
Previously only gpt-5.1-codex-* and gpt-5.2-codex-* were excluded.
2026-01-08 23:17:35 +05:30
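A sketch of the exclusion check described in the routing fix above; the map contents are placeholders, and only the check-against-`CODEX_MODEL_CONFIG_MAP` idea comes from the commit.

```ts
// Placeholder maps; the real entries live in the model constants.
const CODEX_MODEL_CONFIG_MAP: Record<string, unknown> = {
  'gpt-5.2-codex': {},
  'gpt-5.1-codex-max': {},
  'gpt-5.1-codex-mini': {},
  'gpt-5.2': {},
  'gpt-5.1': {},
};

const CURSOR_MODEL_MAP: Record<string, unknown> = {
  'gpt-5.2': {},
  'gpt-5.1': {},
};

export function isCursorModel(modelId: string): boolean {
  // Any model known to Codex routes to the Codex provider, even if Cursor also lists it.
  if (modelId in CODEX_MODEL_CONFIG_MAP) return false;
  return modelId in CURSOR_MODEL_MAP;
}
```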
DhanushSantosh
7583598a05 fix: differentiate Codex CLI models from Cursor CLI models
- Fix isCursorModel to exclude Codex-specific models (gpt-5.1-codex-*, gpt-5.2-codex-*)
- These models should route to Codex provider, not Cursor provider
- Add CODEX_YOLO_FLAG constant for --dangerously-bypass-approvals-and-sandbox
- Always use YOLO flag in codex-provider for full permissions
- Simplify codex CLI args to minimal set with YOLO flag
- Update tests to reflect new behavior with YOLO flag

This fixes the bug where selecting a Codex model (e.g., gpt-5.1-codex-max)
was incorrectly spawning cursor-agent instead of codex exec.

The root cause was:
1. Cursor provider had higher priority (10) than Codex (5)
2. isCursorModel() returned true for Codex models in CURSOR_MODEL_MAP
3. Models like gpt-5.1-codex-max routed to Cursor instead of Codex

The fix:
1. isCursorModel now excludes Codex-specific model IDs
2. Codex always uses --dangerously-bypass-approvals-and-sandbox flag
2026-01-08 23:03:03 +05:30
DhanushSantosh
7e68691e92 fix: remove sandbox and approval policy dropdowns from Codex settings UI
- Remove SANDBOX_OPTIONS and APPROVAL_OPTIONS from codex-settings.tsx
- Remove Select components for sandbox mode and approval policy
- Remove codexSandboxMode and codexApprovalPolicy from CodexSettingsProps interface
- Update codex-settings-tab.tsx to pass only simplified props
- Codex now always runs with full permissions (--dangerously-bypass-approvals-and-sandbox)

The UI no longer shows sandbox or approval settings since Codex always uses full permissions.
2026-01-08 22:27:18 +05:30
DhanushSantosh
4b2034b834 fix: Codex CLI always runs with full permissions (--dangerously-bypass-approvals-and-sandbox)
- Always use CODEX_YOLO_FLAG (--dangerously-bypass-approvals-and-sandbox) for Codex
- Remove all conditional logic - no sandbox/approval config, no config overrides
- Simplify codex-provider.ts to always run Codex in full-permissions mode
- Codex always gets: full access, no approvals, web search enabled, images enabled
- Update services to apply full-permission settings automatically for Codex models
- Remove sandbox and approval controls from UI settings page
- Update tests to reflect new behavior (some pre-existing tests disabled/updated)

Note: 3 pre-existing tests are disabled/skipped because they expect the old behavior (updating them requires a separate PR)
2026-01-08 21:56:32 +05:30
DhanushSantosh
4dcf54146c feat: add reasoning effort support for Codex models
- Add ReasoningEffortSelector component for UI selection
- Integrate reasoning effort in feature creation/editing dialogs
- Add reasoning effort support to phase model selector
- Update agent service and board actions to handle reasoning effort
- Add reasoning effort fields to feature and settings types
- Update model selector and agent info panel with reasoning effort display
- Enhance agent context parser for reasoning effort processing

Reasoning effort allows fine-tuned control over Codex model reasoning
capabilities, providing options from 'none' to 'xhigh' for different
task complexity requirements.
2026-01-08 20:43:36 +05:30
DhanushSantosh
8a9715adef fix: AI profile display issues for Codex models
- Fix Codex profiles showing as 'Claude Sonnet' in UI
- Add proper Codex model display in profile cards and selectors
- Add useEffect to sync profile form data when editing profiles
- Update provider badges to show 'Codex' with OpenAI icon
- Enhance profile selection validation across feature dialogs
- Add getCodexModelLabel support to display functions

The issue was that profile display functions only handled Claude and Cursor
providers, causing Codex profiles to fallback to 'sonnet' display.
2026-01-08 17:13:45 +05:30
DhanushSantosh
d253d494ba feat: enhance Codex usage tracking with multiple data sources
- Try OpenAI API if API key is available
- Parse rate limit info from Codex CLI responses
- Extract plan type from Codex auth file
- Provide helpful configuration message when usage unavailable
2026-01-08 15:13:05 +05:30
DhanushSantosh
d1f7794afa Merge branch 'main' of https://github.com/AutoMaker-Org/automaker into feature/codex-cli 2026-01-08 14:38:23 +05:30
DhanushSantosh
4d80a93710 refactor: update Codex models to latest OpenAI models
- Replace gpt-5-codex, gpt-5-codex-mini, codex-1, codex-mini-latest, gpt-5
- Add gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2, gpt-5.1
- Update thinking support for all Codex models
2026-01-08 14:22:35 +05:30
webdevcody
d70faf3b28 Merge branch 'v0.9.0rc' into feat/subagents-skills 2026-01-08 00:33:30 -05:00
Web Dev Cody
271749a5a4 Merge pull request #375 from yumesha/main
fix background image and create test in case background image fails again
2026-01-08 00:26:31 -05:00
Web Dev Cody
d1bd131cab Merge pull request #378 from AutoMaker-Org/remove-sandbox-as-it-is-broken
completely remove sandbox-related code as the downstream libraries do …
2026-01-08 00:25:30 -05:00
webdevcody
96fe90ca65 chore: remove worktree integration E2E test file
- Deleted the worktree integration test file to streamline the test suite and remove obsolete tests. This change helps maintain focus on relevant test cases and improves overall test management.
2026-01-08 00:23:00 -05:00
webdevcody
fd5f7b873a fix: improve worktree branch handling in list route
- Updated the logic in the createListHandler to ensure that the branch name is correctly assigned, especially for the main worktree when it may be missing.
- Added checks to handle cases where the worktree directory might not exist, ensuring that removed worktrees are accurately tracked.
- Enhanced the final worktree entry handling to account for scenarios where the output does not end with a blank line, improving robustness.
2026-01-08 00:13:12 -05:00
webdevcody
959467de90 feat: add UI test command and clean up integration test
- Introduced a new npm script "test:ui" for running UI tests in the apps/ui workspace.
- Removed unnecessary login screen handling from the worktree integration test to streamline the test flow.
2026-01-08 00:07:23 -05:00
webdevcody
69434fe356 feat: enhance login view with retry mechanism for server checks
- Added useRef to manage AbortController for retry requests in the LoginView component.
- Implemented logic to abort any ongoing retry requests before initiating a new server check, improving error handling and user experience during login attempts.
2026-01-07 23:18:52 -05:00
webdevcody
dc264bd164 feat: update E2E fixture settings and improve test repository initialization
- Changed the E2E settings to enable the use of worktrees for better test isolation.
- Modified the test repository initialization to explicitly set the initial branch to 'main', ensuring compatibility across different git versions and avoiding CI environment discrepancies.
2026-01-07 23:10:02 -05:00
webdevcody
8992f667c7 refactor: clean up settings service and improve E2E fixture descriptions
- Removed the redundant call to ignore empty array overwrite for 'enabledCodexModels' in the SettingsService.
- Reformatted the description of the 'Heavy Task' profile in the E2E fixture setup script for better readability.
2026-01-07 23:04:27 -05:00
webdevcody
eb627ef323 feat: enhance E2E test setup and error handling
- Updated Playwright configuration to explicitly unset ALLOWED_ROOT_DIRECTORY for unrestricted testing paths.
- Improved E2E fixture setup script to reset server settings to a known state, ensuring test isolation.
- Enhanced error handling in ContextView and WelcomeView components to reset state and provide user feedback on failures.
- Updated tests to ensure proper navigation and visibility checks during logout processes, improving reliability.
2026-01-07 23:01:57 -05:00
webdevcody
d8cdb0bf7a feat: enhance global settings update with data loss prevention
- Added safeguards to prevent overwriting non-empty arrays with empty arrays during global settings updates, specifically for the 'projects' field.
- Implemented logging for updates to assist in diagnosing accidental wipes of critical settings.
- Updated tests to verify that projects are preserved during logout transitions and that theme changes are ignored if a project wipe is attempted.
- Enhanced the settings synchronization logic to ensure safe handling during authentication state changes.
2026-01-07 21:38:46 -05:00
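A minimal sketch of the empty-array safeguard described above; the settings shape and logging are simplified assumptions.

```ts
interface GlobalSettingsLike {
  projects: unknown[];
  [key: string]: unknown;
}

export function safeMergeSettings(
  current: GlobalSettingsLike,
  incoming: Partial<GlobalSettingsLike>
): GlobalSettingsLike {
  const next: GlobalSettingsLike = { ...current, ...incoming };
  // Never let an empty array wipe a non-empty 'projects' list.
  if (
    Array.isArray(incoming.projects) &&
    incoming.projects.length === 0 &&
    current.projects.length > 0
  ) {
    console.warn('Ignoring empty "projects" overwrite to prevent accidental settings loss');
    next.projects = current.projects;
  }
  return next;
}
```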
webdevcody
8c68c24716 feat: implement Codex CLI authentication check and integrate with provider
- Added a new utility for checking Codex CLI authentication status using the 'codex login status' command.
- Integrated the authentication check into the CodexProvider's installation detection and authentication methods.
- Updated Codex CLI status display in the UI to reflect authentication status and method.
- Enhanced error handling and logging for better debugging during authentication checks.
- Refactored related components to ensure consistent handling of authentication across the application.
2026-01-07 21:06:39 -05:00
antdev
9b302583c4 fix background image part 2 2026-01-08 09:23:57 +08:00
webdevcody
47c2d795e0 chore: update e2e test results upload configuration
- Renamed the upload step to clarify that it includes screenshots, traces, and videos.
- Changed the condition for uploading test results to always run, ensuring artifacts are uploaded regardless of test outcome.
- Added a new option to ignore if no files are found during the upload process.
2026-01-07 20:00:52 -05:00
yumesha
d608d8c2d4 Merge branch 'AutoMaker-Org:main' into main 2026-01-08 08:52:59 +08:00
webdevcody
f737b1f30a merge in v0.9.0 2026-01-07 18:22:32 -05:00
webdevcody
8b36fce7d7 refactor: improve test stability and clarity in various test cases
- Updated the 'Add Context Image' test to simplify file verification by relying on UI visibility instead of disk checks.
- Enhanced the 'Feature Manual Review Flow' test with better project setup and API interception to ensure consistent test conditions.
- Improved the 'AI Profiles' test by replacing arbitrary timeouts with dynamic checks for profile count.
- Refined the 'Project Creation' and 'Open Existing Project' tests to ensure proper project visibility and settings management during tests.
- Added mechanisms to prevent settings hydration from restoring previous project states, ensuring tests run in isolation.
- Removed unused test image from fixtures to clean up the repository.
2026-01-07 18:07:27 -05:00
DhanushSantosh
30a2a1c921 feat: add unified usage popover with Claude and Codex tabs
- Created combined UsagePopover component with tab switching between providers
- Added Codex usage API endpoint and service (returns not available message)
- Updated BoardHeader to show single usage button for both providers
- Enhanced type definitions for Codex usage with primary/secondary rate limits
- Wired up Codex usage API in HTTP client

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-08 03:37:37 +05:30
webdevcody
763f9832c3 feat: enhance test setup with splash screen handling and sandbox warnings
- Added `skipSandboxWarning` option to project setup functions to streamline testing.
- Implemented logic to disable the splash screen during tests by setting `automaker-splash-shown` in sessionStorage.
- Introduced a new package.json for a test project and added a test image to the fixtures for improved testing capabilities.
2026-01-07 16:31:48 -05:00
webdevcody
11b1bbc143 feat: implement splash screen handling in navigation and interactions
- Added a new function `waitForSplashScreenToDisappear` to manage splash screen visibility, ensuring it does not block user interactions.
- Integrated splash screen checks in various navigation functions and interaction methods to enhance user experience by waiting for the splash screen to disappear before proceeding.
- Updated test setup to disable the splash screen during tests for consistent testing behavior.
2026-01-07 16:10:17 -05:00
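The splash-screen handling above lends itself to a small pair of Playwright helpers; this is a sketch under assumed names (the `[data-testid="splash-screen"]` selector and the `'true'` flag value are guesses, only the `automaker-splash-shown` key comes from the commits):

```ts
import type { Page } from '@playwright/test';

// Hypothetical selector; the real app may identify the splash screen differently.
const SPLASH_SELECTOR = '[data-testid="splash-screen"]';

// Wait until the splash screen is gone (or was never rendered) before interacting.
export async function waitForSplashScreenToDisappear(page: Page): Promise<void> {
  const splash = page.locator(SPLASH_SELECTOR);
  if (await splash.count()) {
    await splash.waitFor({ state: 'hidden', timeout: 15_000 });
  }
}

// Test setup: mark the splash as already shown so it never blocks interactions.
export async function disableSplashScreen(page: Page): Promise<void> {
  await page.addInitScript(() => {
    sessionStorage.setItem('automaker-splash-shown', 'true');
  });
}
```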
webdevcody
7176d3e513 fix: enhance sandbox compatibility checks in sdk-options and improve login view effect handling
- Added additional cloud storage path patterns for macOS and Linux to the checkSandboxCompatibility function, ensuring better compatibility with sandbox environments.
- Revised the login view to simplify the initial server/session check logic, removing unnecessary ref guard and improving responsiveness during component unmounting.
2026-01-07 15:54:17 -05:00
DhanushSantosh
9c3ba34b51 Merge remote-tracking branch 'upstream/v0.9.0rc' into feature/codex-cli 2026-01-08 01:57:27 +05:30
DhanushSantosh
ff3af937da fix: update event type in CodexProvider from threadCompleted to turnCompleted
- Changed the event type from 'thread.completed' to 'turn.completed' in the CODEX_EVENT_TYPES constant and its usage within the CodexProvider class.
- This update aligns the event handling with the intended functionality, ensuring correct event processing.
2026-01-08 01:54:02 +05:30
webdevcody
b9fcb916a6 fix: add missing checkSandboxCompatibility function to sdk-options
The codex-provider.ts imports this function but it was missing from
sdk-options.ts. This adds the implementation that checks if sandbox
mode is compatible with the working directory (disables sandbox for
cloud storage paths).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 15:13:52 -05:00
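As a rough illustration of what a cloud-storage check like this can look like (the path patterns and return shape below are assumptions, not the project's actual list):

```ts
// Disable sandbox mode when the working directory lives inside a cloud-synced
// folder, where sandboxed file access tends to break.
const CLOUD_STORAGE_PATTERNS: RegExp[] = [
  /\/Library\/CloudStorage\//, // macOS iCloud / Dropbox / OneDrive mounts
  /\/Dropbox(\/|$)/,
  /\/OneDrive(\/|$)/,
  /\/Google Drive(\/|$)/,
];

export function checkSandboxCompatibility(workingDirectory: string): {
  sandboxSupported: boolean;
  reason?: string;
} {
  const hit = CLOUD_STORAGE_PATTERNS.find((pattern) => pattern.test(workingDirectory));
  if (hit) {
    return {
      sandboxSupported: false,
      reason: `Sandbox disabled: ${workingDirectory} looks like a cloud storage path`,
    };
  }
  return { sandboxSupported: true };
}
```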
webdevcody
cfa1f114fd Merge branch 'v0.9.0rc' into remove-sandbox-as-it-is-broken 2026-01-07 15:01:31 -05:00
Web Dev Cody
761929ea8e Merge pull request #367 from DhanushSantosh/feature/codex-cli
Codex CLI Implementation
2026-01-07 14:44:48 -05:00
webdevcody
4d36e66deb refactor: update session cookie options and improve login view authentication flow
- Revised SameSite attribute for session cookies to clarify its behavior in documentation.
- Streamlined cookie clearing logic in the authentication route by utilizing `getSessionCookieOptions()`.
- Enhanced the login view to support aborting server checks, improving responsiveness during component unmounting.
- Ensured proper handling of server check retries with abort signal integration for better user experience.
2026-01-07 14:33:55 -05:00
webdevcody
e58e389658 feat: implement settings migration from localStorage to server
- Added logic to perform settings migration, merging localStorage data with server settings if necessary.
- Introduced `localStorageMigrated` flag to prevent re-migration on subsequent app loads.
- Updated `useSettingsMigration` hook to handle migration and hydration of settings.
- Ensured localStorage values are preserved post-migration for user flexibility.
- Enhanced documentation within the migration logic for clarity.
2026-01-07 14:29:32 -05:00
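The migration flow can be pictured roughly as follows; `serverApi`, `LOCAL_KEYS`, and the merge precedence (server values win, localStorage only fills gaps) are assumptions, while the `localStorageMigrated` flag comes from the commit:

```ts
// One-time migration: copy known localStorage keys to the server, then set a
// flag so the merge never runs again on later app loads.
const LOCAL_KEYS = ['theme', 'fontFamily', 'projects'] as const;

export async function migrateLocalStorageToServer(serverApi: {
  getSettings(): Promise<Record<string, unknown> & { localStorageMigrated?: boolean }>;
  updateSettings(patch: Record<string, unknown>): Promise<void>;
}): Promise<void> {
  const serverSettings = await serverApi.getSettings();
  if (serverSettings.localStorageMigrated) return; // already migrated

  const localValues: Record<string, unknown> = {};
  for (const key of LOCAL_KEYS) {
    const raw = localStorage.getItem(key);
    if (raw !== null) localValues[key] = raw;
  }

  // localStorage entries are intentionally left in place after the merge.
  await serverApi.updateSettings({
    ...localValues,
    ...serverSettings,
    localStorageMigrated: true,
  });
}
```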
DhanushSantosh
821827f850 refactor: simplify config value formatting in CodexProvider
- Removed unnecessary JSON.stringify conversion for string values in formatConfigValue function, streamlining the value formatting process.
- This change enhances code clarity and reduces complexity in the configuration handling of the CodexProvider.
2026-01-08 00:27:11 +05:30
DhanushSantosh
9d8464cceb feat: enhance CodexProvider argument handling and configuration
- Added approval policy and web search features to the CodexProvider's argument construction, improving flexibility in command execution.
- Updated unit tests to validate the new configuration handling for approval and search features, ensuring accurate argument parsing.

These changes enhance the functionality of the CodexProvider, allowing for more dynamic command configurations and improving test coverage.
2026-01-08 00:16:57 +05:30
DhanushSantosh
48a4fa5c6c refactor: streamline argument handling in CodexProvider
- Reorganized argument construction in CodexProvider to separate pre-execution arguments from global flags, improving clarity and maintainability.
- Updated unit tests to reflect changes in argument order, ensuring correct validation of approval and search indices.

These changes enhance the structure of the CodexProvider's command execution process and improve test reliability.
2026-01-08 00:04:52 +05:30
webdevcody
70c04b5a3f feat: update session cookie options and enhance authentication flow
- Changed SameSite attribute for session cookies from 'strict' to 'lax' to allow cross-origin fetches, improving compatibility with various client requests.
- Updated cookie clearing logic in the authentication route to use `res.cookie()` for better reliability in cross-origin environments.
- Refactored the login view to implement a state machine for managing authentication phases, enhancing clarity and maintainability.
- Introduced a new logged-out view to inform users of session expiration and provide options to log in or retry.
- Added account and security sections to the settings view, allowing users to manage their account and security preferences more effectively.
2026-01-07 12:55:23 -05:00
DhanushSantosh
24ea10e818 feat: enhance Codex authentication and API key management
- Introduced a new method to check Codex authentication status, allowing for better handling of API keys and OAuth tokens.
- Updated API key management to include OpenAI, enabling users to manage their keys more effectively.
- Enhanced the CodexProvider to support session ID tracking and deduplication of text blocks in assistant messages.
- Improved error handling and logging in authentication routes, providing clearer feedback to users.

These changes improve the overall user experience and security of the Codex integration, ensuring smoother authentication processes and better management of API keys.
2026-01-07 22:49:30 +05:30
webdevcody
927451013c feat: add sandbox risk confirmation and rejection screens
- Introduced `SandboxRiskDialog` to prompt users about risks when running outside a containerized environment.
- Added `SandboxRejectionScreen` for users who deny the sandbox risk confirmation, providing options to reload or restart the app.
- Updated settings view and danger zone section to manage sandbox warning preferences.
- Implemented a new API endpoint to check if the application is running in a containerized environment.
- Enhanced state management to handle sandbox warning settings across the application.
2026-01-07 10:41:43 -05:00
webdevcody
0d206fe75f feat: enhance login view with session verification and loading state
- Implemented session verification on component mount using exponential backoff to handle server live reload scenarios.
- Added loading state to the login view while checking for an existing session, improving user experience.
- Removed unused setup wizard navigation from the API keys section for cleaner code.
2026-01-07 10:18:06 -05:00
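A sketch of the backoff loop, assuming a hypothetical `checkSession()` that resolves to true once a valid session is found:

```ts
// Retry the session check with exponential backoff so a server that is in the
// middle of a live reload gets a few chances to come back before we give up.
export async function verifySessionWithBackoff(
  checkSession: () => Promise<boolean>,
  { maxAttempts = 5, baseDelayMs = 500 } = {}
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      if (await checkSession()) return true;
    } catch {
      // Treat transient errors like a failed attempt and retry.
    }
    // 500ms, 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
  return false;
}
```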
webdevcody
11accac5ae feat: implement API-first settings management and description history tracking
- Migrated settings persistence from localStorage to an API-first approach, ensuring consistency between Electron and web modes.
- Introduced `useSettingsSync` hook for automatic synchronization of settings to the server with debouncing.
- Enhanced feature update logic to track description changes with a history, allowing for better management of feature descriptions.
- Updated various components and services to utilize the new settings structure and description history functionality.
- Removed persist middleware from Zustand store, streamlining state management and improving performance.
2026-01-07 10:05:54 -05:00
DhanushSantosh
2250367ddc chore: update npm audit level in CI workflow
- Changed the npm audit command in the security audit workflow to check for critical vulnerabilities instead of moderate ones.
- This adjustment enhances the security posture of the application by ensuring that critical issues are identified and addressed promptly.
2026-01-07 20:24:49 +05:30
DhanushSantosh
fe305bbc81 feat: add vision support validation for image processing
- Introduced a new method in ProviderFactory to check if a model supports vision/image input.
- Updated AgentService and AutoModeService to validate vision support before processing images, throwing an error if the model does not support it.
- Enhanced error messages to guide users on switching models or removing images if necessary.

These changes improve the robustness of image processing by ensuring compatibility with the selected models.
2026-01-07 20:12:39 +05:30
DhanushSantosh
92195340c6 feat: enhance authentication handling and API key validation
- Added optional API keys for OpenAI and Cursor to the .env.example file.
- Implemented API key validation in CursorProvider to ensure valid keys are used.
- Introduced rate limiting in Claude and Codex authentication routes to prevent abuse.
- Created secure environment handling for authentication without modifying process.env.
- Improved error handling and logging for authentication processes, enhancing user feedback.

These changes improve the security and reliability of the authentication mechanisms across the application.
2026-01-07 19:26:42 +05:30
webdevcody
1316ead8c8 completely remove sandbox-related code as the downstream libraries do not work with it on various OSes 2026-01-07 08:54:14 -05:00
DhanushSantosh
03b33106e0 fix: replace git+ssh URLs with https in package-lock.json
- Configure git to use HTTPS for GitHub URLs globally
- Run npm run fix:lockfile to rewrite package-lock.json
- Resolves lint-lockfile failure in CI/CD environments
2026-01-07 19:09:26 +05:30
DhanushSantosh
251f0fd88e chore: update CI configuration and enhance test stability
- Added deterministic API key and environment variables in e2e-tests.yml to ensure consistent test behavior.
- Refactored CodexProvider tests to improve type safety and mock handling, ensuring reliable test execution.
- Updated provider-factory tests to mock installation detection for CodexProvider, enhancing test isolation.
- Adjusted Playwright configuration to conditionally use external backend, improving flexibility in test environments.
- Enhanced kill-test-servers script to handle external server scenarios, ensuring proper cleanup of test processes.

These changes improve the reliability and maintainability of the testing framework, leading to a more stable development experience.
2026-01-07 19:09:26 +05:30
DhanushSantosh
96f154d440 refactor: enhance type safety and error handling in navigation and project creation
- Updated navigation functions to cast route paths correctly, improving type safety.
- Added error handling for the templates API in project creation hooks to ensure robustness.
- Refactored task progress panel to improve type handling for feature data.
- Introduced type checks and default values in various components to enhance overall stability.

These changes improve the reliability and maintainability of the application, ensuring better user experience and code quality.
2026-01-07 19:09:25 +05:30
DhanushSantosh
27c6d5a3bb refactor: improve error handling and CLI integration
- Updated CodexProvider to read prompts from stdin to prevent shell escaping issues.
- Enhanced AgentService to handle streamed error messages from providers, ensuring a consistent user experience.
- Modified UI components to display error messages clearly, including visual indicators for errors in chat bubbles.
- Updated CLI status handling to support both Claude and Codex APIs, improving compatibility and user feedback.

These changes enhance the robustness of the application and improve the user experience during error scenarios.
2026-01-07 19:09:25 +05:30
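The stdin approach mentioned above ("read prompts from stdin to prevent shell escaping issues") boils down to writing the prompt into the child process instead of passing it as an argument; a generic sketch, with no claim about the actual Codex CLI flags:

```ts
import { spawn } from 'node:child_process';

// Pass the prompt via stdin so quotes and shell metacharacters never need escaping.
export function runWithPromptOnStdin(command: string, args: string[], prompt: string) {
  const child = spawn(command, args); // stdio defaults to pipes

  child.stdin.write(prompt);
  child.stdin.end();

  child.stdout.on('data', (chunk: Buffer) => process.stdout.write(chunk));
  child.stderr.on('data', (chunk: Buffer) => process.stderr.write(chunk));

  return child;
}
```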
DhanushSantosh
a57dcc170d feature/codex-cli 2026-01-07 19:09:24 +05:30
Shirone
5c601ff200 feat: implement subagents configuration management
- Added a new function to retrieve subagents configuration from settings, allowing users to enable/disable subagents and select sources for loading them.
- Updated the AgentService to incorporate subagents configuration, dynamically adding tools based on the settings.
- Enhanced the UI components to manage subagents, including a settings section for enabling/disabling and selecting sources.
- Introduced a new hook for managing subagents settings state and interactions.

These changes improve the flexibility and usability of subagents within the application, enhancing user experience and configuration options.
2026-01-07 10:32:42 +01:00
webdevcody
4d4025ca06 fix: improve WebSocket connection handling in Electron mode
- Updated the logic for establishing WebSocket connections in Electron mode to handle cases where the API key is unavailable.
- Added fallback to wsToken/cookie authentication for real-time event updates, enhancing reliability in external server scenarios.
- Improved logging for better debugging of WebSocket connection issues.
2026-01-06 22:29:08 -05:00
Web Dev Cody
77f253c7bd Merge pull request #376 from AutoMaker-Org/electron-with-docker-api
feat: add support for external server mode with Docker integration
2026-01-06 21:12:49 -05:00
Shirone
fe13d47b24 refactor: improve agent file model validation and settings source deduplication
- Enhanced model parsing in agent discovery to validate against allowed values and log warnings for invalid models.
- Refactored settingSources construction in AgentService to utilize Set for automatic deduplication, simplifying the merging of user and project settings with skills sources.
- Updated tests to reflect changes in allowedTools for improved functionality.

These changes enhance the robustness of agent configuration and streamline settings management.
2026-01-07 00:05:33 +01:00
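The Set-based deduplication is a one-liner in spirit; the source names below are illustrative:

```ts
type SettingSource = string;

// A Set drops duplicates automatically while preserving insertion order.
function buildSettingSources(
  userSources: SettingSource[],
  projectSources: SettingSource[],
  skillsSources: SettingSource[]
): SettingSource[] {
  return [...new Set([...userSources, ...projectSources, ...skillsSources])];
}

// buildSettingSources(['user'], ['project', 'user'], ['skills'])
//   => ['user', 'project', 'skills']
```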
Shirone
33acf502ed refactor: enhance skills and subagents settings management
- Updated useSkillsSettings and useSubagents hooks to improve state management and error handling.
- Added new settings API methods for skills configuration and agent discovery.
- Refactored app-store to include enableSkills and skillsSources state management.
- Enhanced settings migration to sync skills configuration with the server.

These changes streamline the management of skills and subagents, ensuring better integration and user experience.
2026-01-06 23:43:31 +01:00
webdevcody
66557b2093 feat: add support for external server mode with Docker integration
- Introduced a new `docker-compose.dev-server.yml` for running the backend API in a container, enabling local Electron to connect to it.
- Updated `dev.mjs` to include a new option for launching the Docker server container.
- Enhanced the UI application to support external server mode, allowing session-based authentication and adjusting routing logic accordingly.
- Added utility functions to check and cache the external server mode status for improved performance.
- Updated various components to handle authentication and routing based on the server mode.
2026-01-06 17:26:25 -05:00
Web Dev Cody
c713cef484 Merge pull request #360 from AutoMaker-Org/feat/mass-edit-backlog-features
feat: add mass edit feature for backlog kanban cards
2026-01-06 16:17:14 -05:00
webdevcody
5f7cbd3435 refactor: improve logging format in launchDockerContainers function
- Updated the logging format in the launchDockerContainers function to enhance readability by breaking long lines into multiple lines. This change improves the clarity of log messages when starting Docker containers.
2026-01-06 16:14:10 -05:00
webdevcody
a4290b5863 feat: enhance development environment with Docker support and UI improvements
- Introduced a new `docker-compose.dev.yml` for development mode, enabling live reload and improved container management.
- Updated `dev.mjs` to utilize `launchDockerDevContainers` for starting development containers with live reload capabilities.
- Refactored `printModeMenu` to differentiate between development and production Docker options.
- Enhanced the `BoardView` and `KanbanBoard` components by streamlining props and improving UI interactions.
- Removed the `start.mjs` script, consolidating production launch logic into `dev.mjs` for a more unified approach.
2026-01-06 16:11:29 -05:00
webdevcody
f9b0a38642 Merge branch 'main' into feat/mass-edit-backlog-features 2026-01-06 14:38:59 -05:00
antdev
46636cf385 fix background image and create test in case background image fails again 2026-01-07 02:04:22 +08:00
antdev
e24a6894a5 Merge remote-tracking branch 'refs/remotes/origin/main'
# Conflicts:
#	apps/ui/src/routes/__root.tsx
2026-01-07 01:48:34 +08:00
antdev
cf9289e21a fix background image and create test in case background image fails again 2026-01-07 01:21:46 +08:00
webdevcody
fe7bc954ba chore: add OpenSSH client to Dockerfile for enhanced SSH capabilities
- Updated the Dockerfile to include the OpenSSH client, improving the container's ability to handle SSH connections and operations.
2026-01-06 00:36:45 -05:00
Shirone
236989bf6e feat: add skills and subagents configuration support
- Updated .gitignore to include skills directory.
- Introduced agent discovery functionality to scan for AGENT.md files in user and project directories.
- Added new API endpoint for discovering filesystem agents.
- Implemented UI components for managing skills and viewing custom subagents.
- Enhanced settings helpers to retrieve skills configuration and custom subagents.
- Updated agent service to incorporate skills and subagents in task delegation.

These changes enhance the capabilities of the system by allowing users to define and manage skills and custom subagents effectively.
2026-01-06 04:31:57 +01:00
Web Dev Cody
8a6a83bf52 Merge pull request #366 from AutoMaker-Org/cursor-docker-oauth
feat: add Cursor CLI installation attempts documentation and enhance …
2026-01-05 21:53:31 -05:00
webdevcody
84b582ffa7 refactor: streamline Docker container management and enhance utility functions
- Removed redundant Docker image rebuilding logic from `dev.mjs` and `start.mjs`, centralizing it in the new `launchDockerContainers` function within `launcher-utils.mjs`.
- Introduced `sanitizeProjectName` and `shouldRebuildDockerImages` functions to improve project name handling and Docker image management.
- Updated the Docker launch process to provide clearer logging and ensure proper handling of environment variables, enhancing the overall development experience.
2026-01-05 21:50:12 -05:00
webdevcody
bd5176165d refactor: remove duplicate logger initialization in useCliStatus hook
- Eliminated redundant logger declaration within the useCliStatus hook to improve code clarity and prevent potential performance issues.
- This change enhances the maintainability of the code by ensuring the logger is created only once outside the hook.
2026-01-05 21:38:18 -05:00
webdevcody
49f32c4d59 Merge branch 'main' of github.com:AutoMaker-Org/automaker into cursor-docker-oauth 2026-01-05 21:29:20 -05:00
webdevcody
0af5bc86f4 Merge branch 'cursor-docker-oauth' of github.com:AutoMaker-Org/automaker into cursor-docker-oauth 2026-01-05 21:29:01 -05:00
webdevcody
bc5a36c5f4 feat: enhance project name sanitization and improve Docker image naming
- Added a `sanitizeProjectName` function to ensure project names are safe for shell commands and Docker image names by converting them to lowercase and removing non-alphanumeric characters.
- Updated `dev.mjs` and `start.mjs` to utilize the new sanitization function when determining Docker image names, enhancing security and consistency.
- Refactored the Docker entrypoint script to ensure proper permissions for the Cursor CLI config directory, improving setup reliability.
- Clarified documentation regarding the storage location of OAuth tokens for the Cursor CLI on Linux.

These changes improve the robustness of the Docker setup and enhance the overall development workflow.
2026-01-05 21:28:42 -05:00
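An approximate version of the sanitizer described above (the exact character rules and fallback are guesses):

```ts
// Lowercase the project name and strip non-alphanumeric characters so it is
// safe in shell commands and valid as part of a Docker image name.
export function sanitizeProjectName(name: string): string {
  const sanitized = name.toLowerCase().replace(/[^a-z0-9]/g, '');
  return sanitized || 'automaker'; // hypothetical fallback for all-symbol names
}

// sanitizeProjectName('My Project (v2)!') => 'myprojectv2'
```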
Web Dev Cody
2934d73db2 Merge pull request #368 from AutoMaker-Org/fix/small-bugs
fix: small bugs
2026-01-05 20:23:42 -05:00
Shirone
a4968f7235 fix: show success toast only during project creation flow
- Updated the useSpecRegeneration hook to conditionally display the success toast message only when the user is in the active project creation flow, preventing unnecessary notifications during regular spec regeneration.
2026-01-06 02:04:08 +01:00
Shirone
b8e0c18c53 fix: theme switch bug
- When the user had set a theme at the project level and went through the setup wizard again to change the theme, it was not updating because only the global theme was updated and the app reverted to showing the current project theme.
2026-01-06 02:00:41 +01:00
Shirone
d0b3e0d9bb refactor: move logger initialization outside of useCliStatus function
- Moved the logger initialization to the top of the file for better readability and to avoid re-initialization on each function call.
- This change enhances the performance and clarity of the code in the useCliStatus hook.
- fix infinite call loop caused by re-renders triggered by the logger
2026-01-06 01:53:08 +01:00
Kacper
2a0719e00c refactor: move logger initialization outside of useCliStatus hook
- Moved the logger creation outside the hook to prevent infinite re-renders.
- Updated dependencies in the checkStatus function to remove logger from the dependency array.

These changes enhance performance and maintainability of the useCliStatus hook.
2026-01-06 00:58:31 +01:00
webdevcody
af394183e6 feat: add Cursor CLI installation attempts documentation and enhance Docker setup
- Introduced a new markdown file summarizing various attempts to install the Cursor CLI in Docker, detailing approaches, results, and key learnings.
- Updated Dockerfile to ensure proper installation of Cursor CLI for the non-root user, including necessary PATH adjustments for interactive shells.
- Enhanced entrypoint script to manage OAuth tokens for both Claude and Cursor CLIs, ensuring correct permissions and directory setups.
- Added scripts for extracting OAuth tokens from macOS Keychain and Linux JSON files for seamless integration with Docker.
- Updated docker-compose files to support persistent storage for CLI configurations and authentication tokens.

These changes improve the development workflow and provide clear guidance on CLI installation and authentication processes.
2026-01-05 18:13:14 -05:00
webdevcody
5d675561ba chore: release v0.8.0
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 16:31:11 -05:00
Web Dev Cody
27fb3f2777 Merge pull request #364 from AutoMaker-Org/v0.8.0rc
V0.8.0rc
2026-01-05 16:28:25 -05:00
webdevcody
aca84fe16a chore: update Docker configuration and entrypoint script
- Enhanced .dockerignore to exclude additional build outputs and dependencies.
- Modified dev.mjs and start.mjs to change Docker container startup behavior, removing the --build flag to preserve volumes.
- Updated docker-compose.yml to add a new volume for persisting Claude CLI OAuth session keys.
- Introduced docker-entrypoint.sh to fix permissions on the Claude CLI config directory.
- Adjusted Dockerfile to include the entrypoint script and ensure proper user permissions.

These changes improve the Docker setup and streamline the development workflow.
2026-01-05 10:44:47 -05:00
Shirone
abab7be367 Merge pull request #363 from AutoMaker-Org/fix/app-spec-ui-bug
fix: prevent 'No App Specification Found' during spec generation
2026-01-05 16:04:26 +01:00
Kacper
73d0edb873 refactor: move setIsGenerationRunning(false) outside if block
Address PR review feedback - ensure isGenerationRunning is always
reset to false when generation is not running, even if
api.specRegeneration is not available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 16:01:41 +01:00
Kacper
84d93c2901 fix: prevent "No App Specification Found" during spec generation
Check generation status before trying to load the spec file.
This prevents 500 errors and confusing UI during spec generation.

Changes:
- useSpecLoading now checks specRegeneration.status() first
- If generation is running, skip the file read and set isGenerationRunning
- SpecView uses isGenerationRunning to show generating UI properly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 15:57:17 +01:00
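Condensed, the check looks roughly like this; the API shapes are placeholders for whatever `useSpecLoading` actually consumes:

```ts
interface SpecApi {
  specRegeneration?: { status(): Promise<{ isRunning: boolean }> };
  readSpecFile(): Promise<string | null>;
}

// Ask whether generation is in flight before touching the spec file, so the UI
// never shows "No App Specification Found" mid-generation.
export async function loadSpec(api: SpecApi): Promise<{
  spec: string | null;
  isGenerationRunning: boolean;
}> {
  const status = await api.specRegeneration?.status();
  if (status?.isRunning) {
    return { spec: null, isGenerationRunning: true };
  }
  return { spec: await api.readSpecFile(), isGenerationRunning: false };
}
```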
Shirone
d558050dfa Merge pull request #362 from AutoMaker-Org/refactor/adjust-buttons-in-board-header
style: update BoardHeader component for improved layout
2026-01-05 15:29:28 +01:00
Kacper
5991e99853 refactor: extract shared className and add data-testid
Address PR review feedback:
- Extract duplicated className to controlContainerClass constant
- Add data-testid="auto-mode-toggle-container" for testability

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 15:26:19 +01:00
Kacper
9661aa1dad style: update BoardHeader component for improved layout
- Adjusted the spacing and height of the concurrency slider and auto mode toggle containers for better visual consistency.
- Changed class names to enhance the overall design and maintainability of the UI.
2026-01-05 15:22:04 +01:00
Shirone
d4649ec456 Merge pull request #361 from AutoMaker-Org/refactor/improve-vitest-configuration
refactor: use Vitest projects config instead of deprecated workspace
2026-01-05 15:02:12 +01:00
Kacper
fde9eea2d6 style: use explicit config path for server project
Address PR review feedback for consistency - use full path
'apps/server/vitest.config.ts' instead of just 'apps/server'.

Note: libs/types has no tests (type definitions only), so it
doesn't need a vitest.config.ts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 14:59:46 +01:00
Kacper
d1e3251c29 refactor: use glob patterns for vitest projects configuration
Address PR review feedback:
- Use 'libs/*/vitest.config.ts' glob to auto-discover lib projects
- Simplify test:packages script to use --project='!server' exclusion
- New libs with vitest.config.ts will be automatically included

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 14:50:47 +01:00
Kacper
2f51991558 refactor: use Vitest projects config instead of deprecated workspace
- Add root vitest.config.ts with projects array (replaces deprecated workspace)
- Add name property to each project's vitest.config.ts for filtering
- Update package.json test scripts to use vitest projects
- Add vitest to root devDependencies

This addresses the Vitest warning about multiple configs impacting
performance by running all projects in a single Vitest process.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 14:45:33 +01:00
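The root config described in this commit likely has roughly the following shape (paths taken from the commit messages; the real file may differ):

```ts
// vitest.config.ts at the repository root, using the projects option that
// replaces the deprecated workspace file.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    projects: [
      'libs/*/vitest.config.ts',      // auto-discover every lib with its own config
      'apps/server/vitest.config.ts', // server listed with its explicit config path
    ],
  },
});
```

Per the follow-up commit above, the `test:packages` script can then exclude the server project with `--project='!server'`.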
Kacper
7963525246 refactor: replace loading messages with LoadingState component
- Updated RootLayoutContent to utilize the LoadingState component for displaying loading messages during various application states, enhancing consistency and maintainability of the UI.
2026-01-05 14:36:10 +01:00
Kacper
feae1d7686 refactor: enhance pre-commit hook for nvm compatibility
- Improved the pre-commit script to better handle loading Node.js versions from .nvmrc for both Unix and Windows environments.
- Added checks to ensure nvm is sourced correctly and to handle potential errors gracefully, enhancing the reliability of the development setup.
2026-01-05 14:36:01 +01:00
Web Dev Cody
bbdfaf6463 Merge pull request #358 from AutoMaker-Org/ideation-fix
refactor: update ideation dashboard and prompt list to use project-sp…
2026-01-04 18:07:53 -05:00
Shirone
1117afc37a refactor: update mass edit dialog and introduce new select components
- Removed advanced options toggle and related state from the mass edit dialog for a cleaner UI.
- Replaced ProfileQuickSelect with ProfileSelect for better profile management.
- Introduced new PlanningModeSelect and PrioritySelect components for streamlined selection of planning modes and priorities.
- Updated imports in shared index to include new select components.
- Enhanced the mass edit dialog to utilize the new components, improving user experience during bulk edits.
2026-01-04 23:24:24 +01:00
Kacper
06c02de1cb feat: add mass edit feature for backlog kanban cards
Add ability to select multiple backlog features and edit their configuration
in bulk. Selection is limited to backlog column features in the current
branch/worktree only.

Changes:
- Add selection mode toggle in board controls
- Add checkbox selection on kanban cards (backlog only)
- Disable drag and drop during selection mode
- Hide action buttons during selection mode
- Add floating selection action bar with Edit/Clear/Select All
- Add mass edit dialog with all configuration options in single scroll view
- Add server endpoint for bulk feature updates
2026-01-04 22:25:19 +01:00
webdevcody
e4d86aa654 refactor: optimize ideation components and store for project-specific job handling
- Updated IdeationDashboard and PromptList components to utilize memoization for improved performance when retrieving generation jobs specific to the current project.
- Removed the getJobsForProject function from the ideation store, streamlining job management by directly filtering jobs in the components.
- Enhanced the addGenerationJob function to ensure consistent job ID generation format.
- Implemented migration logic in the ideation store to clean up legacy jobs without project paths, improving data integrity.
2026-01-04 16:22:25 -05:00
webdevcody
4ac1edf314 refactor: update ideation dashboard and prompt list to use project-specific job retrieval
- Modified IdeationDashboard and PromptList components to fetch generation jobs specific to the current project.
- Updated addGenerationJob function to include projectPath as a parameter for better job management.
- Introduced getJobsForProject function in the ideation store to streamline job filtering by project.
2026-01-04 14:16:39 -05:00
Web Dev Cody
4f3ac27534 Merge pull request #288 from AutoMaker-Org/feat/cursor-cli
feat: Cursor CLI Integration
2026-01-04 13:31:29 -05:00
Kacper
4a41dbb665 style: fix formatting in auto-mode-service.ts 2026-01-04 13:28:37 +01:00
Kacper
f90cd61048 fix: remove MCP permission settings references removed in v0.8.0rc
v0.8.0rc removed getMCPPermissionSettings and related properties.
Removed all references from auto-mode-service.ts to fix build.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 13:18:35 +01:00
Kacper
078f107f66 Merge v0.8.0rc into feat/cursor-cli
Resolved conflicts:
- sdk-options.ts: kept HEAD (MCP & thinking level features)
- auto-mode-service.ts: kept HEAD (MCP features + fallback code)
- agent-output-modal.tsx: used v0.8.0rc (effectiveViewMode + pr-8 spacing)
- feature-suggestions-dialog.tsx: accepted deletion
- electron.ts: used v0.8.0rc (Ideation types)
- package-lock.json: regenerated

Fixed sdk-options.test.ts to expect 'default' permissionMode for read-only operations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 13:12:45 +01:00
Web Dev Cody
64642916ab Merge pull request #355 from AutoMaker-Org/summaries-and-todos
Summaries and todos
2026-01-04 01:59:49 -05:00
webdevcody
e2206d7a96 feat: add thorough verification process and enhance agent output modal
- Introduced a new markdown file outlining a mandatory 3-pass verification process for code completion, focusing on correctness, edge cases, and maintainability.
- Updated the AgentInfoPanel to display a todo list for non-backlog features, ensuring users can see the agent's current tasks.
- Enhanced the AgentOutputModal to support a summary view, extracting and displaying summary content from raw log output.
- Improved the log parser to extract summaries from various formats, enhancing the overall user experience and information accessibility.
2026-01-04 01:56:45 -05:00
Web Dev Cody
32f859b927 Merge pull request #354 from AutoMaker-Org/ideation
feat: implement ideation feature for brainstorming and idea management
2026-01-04 01:21:44 -05:00
webdevcody
ac92725a6c feat: enhance ideation routes with event handling and new suggestion feature
- Updated the ideation routes to include an EventEmitter for better event management.
- Added a new endpoint to handle adding suggestions to the board, ensuring consistent category mapping.
- Modified existing routes to emit events for idea creation, update, and deletion, improving frontend notifications.
- Refactored the convert and create idea handlers to utilize the new event system.
- Removed static guided prompts data in favor of dynamic fetching from the backend API.
2026-01-04 00:38:01 -05:00
webdevcody
5c95d6d58e fix: update category mapping and improve ID generation format in IdeationService
- Changed the category mapping for 'feature' from 'feature' to 'ui'.
- Updated ID generation format to use hyphens instead of underscores for better readability.
- Enhanced unit tests to reflect the updated category and ensure proper functionality.
2026-01-04 00:22:06 -05:00
claude[bot]
abddfad063 test: add comprehensive unit tests for IdeationService
- Add 28 unit tests covering all major IdeationService functionality
- Test session management (start, get, stop, running state)
- Test idea CRUD operations (create, read, update, delete, archive)
- Test idea to feature conversion with user stories and notes
- Test project analysis and caching
- Test prompt management and filtering
- Test AI-powered suggestion generation
- Mock all external dependencies (fs, platform, utils, providers)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-04 05:16:06 +00:00
webdevcody
3512749e3c feat: refactor development and production launch scripts
- Introduced `dev.mjs` for development mode with hot reloading using Vite.
- Added `start.mjs` for production mode, serving pre-built static files without hot reloading.
- Created a new utility module `launcher-utils.mjs` for shared functions across scripts.
- Updated package.json scripts to reflect new launch commands.
- Removed deprecated `init.mjs` and associated MCP permission settings from the codebase.
- Added `.dockerignore` and updated `.gitignore` for better environment management.
- Enhanced README with updated usage instructions for starting the application.
2026-01-04 00:06:25 -05:00
webdevcody
2c70835769 Merge branch 'main' into ideation 2026-01-03 23:58:56 -05:00
Web Dev Cody
b1f7139bb6 Merge pull request #353 from AutoMaker-Org/fix-memory-leak
Fix memory leak
2026-01-03 23:57:11 -05:00
webdevcody
22aa24ae04 feat: add Docker container launch option and update process handling
- Introduced a new option to launch the application in a Docker container (Isolated Mode) from the main menu.
- Added checks for the ANTHROPIC_API_KEY environment variable to ensure proper API functionality.
- Updated process management to include Docker, allowing for better cleanup and handling of spawned processes.
- Enhanced user prompts and logging for improved clarity during the launch process.
2026-01-03 23:53:44 -05:00
webdevcody
586aabe11f chore: update .gitignore and improve cleanup handling in scripts
- Added .claude/hans/ to .gitignore to prevent tracking of specific directory.
- Updated cleanup calls in dev.mjs and start.mjs to use await for proper asynchronous handling.
- Enhanced error handling during cleanup in case of failures.
- Improved server failure handling in startServerAndWait function to ensure proper termination of failed processes.
2026-01-03 23:36:22 -05:00
webdevcody
afb0937cb3 refactor: update permissionMode to bypassPermissions in SDK options and tests
- Changed permissionMode from 'default' to 'bypassPermissions' in sdk-options and claude-provider unit tests.
- Added allowDangerouslySkipPermissions flag in claude-provider test to enhance permission handling.
2026-01-03 23:26:26 -05:00
webdevcody
d677910f40 refactor: update permission handling and optimize performance measurement
- Changed permissionMode settings in enhance and generate title routes to improve edit acceptance and default behavior.
- Refactored performance measurement cleanup in the App component to only execute in development mode, preventing unnecessary operations in production.
- Simplified the startServerAndWait function signature for better readability.
2026-01-03 23:23:43 -05:00
webdevcody
6d41c7d0bc docs: update README for authentication setup and production launch
- Revised instructions for starting Automaker, changing from `npm run dev` to `npm run start` for production mode.
- Added a setup wizard for authentication on first run, with options for using Claude Code CLI or entering an API key.
- Clarified development mode instructions, emphasizing the use of `npm run dev` for live reload and hot module replacement.
2026-01-03 23:13:53 -05:00
webdevcody
9552670d3d feat: introduce development mode launch script
- Added a new script (dev.mjs) to start the application in development mode with hot reloading using Vite.
- The script includes functionality for installing Playwright browsers, resolving port configurations, and launching either a web or desktop application.
- Removed the old init.mjs script, which was previously responsible for launching the application.
- Updated package.json to reference the new dev.mjs script for the development command.
- Introduced a shared utilities module (launcher-utils.mjs) for common functionalities used in both development and production scripts.
2026-01-03 23:11:18 -05:00
webdevcody
e32a82cca5 refactor: remove MCP permission settings and streamline SDK options for autonomous mode
- Removed MCP permission settings from the application, including related functions and UI components.
- Updated SDK options to always bypass permissions and allow unrestricted tool access in autonomous mode.
- Adjusted related components and services to reflect the removal of MCP permission configurations, ensuring a cleaner and more efficient codebase.
2026-01-03 23:00:20 -05:00
webdevcody
019d6dd7bd fix memory leak 2026-01-03 22:50:42 -05:00
Shirone
c6d94d4bf4 fix: improve abort handling in spawnJSONLProcess
- Added immediate invocation of abort handler if the abort signal is already triggered, ensuring proper cleanup.
- Updated test to use setImmediate for aborting, allowing the generator to start processing before the abort is called, enhancing reliability.
2026-01-04 03:50:17 +01:00
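The already-aborted case is easy to miss; a sketch of the pattern, not the project's actual `spawnJSONLProcess`:

```ts
import { spawn } from 'node:child_process';

// If the AbortSignal fired before the process was wired up, run the abort
// handler immediately instead of waiting for an 'abort' event that never comes.
export function spawnWithAbort(command: string, args: string[], signal: AbortSignal) {
  const child = spawn(command, args);

  const onAbort = () => {
    if (!child.killed) child.kill('SIGTERM');
  };

  if (signal.aborted) {
    onAbort(); // already aborted: clean up right away
  } else {
    signal.addEventListener('abort', onAbort, { once: true });
    child.once('exit', () => signal.removeEventListener('abort', onAbort));
  }

  return child;
}
```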
Shirone
ef06c13c1a feat: implement timeout for plan approval and enhance error handling
- Added a 30-minute timeout for user plan approval to prevent indefinite waiting and memory leaks.
- Wrapped resolve/reject functions in the waitForPlanApproval method to ensure timeout is cleared upon resolution.
- Enhanced error handling in the stream processing loop to ensure proper cleanup and logging of errors.
- Improved the handling of task execution and phase completion events for better tracking and user feedback.
2026-01-04 03:45:21 +01:00
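A hedged sketch of the timeout wrapper; the `register` callback stands in for however the real `waitForPlanApproval` stores its pending resolvers:

```ts
const PLAN_APPROVAL_TIMEOUT_MS = 30 * 60 * 1000; // 30 minutes

export function waitForPlanApproval(
  register: (resolve: (approved: boolean) => void, reject: (err: Error) => void) => void
): Promise<boolean> {
  return new Promise<boolean>((resolve, reject) => {
    const timeout = setTimeout(() => {
      reject(new Error('Plan approval timed out after 30 minutes'));
    }, PLAN_APPROVAL_TIMEOUT_MS);

    // Wrap resolve/reject so the timer is always cleared, whichever path wins.
    const wrappedResolve = (approved: boolean) => {
      clearTimeout(timeout);
      resolve(approved);
    };
    const wrappedReject = (err: Error) => {
      clearTimeout(timeout);
      reject(err);
    };

    register(wrappedResolve, wrappedReject);
  });
}
```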
Kacper
3ed3a90bf6 refactor: rename phase models to model defaults and reorganize components
- Updated imports and references from 'phase-models' to 'model-defaults' across various components.
- Removed obsolete phase models index file to streamline the codebase.
2026-01-03 23:05:34 +01:00
webdevcody
ff281e23d0 feat: implement ideation feature for brainstorming and idea management
- Introduced a new IdeationService to manage brainstorming sessions, including idea creation, analysis, and conversion to features.
- Added RESTful API routes for ideation, including session management, idea CRUD operations, and suggestion generation.
- Created UI components for the ideation dashboard, prompt selection, and category grid to enhance user experience.
- Integrated keyboard shortcuts and navigation for the ideation feature, improving accessibility and workflow.
- Updated state management with Zustand to handle ideation-specific data and actions.
- Added necessary types and paths for ideation functionality, ensuring type safety and clarity in the codebase.
2026-01-03 02:58:43 -05:00
Web Dev Cody
f34fd955ac Merge pull request #342 from yumesha/main
fixed background image not showing at desktop application (electron)
2026-01-03 02:05:24 -05:00
antdev
46cb6fa425 fixed 'Buffer' is not defined. 2026-01-03 13:52:57 +08:00
antdev
818d8af998 E2E Test Fix - Ready for Manual Application 2026-01-03 13:47:23 +08:00
Shirone
88aba360e3 fix: improve Cursor CLI implementation with type safety and security fixes
- Add getCliPath() public method to CursorProvider to avoid private field access
- Add path validation to cursor-config routes to prevent traversal attacks
- Add supportsVision field to CursorModelConfig (all false - CLI limitation)
- Consolidate duplicate types in providers/types.ts (re-export from @automaker/types)
- Add MCP servers warning log instead of error (not yet supported by Cursor CLI)
- Fix debug log type safety (replace 'as any' with proper type narrowing)
- Update docs to remove non-existent tier field, add supportsVision field
- Remove outdated TODO comment in sdk-options.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 03:35:33 +01:00
Shirone
ec6d36bda5 chore: format 2026-01-03 03:08:46 +01:00
Shirone
816bf8f6f6 Merge branch 'main' into feat/cursor-cli 2026-01-03 03:05:19 +01:00
Shirone
a6d665c4fa fix: update logger tests to use console.log instead of console.warn
- Modified logger test cases to reflect that the warn method uses console.log in Node.js implementation.
- Updated expectations in feature-loader tests to align with the new logging behavior.
- Ensured consistent logging behavior across tests for improved clarity and accuracy.
2026-01-03 03:03:50 +01:00
Shirone
d13a16111c feat: enhance suggestion generation with model and thinking level overrides
- Updated the generateSuggestions function to accept model and thinking level overrides, allowing for more flexible suggestion generation.
- Modified the API client and UI components to support passing these new parameters, improving user control over the suggestion process.
- Introduced a new phase model for AI Suggestions in settings, enhancing the overall functionality and user experience.
2026-01-03 02:56:08 +01:00
antdev
8d5e7b068c fixed failing format check 2026-01-03 09:55:54 +08:00
Shirone
6d4f28575f fix: scrolling issues in phase model selector
- scrolling was broken when the component was used inside a modal / dialog
2026-01-03 02:55:34 +01:00
Shirone
7596ff9ec3 fix: enable logger colors by default in Node.js subprocess environments
Previously, colors were only enabled when stdout was a TTY, which caused
colored output to be stripped when the server ran as a subprocess. Now
colors are enabled by default in Node.js and can be disabled with
LOG_COLORS=false if needed.

Also removed the unused isTTY() function.
2026-01-03 02:54:14 +01:00
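The new default amounts to a small predicate; a sketch, with the ANSI codes as illustration:

```ts
// Colors are on by default in Node.js and can be turned off with LOG_COLORS=false;
// the previous behavior keyed off stdout being a TTY.
function colorsEnabled(): boolean {
  if (typeof process === 'undefined') return false; // browser build: no ANSI codes
  return process.env.LOG_COLORS !== 'false';
}

const GREEN = '\x1b[32m';
const RESET = '\x1b[0m';

function info(message: string): void {
  const prefix = colorsEnabled() ? `${GREEN}INFO${RESET}` : 'INFO';
  console.log(`${prefix} ${message}`);
}
```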
Shirone
35441c1a9d feat: add AI Suggestions phase model to settings view
- Introduced a new phase model for AI Suggestions, enhancing the functionality of the settings view.
- Updated the phase model handling to utilize DEFAULT_PHASE_MODELS as a fallback, ensuring robust behavior when specific models are not set.
- This addition improves the user experience by providing more options for project analysis and suggestions.
2026-01-03 02:41:39 +01:00
Shirone
abed3b3d75 feat: enhance use-model-override to support default phase models
- Updated the useModelOverride hook to include a fallback to DEFAULT_PHASE_MODELS when phase models are not available, ensuring smoother handling of model overrides.
- This change addresses scenarios where settings may not have been migrated to include new phase models, improving overall robustness and user experience.
2026-01-03 02:40:17 +01:00
Kacper
9071f89ec8 chore: remove obsolete Cursor CLI integration documentation
- Deleted the Cursor CLI integration analysis document, phase prompt, README, and related phase files as they are no longer relevant to the current project structure.
- This cleanup helps streamline the project and remove outdated references, ensuring a more maintainable codebase.
2026-01-02 21:02:40 +01:00
Kacper
3c8ee5b714 chore: update .prettierignore and format vite.config.mts
- Added routeTree.gen.ts to .prettierignore to prevent formatting of generated files.
- Reformatted the external dependencies list in vite.config.mts for improved readability.
2026-01-02 20:56:40 +01:00
Kacper
8e1a9addc1 fix: update logger test expectations to include log level prefixes
- Modified test cases in logger.test.ts to assert that log messages include appropriate log level prefixes (INFO, ERROR, WARN, DEBUG).
- Ensured consistency in log output formatting across various logging methods, enhancing clarity in test results.
2026-01-02 20:54:39 +01:00
Kacper
e72f7d1e1a fix: libs test 2026-01-02 20:50:57 +01:00
Kacper
4a28b70b72 feat: enhance query options with maxThinkingTokens support
- Introduced maxThinkingTokens to the query options for Claude models, allowing for more precise control over the SDK's reasoning capabilities.
- Refactored the enhance handler to utilize the new getThinkingTokenBudget function, improving the integration of thinking levels into the query process.
- Updated the query options structure for clarity and maintainability, ensuring consistent handling of model parameters.

This update enhances the application's ability to adapt reasoning capabilities based on user-defined thinking levels, improving overall performance.
2026-01-02 20:50:40 +01:00
Kacper
3e95a11189 refactor: replace logger with console statements in subprocess management
- Removed the centralized logging system in favor of direct console.log and console.error statements for subprocess management.
- Updated logging messages to include context for subprocess actions, such as spawning commands, handling errors, and process completion.
- This change simplifies the logging approach in subprocess handling, making it easier to track subprocess activities during development.
2026-01-02 20:46:39 +01:00
Shirone
2b942a6cb1 feat: integrate thinking level support across various components
- Enhanced multiple server and UI components to include an optional thinking level parameter, improving the configurability of model interactions.
- Updated request handlers and services to manage and pass the thinking level, ensuring consistent data handling across the application.
- Refactored UI components to display and manage the selected model along with its thinking level, enhancing user experience and clarity.
- Adjusted the Electron API and HTTP client to support the new thinking level parameter in requests, ensuring seamless integration.

This update significantly improves the application's ability to adapt reasoning capabilities based on user-defined thinking levels, enhancing overall performance and user satisfaction.
2026-01-02 17:52:12 +01:00
Shirone
69f3ba9724 feat: standardize logging across UI components
- Replaced console.log and console.error statements with logger methods from @automaker/utils in various UI components, ensuring consistent log formatting and improved readability.
- Enhanced error handling by utilizing logger methods to provide clearer context for issues encountered during operations.
- Updated multiple views and hooks to integrate the new logging system, improving maintainability and debugging capabilities.

This update significantly enhances the observability of UI components, facilitating easier troubleshooting and monitoring.
2026-01-02 17:33:15 +01:00
Shirone
96a999817f feat: implement structured logging across server components
- Integrated a centralized logging system using createLogger from @automaker/utils, replacing console.log and console.error statements with logger methods for consistent log formatting and improved readability.
- Updated various modules, including auth, events, and services, to utilize the new logging system, enhancing error tracking and operational visibility.
- Refactored logging messages to provide clearer context and information, ensuring better maintainability and debugging capabilities.

This update significantly enhances the observability of the server components, facilitating easier troubleshooting and monitoring.
2026-01-02 15:40:15 +01:00
Shirone
8c04e0028f feat: integrate thinking level support across agent and UI components
- Enhanced the agent service and request handling to include an optional thinking level parameter, improving the configurability of model interactions.
- Updated the UI components to manage and display the selected model along with its thinking level, ensuring a cohesive user experience.
- Refactored the model selector and input controls to accommodate the new model selection structure, enhancing usability and clarity.
- Adjusted the Electron API and HTTP client to support the new thinking level parameter in requests, ensuring consistent data handling across the application.

This update significantly improves the agent's ability to adapt its reasoning capabilities based on user-defined thinking levels, enhancing overall performance and user satisfaction.
2026-01-02 15:22:06 +01:00
Stephan Rieche
0fa5fdd478 docs: add comprehensive JSDoc docstrings for pipeline resume methods
Add detailed JSDoc documentation to meet 80% docstring coverage requirement:

- PipelineStatusInfo interface: Document all properties with types and descriptions
- resumePipelineFeature(): Document edge case handling and parameters
- resumeFromPipelineStep(): Document complete pipeline resume workflow
- detectPipelineStatus(): Document pipeline status detection scenarios

Each docstring includes:
- Clear method purpose and behavior
- All parameters with types and descriptions
- Return value documentation
- Error conditions and exceptions
- @private tags for internal methods

This improves code maintainability and helps developers understand the
complex pipeline resume logic.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 15:00:25 +01:00
Shirone
81d300391d feat: enhance SDK options with thinking level support
- Introduced a new function, buildThinkingOptions, to handle the conversion of ThinkingLevel to maxThinkingTokens for the Claude SDK.
- Updated existing SDK option creation functions to incorporate thinking options, ensuring that maxThinkingTokens are included based on the specified thinking level.
- Enhanced the settings service to support migration of phase models to include thinking levels, improving compatibility with new configurations.
- Added comprehensive tests for thinking level integration and migration logic, ensuring robust functionality across the application.

This update significantly improves the SDK's configurability and performance by allowing for more nuanced control over reasoning capabilities.
2026-01-02 14:55:52 +01:00
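The level-to-token mapping might look like the sketch below; the budget numbers are invented for illustration, only the `maxThinkingTokens` option name appears in the commit:

```ts
export type ThinkingLevel = 'low' | 'medium' | 'high';

// Illustrative budgets only; the real mapping is not stated in the commit.
const THINKING_TOKEN_BUDGETS: Record<ThinkingLevel, number> = {
  low: 4_000,
  medium: 16_000,
  high: 32_000,
};

export function buildThinkingOptions(level?: ThinkingLevel): { maxThinkingTokens?: number } {
  if (!level) return {};
  return { maxThinkingTokens: THINKING_TOKEN_BUDGETS[level] };
}

// Spread into the SDK options so maxThinkingTokens is only set when a level is chosen:
// const options = { model, ...buildThinkingOptions(thinkingLevel) };
```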
Stephan Rieche
472342c246 chore: run prettier to fix formatting
Auto-format all files to fix format-check CI failure.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:56:39 +01:00
Stephan Rieche
71e03c2a13 fix: restore Verify button fallback with honest labeling
Re-add the onVerify fallback for in_progress/pipeline features without context,
but fix the misleading UX issue where the button said 'Resume' but executed
verification (tests/build).

Changes:
- Restore onVerify fallback as 3rd option after skipTests Verify and Resume
- Change button label from 'Resume' to 'Verify' (honest!)
- Change icon from PlayCircle to CheckCircle2 (matches action)
- Keep same green styling for consistency

This makes sense because if a feature is in_progress but has no context,
it likely completed execution but the context was deleted. User should be
able to verify it (run tests/build) rather than having no action available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:55:05 +01:00
Stephan Rieche
c3403c033c Merge upstream/main into fix/pipeline-resume-edge-cases
Resolved conflict in card-actions.tsx by:
- Keeping pipeline_status check from our branch (supports pipeline_step_* statuses)
- Adopting simplified Resume button logic from main (removed hasContext check and onVerify fallback)

The Resume button now appears for features with:
- status === 'in_progress'
- status.startsWith('pipeline_')

This combines our pipeline support fix with main's simplified button rendering logic.
2026-01-02 13:45:12 +01:00
Stephan Rieche
2a87d55519 fix: show Resume button for features stuck in pipeline steps
Enable the Resume button to appear for features with pipeline status
(e.g., 'pipeline_step_xyz') in addition to 'in_progress' status.

Previously, features that crashed during pipeline execution would show
a 'testing' status badge but no Resume button, making it impossible to
resume them from the UI.

Changes:
- Update card-actions.tsx condition to include pipeline_ status check
- Resume button now shows for both in_progress and pipeline_step_* statuses
- Maintains all existing behavior for other feature states

This fixes the UX issue where users could see a feature was stuck in a
pipeline step but had no way to resume it.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:22:11 +01:00
Stephan Rieche
2d309f6833 perf: eliminate redundant file read in detectPipelineStatus
Remove unnecessary call to pipelineService.getStep() which was causing
a redundant file read of pipeline.json. The config is already loaded at
line 2807, so we can find the step directly from the in-memory config.

Changes:
- Sort config.steps first
- Find stepIndex using findIndex()
- Get step directly from sortedSteps[stepIndex] instead of calling getStep()
- Simplify null check to only check !step instead of stepIndex === -1 || !step

This optimization reduces file I/O operations and improves performance when
resuming pipeline features.

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:04:01 +01:00
Stephan Rieche
7a2a3ef500 refactor: create PipelineStatusInfo interface to eliminate type duplication
Define a dedicated PipelineStatusInfo interface and use it consistently in both
resumePipelineFeature() parameter and detectPipelineStatus() return type.

This eliminates duplicate inline type definitions and improves maintainability
by ensuring both locations always stay in sync. Any future changes to the
pipeline status structure only need to be made in one place.

Changes:
- Add PipelineStatusInfo interface definition
- Replace inline type in resumePipelineFeature() with PipelineStatusInfo
- Replace inline return type in detectPipelineStatus() with PipelineStatusInfo

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 13:02:43 +01:00
Stephan Rieche
3ff9658723 refactor: remove unnecessary runningFeatures.delete() calls
Remove confusing and unnecessary delete calls from resumeFeature() and
resumePipelineFeature() methods. These were leftovers from a previous
implementation where temporary entries were added to runningFeatures.

The resumeFeature() method already ensures the feature is not running
at the start (via has() check that throws if already running), so these
delete calls serve no purpose and only add confusion about state management.

Removed delete calls from:
- resumeFeature() non-pipeline flow (line 748)
- resumePipelineFeature() no-context case (line 798)
- resumePipelineFeature() step-not-found case (line 822)

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 12:44:19 +01:00
Stephan Rieche
c587947de6 refactor: optimize feature loading in pipeline resume
Reduce redundant file reads by loading the feature object once and passing
it down the call chain instead of reloading it multiple times.

Changes:
- Pass feature object to resumePipelineFeature() instead of featureId
- Pass feature object to resumeFromPipelineStep() instead of featureId
- Remove redundant loadFeature() calls from these methods
- Add FeatureStatusWithPipeline import for type safety

This improves performance by eliminating unnecessary file I/O operations
and makes the data flow clearer.

Co-authored-by: gemini-code-assist bot

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 12:42:04 +01:00
antdev
d417666fe1 fix background image not showing 2026-01-02 15:33:00 +08:00
webdevcody
2bbc8113c0 chore: update lockfile linting process
- Replaced the inline linting command for package-lock.json with a dedicated script (lint-lockfile.mjs) to check for git+ssh:// URLs, ensuring compatibility with CI/CD environments.
- The new script provides clear error messages and instructions if such URLs are found, enhancing the development workflow.
2026-01-02 00:29:04 -05:00
webdevcody
7e03af2dc6 chore: release v0.7.3
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 00:00:41 -05:00
Shirone
914734cff6 feat(phase-model-selector): implement grouped model selection and enhanced UI
- Added support for grouped models in the PhaseModelSelector, allowing users to select from multiple variants within a single group.
- Introduced a new popover UI for displaying grouped model variants, improving user interaction and selection clarity.
- Implemented logic to filter and display enabled cursor models, including standalone and grouped options.
- Enhanced state management for expanded groups and variant selection, ensuring a smoother user experience.

This update significantly improves the model selection process, making it more intuitive and organized.
2026-01-02 02:37:20 +01:00
Shirone
e1bdb4c7df Merge remote-tracking branch 'origin/main' into feat/cursor-cli 2026-01-02 01:50:16 +01:00
Stephan Rieche
a9403651d4 fix: handle pipeline resume edge cases and improve robustness
This commit fixes several edge cases when resuming features stuck in pipeline steps:

- Detect if feature is stuck in a pipeline step during resume
- Handle case where context file is missing (restart from beginning)
- Handle case where pipeline step no longer exists in config
- Add dedicated resumePipelineFeature() method for pipeline-specific resume logic
- Add detectPipelineStatus() to extract and validate pipeline step information
- Add resumeFromPipelineStep() to resume from a specific pipeline step index
- Update board view to check context availability for features with pipeline status

Edge cases handled:
1. No context file → restart entire pipeline from beginning
2. Step no longer exists in config → complete feature without pipeline
3. Valid step exists → resume from the crashed step
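
Roughly, the branching could look like the sketch below; detectPipelineStatus and resumeFromPipelineStep come from this commit, while the helpers for the first two branches and all signatures are assumptions.

```typescript
interface Feature { id: string; status: string }
interface PipelineStatusInfo { stepIndex: number; step?: { id: string }; hasContext: boolean }

declare function detectPipelineStatus(feature: Feature): PipelineStatusInfo;
declare function restartPipelineFromStart(feature: Feature): Promise<void>; // assumed helper
declare function completeWithoutPipeline(feature: Feature): Promise<void>;  // assumed helper
declare function resumeFromPipelineStep(feature: Feature, stepIndex: number): Promise<void>;

async function resumePipelineFeature(feature: Feature): Promise<void> {
  const info = detectPipelineStatus(feature);
  if (!info.hasContext) return restartPipelineFromStart(feature); // 1. no context file
  if (!info.step) return completeWithoutPipeline(feature);        // 2. step gone from config
  return resumeFromPipelineStep(feature, info.stepIndex);         // 3. resume the crashed step
}
```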

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-02 00:58:32 +01:00
Kacper
ad947691df feat: enhance TaskProgressPanel and AgentOutputModal components
- Added defaultExpanded prop to TaskProgressPanel for customizable initial state.
- Updated styling in TaskProgressPanel for improved layout and consistency.
- Modified AgentOutputModal to utilize the new defaultExpanded prop and adjusted class names for better responsiveness and appearance.
- Enhanced overflow handling in AgentOutputModal for improved user experience.
2026-01-02 00:23:25 +01:00
Web Dev Cody
ab9ef0d560 Merge pull request #340 from AutoMaker-Org/fix-web-mode-auth
feat: implement authentication state management and routing logic
2026-01-01 17:13:20 -05:00
webdevcody
844be657c8 feat: add skipSandboxWarning to settings and sync function
- Introduced skipSandboxWarning property in GlobalSettings interface to manage user preference for sandbox risk warnings.
- Updated syncSettingsToServer function to include skipSandboxWarning in the settings synchronization process.
- Set default value for skipSandboxWarning to false in DEFAULT_GLOBAL_SETTINGS.
2026-01-01 17:08:15 -05:00
webdevcody
90c89ef338 Merge branch 'fix-web-mode-auth' of github.com:AutoMaker-Org/automaker into fix-web-mode-auth 2026-01-01 16:49:41 -05:00
webdevcody
fb46c0c9ea feat: enhance sandbox risk dialog and settings management
- Updated the SandboxRiskDialog to include a checkbox for users to opt-out of future warnings, passing the state to the onConfirm callback.
- Modified SettingsView to manage the skipSandboxWarning state, allowing users to reset the warning preference.
- Enhanced DangerZoneSection to display a message when the sandbox warning is disabled and provide an option to reset this setting.
- Updated RootLayoutContent to respect the user's choice regarding the sandbox warning, auto-confirming if the user opts to skip it.
- Added skipSandboxWarning state management to the app store for persistent user preferences.
2026-01-01 16:49:35 -05:00
Kacper
81bd57cf6a feat: add runNpmAndWait function for improved npm command handling
- Introduced a new function, runNpmAndWait, to execute npm commands and wait for their completion, enhancing error handling.
- Updated the main function to build shared packages before starting the backend server, ensuring necessary dependencies are ready.
- Adjusted server and web process commands to use a consistent naming convention.
2026-01-01 22:39:12 +01:00
Kacper
83e59d6a4d feat(phase-model-selector): enhance model selection with favorites and popover UI
- Introduced a popover for model selection, allowing users to choose from Claude and Cursor models.
- Added functionality to toggle favorite models, enhancing user experience by allowing quick access to preferred options.
- Updated the UI to display selected model details and improved layout for better usability.
- Refactored model grouping and rendering logic for clarity and maintainability.

This update improves the overall interaction with the phase model selector, making it more intuitive and user-friendly.
2026-01-01 22:31:07 +01:00
webdevcody
59d47928a7 feat: implement authentication state management and routing logic
- Added a new auth store using Zustand to manage authentication state, including `authChecked` and `isAuthenticated`.
- Updated `LoginView` to set authentication state upon successful login and navigate based on setup completion.
- Enhanced `RootLayoutContent` to enforce routing rules based on authentication status, redirecting users to login or setup as necessary.
- Improved error handling and loading states during authentication checks.
2026-01-01 16:25:31 -05:00
Kacper
cbe951dd8f fix(suggestions): extract result text from Cursor provider
- Add handler for type=result messages in Cursor stream processing
- Cursor provider sends final accumulated text in msg.result
- Suggestions was only handling assistant messages for Cursor
- Add detailed logging for result extraction (like backlog plan)
- Matches pattern used by github validation and backlog plan

This fixes potential parsing issues when using Cursor models
for feature suggestions, ensuring the complete response text
is captured before JSON parsing.
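
A condensed sketch of the stream handling, with the message shape simplified to the two cases the commit mentions (the real event types are richer):

```typescript
type StreamMsg =
  | { type: 'assistant'; text?: string }
  | { type: 'result'; result?: string };

async function collectResponse(stream: AsyncIterable<StreamMsg>): Promise<string> {
  let responseText = '';
  for await (const msg of stream) {
    if (msg.type === 'assistant' && msg.text) {
      responseText += msg.text;      // incremental assistant chunks
    } else if (msg.type === 'result' && msg.result) {
      responseText = msg.result;     // Cursor's final accumulated text wins
    }
  }
  return responseText;               // parsed as JSON afterwards
}
```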

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 20:03:15 +01:00
Kacper
63b9f52d6b refactor(cursor-models): remove tier property from Cursor model configurations
- Removed the 'tier' property from Cursor model configurations and related UI components.
- Updated relevant files to reflect the removal of tier-related logic and display elements.
- This change simplifies the model structure and UI, focusing on essential attributes.
2026-01-01 20:00:40 +01:00
Kacper
3b3e61da8d fix(backlog-plan): add Cursor-specific prompt with no-file-write instructions
- Import isCursorModel to detect Cursor models
- For Cursor: embed systemPrompt in userPrompt with explicit instructions
- Add "DO NOT write any files" directive for Cursor models
- Prevents Cursor from writing plan to files instead of returning JSON
- Matches pattern used by github validation (validate-issue.ts)

Cursor doesn't support systemPrompt separation like Claude SDK,
so we need to combine prompts and add explicit instructions to
prevent it from using Write/Edit tools and creating files.
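
A sketch of the prompt combination for Cursor; isCursorModel comes from the commit, while the exact directive wording here is an assumption:

```typescript
declare function isCursorModel(model: string): boolean;

function buildPrompt(systemPrompt: string, userPrompt: string, model: string): string {
  if (!isCursorModel(model)) return userPrompt; // Claude SDK receives systemPrompt separately
  return [
    systemPrompt,
    'IMPORTANT: DO NOT write any files.',
    'Return ONLY the raw JSON in your response - no markdown, no explanations.',
    userPrompt,
  ].join('\n\n');
}
```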

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 19:15:37 +01:00
Kacper
0e22098652 feat(backlog-plan): add detailed logging for Cursor result extraction
- Change debug logs to info/warn so they're always visible
- Log when result message is received from Cursor
- Log lengths of both msg.result and accumulated responseText
- Log which source is being used (result vs accumulated)
- Log empty response error for better diagnostics
- Add response preview logging on parse failure

This will help diagnose why Cursor parsing is failing by showing:
1. Whether result messages are being received
2. What content lengths we're working with
3. Whether response text is empty or has content
4. What the actual response looks like when parsing fails

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 19:07:13 +01:00
Kacper
cf9a1f9077 fix(backlog-plan): use extractJsonWithArray and improve logging
- Switch from extractJson to extractJsonWithArray for 'changes' field
- Validates that 'changes' is an array, not just that it exists
- Add debug logging for response length and preview on parse failure
- Add debug logging when receiving result from Cursor provider
- Matches pattern used by suggestions feature
- Helps diagnose parsing issues with better error context

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:41:25 +01:00
Kacper
9b1174408b fix(backlog-plan): extract result text from Cursor provider
- Add handler for type=result messages in stream processing
- Cursor provider sends final accumulated text in msg.result
- Backlog plan was only handling assistant messages
- Now matches pattern used by github validation and suggestions
- Fixes "cursor cli parsing failed" error

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:36:43 +01:00
Kacper
207fd26681 feat(backlog-plan): add model override trigger to footer
- Add ModelOverrideTrigger to backlog plan dialog
- Position trigger in DialogFooter on left side (mr-auto)
- Display before Cancel button for better UX
- Use variant="button" to show model name
- Connect to phaseModels.backlogPlanningModel default
- Pass model override to server generate endpoint

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:32:40 +01:00
Kacper
aa318099dc feat(ui): add model override trigger to backlog plan dialog
Added ModelOverrideTrigger component to the "Plan Backlog Changes" dialog,
allowing users to override the global backlog planning model on a per-request
basis, consistent with other dialogs in the application.

Changes:
- Added model override state management to backlog-plan-dialog
- Integrated ModelOverrideTrigger component in dialog header (input mode only)
- Pass model override (or global default) to backlogPlan.generate API call
- UI shows override indicator when model is overridden from global default

The feature uses the existing backlogPlanningModel phase setting as the default
and allows temporary overrides without changing global settings.

Server already supports optional model parameter in the generate endpoint,
so no backend changes were required.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:20:45 +01:00
Kacper
7dec5d9d74 fix(sdk-options): normalize paths for cross-platform cloud storage detection
Fixed cloud storage path detection to work correctly on Windows by normalizing
path separators to forward slashes and removing Windows drive letters before
pattern matching.

Issue:
The isCloudStoragePath() function was failing on Windows because:
1. path.resolve() converts Unix paths to Windows paths with backslashes
2. Windows adds drive letters (e.g., "P:\Users\test" instead of "/Users/test")
3. Pattern checks for "/Library/CloudStorage/" didn't match "\Library\CloudStorage\"
4. Home-anchored path comparisons failed due to drive letter mismatches

Solution:
- Normalize all path separators to forward slashes for consistent pattern matching
- Remove Windows drive letters (e.g., "C:" or "P:") from normalized paths
- This ensures Unix-style test paths work the same on all platforms
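
A sketch of the normalization step; isCloudStoragePath is the function named above, while the helper name and the abbreviated pattern list are assumptions:

```typescript
import path from 'node:path';

function normalizeForMatching(p: string): string {
  return path
    .resolve(p)
    .replace(/\\/g, '/')        // backslashes -> forward slashes
    .replace(/^[A-Za-z]:/, ''); // drop the Windows drive letter
}

function isCloudStoragePath(p: string): boolean {
  const normalized = normalizeForMatching(p);
  return ['/Library/CloudStorage/', '/Dropbox/', '/Google Drive/', '/OneDrive/']
    .some((pattern) => normalized.includes(pattern));
}
```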

All tests now pass (967 passed, 27 skipped):
- Cloud storage path detection tests (macOS patterns)
- Home-anchored cloud folder tests (Dropbox, Google Drive, OneDrive)
- Sandbox compatibility tests
- Cross-platform path handling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:11:40 +01:00
Kacper
17dae1571b fix(tests): update terminal-service tests to work cross-platform
Updated terminal-service.test.ts to use path.resolve() in test expectations
so they work correctly on both Unix and Windows platforms.

The merge from main removed the skipIf conditions for Windows, expecting these
tests to work cross-platform. On Windows, path.resolve('/test/dir') converts
Unix-style paths to Windows paths (e.g., 'P:\test\dir'), so test expectations
needed to use path.resolve() as well to match the actual behavior.

Fixed tests:
- should create a new terminal session
- should fix double slashes in path
- should return all active sessions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:07:00 +01:00
Kacper
f56b873571 Merge main into feat/cursor-cli-integration
Carefully merged latest changes from main branch into the Cursor CLI integration
branch. This merge brings in important improvements and fixes while preserving
all Cursor-related functionality.

Key changes from main:
- Sandbox mode security improvements and cloud storage compatibility
- Version-based settings migrations (v2 schema)
- Port configuration centralization
- System paths utilities for CLI detection
- Enhanced error handling in HttpApiClient
- Windows MCP process cleanup fixes
- New validation and build commands
- GitHub issue templates and release process improvements

Resolved conflicts in:
- apps/server/src/routes/context/routes/describe-image.ts
  (Combined Cursor provider routing with secure-fs imports)
- apps/server/src/services/auto-mode-service.ts
  (Merged failure tracking with raw output logging)
- apps/server/tests/unit/services/terminal-service.test.ts
  (Updated to async tests with systemPathExists mocking)
- libs/platform/src/index.ts
  (Combined WSL utilities with system-paths exports)
- libs/types/src/settings.ts
  (Merged DEFAULT_PHASE_MODELS with SETTINGS_VERSION constants)

All Cursor CLI integration features remain intact including:
- CursorProvider and CliProvider base class
- Phase-based model configuration
- Provider registry and factory patterns
- WSL support for Windows
- Model override UI components
- Cursor-specific settings and configurations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:03:48 +01:00
Ramiro Rivera
d2f64f10ff test: add tests for ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN passthrough
- apps/server/tests/unit/providers/claude-provider.test.ts

Verify custom endpoint environment variables are passed to the SDK.
2026-01-01 13:43:12 +01:00
Ramiro Rivera
9fe5b485f8 feat: support ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN for custom endpoints
- apps/server/src/providers/claude-provider.ts

Add ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN to the environment variable
allowlist, enabling use of LLM gateways (LiteLLM, Helicone) and Anthropic-
compatible providers (GLM 4.7, Minimax M2.1, etc.).
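
A sketch of the passthrough; only the two new variable names come from the commit, and the allowlist shape and its other entries are assumptions:

```typescript
const ENV_ALLOWLIST = [
  'ANTHROPIC_API_KEY',     // assumed existing entry
  'ANTHROPIC_BASE_URL',    // custom endpoint, e.g. a LiteLLM or Helicone gateway
  'ANTHROPIC_AUTH_TOKEN',  // token for that endpoint
] as const;

function buildSdkEnv(): Record<string, string> {
  const env: Record<string, string> = {};
  for (const key of ENV_ALLOWLIST) {
    const value = process.env[key];
    if (value) env[key] = value; // only forward variables that are actually set
  }
  return env;
}
```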

Closes #338
2026-01-01 13:35:10 +01:00
Web Dev Cody
bd432b1da3 Merge pull request #304 from firstfloris/fix/sandbox-cloud-storage-compatibility
fix: auto-disable sandbox mode for cloud storage paths
2026-01-01 02:48:38 -05:00
webdevcody
b51aed849c fix: clarify sandbox mode behavior in sdk-options
- Updated the checkSandboxCompatibility function to explicitly handle the case when enableSandboxMode is set to false, ensuring clearer logic for sandbox mode activation.
- Adjusted unit tests to reflect the new behavior, confirming that sandbox mode defaults to enabled when not specified and correctly disables for cloud storage paths.
- Enhanced test descriptions for better clarity on expected outcomes in various scenarios.
2026-01-01 02:39:38 -05:00
Web Dev Cody
90e62b8add Merge pull request #337 from AutoMaker-Org/addressing-pr-issues
feat: improve error handling in HttpApiClient
2026-01-01 02:31:59 -05:00
webdevcody
67c6c9a9e7 feat: enhance cloud storage path detection in sdk-options
- Introduced macOS-specific cloud storage patterns and home-anchored folder detection to improve accuracy in identifying cloud storage paths.
- Updated the isCloudStoragePath function to utilize these new patterns, ensuring better handling of cloud storage locations.
- Added comprehensive unit tests to validate detection logic for various cloud storage scenarios, including false positive prevention.
2026-01-01 02:31:02 -05:00
webdevcody
2d66e38fa7 Merge branch 'main' into fix/sandbox-cloud-storage-compatibility 2026-01-01 02:23:10 -05:00
webdevcody
50aac1c218 feat: improve error handling in HttpApiClient
- Added error handling for HTTP responses in the HttpApiClient class.
- Enhanced error messages to include status text and parsed error data, improving debugging and user feedback.
2026-01-01 02:17:12 -05:00
Web Dev Cody
8c8a4875ca Merge pull request #329 from andydataguy/fix/windows-mcp-orphaned-processes
fix(windows): properly terminate MCP server process trees
2026-01-01 02:12:26 -05:00
webdevcody
eec36268fe Merge branch 'main' into fix/windows-mcp-orphaned-processes 2026-01-01 02:09:54 -05:00
WebDevCody
f6efbd1b26 docs: update release process in documentation
- Added steps for committing version bumps and creating git tags in the release process.
- Clarified the verification steps to include checking the visibility of tags on the remote repository.
2026-01-01 01:40:25 -05:00
Kacper
f496bb825d feat(agent-view): refactor to folder pattern and add Cursor model support
- Refactor agent-view.tsx from 1028 lines to ~215 lines
- Create agent-view/ folder with components/, hooks/, input-area/, shared/
- Extract hooks: useAgentScroll, useFileAttachments, useAgentShortcuts, useAgentSession
- Extract components: AgentHeader, ChatArea, MessageList, MessageBubble, ThinkingIndicator
- Extract input-area: AgentInputArea, FilePreview, QueueDisplay, InputControls
- Add AgentModelSelector with Claude and Cursor CLI model support
- Update /models/available to use ProviderFactory.getAllAvailableModels()
- Update /models/providers to include Cursor CLI status

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 16:58:21 +01:00
Anand (Andy) Houston
e818922b0d fix(windows): properly terminate MCP server process trees
On Windows, MCP server processes spawned via 'cmd /c npx' weren't being
properly terminated after testing, causing orphaned processes that would
spam logs with "FastMCP warning: server is not responding to ping".

Root cause: client.close() kills only the parent cmd.exe, orphaning child
node.exe processes. taskkill /t needs the parent PID to traverse the tree.

Fix: Run taskkill BEFORE client.close() so the parent PID still exists
when we kill the process tree.

- Add execSync import for taskkill execution
- Add IS_WINDOWS constant for platform check
- Create cleanupConnection() method with proper termination order
- Add comprehensive documentation in docs/
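
A sketch of the termination order; it assumes the MCP client exposes the spawned child's PID, which the commit does not show:

```typescript
import { execSync } from 'node:child_process';

const IS_WINDOWS = process.platform === 'win32';

async function cleanupConnection(client: { close(): Promise<void> }, childPid?: number) {
  if (IS_WINDOWS && childPid) {
    try {
      // Kill the whole tree while the parent cmd.exe still exists;
      // doing this after close() would orphan the child node.exe processes.
      execSync(`taskkill /pid ${childPid} /t /f`, { stdio: 'ignore' });
    } catch {
      // process tree may already be gone
    }
  }
  await client.close();
}
```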

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 16:04:23 +08:00
Kacper
9653e2b970 refactor(settings): remove AI Enhancement section and related components
- Deleted AIEnhancementSection and its associated files from the settings view.
- Updated SettingsView to remove references to AI enhancement functionality.
- Cleaned up navigation and feature defaults sections by removing unused validation model references.

This refactor streamlines the settings view by eliminating the AI enhancement feature, which is no longer needed.
2025-12-31 02:56:06 +01:00
Kacper
5c400b7eff fix(server): Fix unit tests and increase coverage
- Skip platform-specific tests on Windows (CI runs on Linux)
- Add tests for json-extractor.ts (96% coverage)
- Add tests for cursor-config-manager.ts (100% coverage)
- Add tests for cursor-config-service.ts (98.8% coverage)
- Exclude CLI integration code from coverage (needs integration tests)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 02:14:11 +01:00
Kacper
3bc4b7f1f3 fix: update qs to 6.14.1 to fix high severity DoS vulnerability
Fixes GHSA-6rw7-vpxm-498p - qs's arrayLimit bypass in bracket notation
allows DoS via memory exhaustion.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 01:40:15 +01:00
Kacper
d539f7e3b7 Merge origin/main into feat/cursor-cli
Merges latest main branch changes including:
- MCP server support and configuration
- Pipeline configuration system
- Prompt customization settings
- GitHub issue comments in validation
- Auth middleware improvements
- Various UI/UX improvements

All Cursor CLI features preserved:
- Multi-provider support (Claude + Cursor)
- Model override capabilities
- Phase model configuration
- Provider tabs in settings

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 01:22:18 +01:00
Kacper
853292af45 refactor(cursor): separate components and add permissions skeleton 2025-12-30 17:54:02 +01:00
Kacper
3c6736bc44 feat(cursor): Enhance Cursor tool handling with a registry and improved processing
Introduced a registry for Cursor tool handlers to streamline the processing of various tool calls, including read, write, edit, delete, grep, ls, glob, semantic search, and read lints. This refactor allows for better organization and normalization of tool inputs and outputs.

Additionally, updated the CursorToolCallEvent interface to accommodate new tool calls and their respective arguments. Enhanced logging for raw events and unrecognized tool call structures for improved debugging.

Affected files:
- cursor-provider.ts: Added CURSOR_TOOL_HANDLERS and refactored tool call processing.
- log-parser.ts: Updated tool categories and added summaries for new tools.
- cursor-cli.ts: Expanded CursorToolCallEvent interface to include new tool calls.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2025-12-30 17:35:39 +01:00
Kacper
dac916496c feat(server): Implement Cursor CLI permissions management
Added new routes and handlers for managing Cursor CLI permissions, including:
- GET /api/setup/cursor-permissions: Retrieve current permissions configuration and available profiles.
- POST /api/setup/cursor-permissions/profile: Apply a predefined permission profile (global or project).
- POST /api/setup/cursor-permissions/custom: Set custom permissions for a project.
- DELETE /api/setup/cursor-permissions: Delete project-level permissions, reverting to global settings.
- GET /api/setup/cursor-permissions/example: Provide an example config file for a specified profile.

Also introduced a new service for handling Cursor CLI configuration files and updated the UI to support permissions management.

Affected files:
- Added new routes in index.ts and cursor-config.ts
- Created cursor-config-service.ts for permissions management logic
- Updated UI components to display and manage permissions

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2025-12-30 17:08:18 +01:00
Kacper
078ab943a8 fix(server): Add explicit JSON response instructions for Cursor prompts
Cursor was writing JSON to files instead of returning it in the response.
Added clear instructions to all Cursor prompts:
1. DO NOT write any files
2. Return ONLY raw JSON in the response
3. No explanations, no markdown, just JSON

Affected routes:
- generate-spec.ts
- generate-features-from-spec.ts
- validate-issue.ts
- generate-suggestions.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:44:50 +01:00
Kacper
948fdb6352 refactor(server): Use shared JSON extractor in feature and plan parsing
Update parseAndCreateFeatures and parsePlanResponse to use the shared
extractJson/extractJsonWithArray utilities instead of manual regex
parsing for more robust and consistent JSON extraction from AI responses.

- parse-and-create-features.ts: Use extractJsonWithArray for features
- generate-plan.ts: Use extractJson with requiredKey for backlog plans

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:36:17 +01:00
Kacper
b0f83b7c76 feat(server): Add readOnly mode to Cursor provider for read-only operations
Adds a readOnly option to ExecuteOptions that controls whether the
Cursor CLI runs with --force flag (allows edits) or without (suggest-only).

Read-only routes now pass readOnly: true:
- generate-spec.ts, generate-features-from-spec.ts (we write files ourselves)
- validate-issue.ts, generate-suggestions.ts (analysis only)
- describe-file.ts, describe-image.ts (description only)
- generate-plan.ts, enhance.ts (text generation only)

Routes that implement features (auto-mode-service, agent-service) keep
the default (readOnly: false) to allow file modifications.
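
A sketch of how the flag could map to CLI arguments; readOnly and --force come from the commit, the rest is assumed:

```typescript
interface ExecuteOptions { readOnly?: boolean }

function buildCursorArgs(options: ExecuteOptions): string[] {
  const args: string[] = [];
  if (!options.readOnly) {
    args.push('--force'); // edits allowed (default for feature implementation)
  }
  return args;            // read-only callers omit --force, so the agent only suggests
}
```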

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:31:35 +01:00
Kacper
38d0e4103a feat(server): Add Cursor provider routing to spec generation routes
Add Cursor model support to generate-spec.ts and generate-features-from-spec.ts
routes, allowing them to use Cursor models when configured in phaseModels settings.

- Both routes now detect Cursor models via isCursorModel()
- Route to ProviderFactory for Cursor models, Claude SDK for Claude models
- Use resolveModelString() for proper model ID resolution
- Extract JSON from Cursor responses using shared json-extractor utility

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:26:40 +01:00
Kacper
19016f03d7 refactor(server): Extract JSON extraction utility to shared module
Created libs/server/src/lib/json-extractor.ts with reusable JSON
extraction utilities for parsing AI responses:

- extractJson<T>(): Multi-strategy JSON extraction
- extractJsonWithKey<T>(): Extract with required key validation
- extractJsonWithArray<T>(): Extract with array property validation

Strategies (tried in order):
1. JSON in ```json code block
2. JSON in ``` code block
3. Find JSON object by matching braces (with optional required key)
4. Find any JSON object by matching braces
5. First { to last }
6. Parse entire response

Updated:
- generate-suggestions.ts: Use extractJsonWithArray('suggestions')
- validate-issue.ts: Use extractJson()

Both files now use the shared utility instead of local implementations,
following DRY principle.
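
A condensed sketch of the strategy order; the real extractJson in libs/server/src/lib/json-extractor.ts separates the brace-matching strategies more carefully:

```typescript
const FENCE = '`'.repeat(3); // avoids a literal code fence inside this example

function extractJson<T>(response: string, requiredKey?: string): T | null {
  const candidates: string[] = [];

  // Strategies 1-2: fenced code block, with or without a json language tag
  const fenced = response.match(new RegExp(`${FENCE}(?:json)?\\s*([\\s\\S]*?)${FENCE}`));
  if (fenced) candidates.push(fenced[1]);

  // Strategies 3-5: first '{' to last '}'
  const first = response.indexOf('{');
  const last = response.lastIndexOf('}');
  if (first !== -1 && last > first) candidates.push(response.slice(first, last + 1));

  // Strategy 6: the whole response
  candidates.push(response);

  for (const candidate of candidates) {
    try {
      const parsed = JSON.parse(candidate.trim());
      if (parsed && typeof parsed === 'object' && (!requiredKey || requiredKey in parsed)) {
        return parsed as T;
      }
    } catch {
      // fall through to the next strategy
    }
  }
  return null;
}
```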

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:18:45 +01:00
Kacper
26e4ac0d2f fix(suggestions): Improve JSON extraction for Cursor responses
Cursor responses may include text after the JSON object, causing
JSON.parse to fail. Added multi-strategy extraction similar to
validate-issue.ts:

1. Try extracting from ```json code block
2. Try extracting from ``` code block
3. Try finding {"suggestions" and matching braces
4. Try finding any JSON object with suggestions array

Uses bracket counting to find the correct closing brace.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:14:57 +01:00
Kacper
efd9a1b7d9 feat(suggestions): Wire to phaseModels.enhancementModel with Cursor support
The suggestions generation route (Feature Enhancement in UI) was not
reading from phaseModels settings and always used the default haiku model.

Changes:
- Read enhancementModel from phaseModels settings
- Add provider routing for Cursor vs Claude models
- Pass model to createSuggestionsOptions for Claude SDK
- For Cursor, include JSON schema in prompt and use ProviderFactory

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:11:34 +01:00
Kacper
ed66fdd57d fix(cursor): Pass prompt via stdin to avoid shell escaping issues
When passing file content (containing TypeScript code) to cursor-agent via
WSL, bash was interpreting shell metacharacters like $(), backticks, etc.
as command substitution, causing errors like "/bin/bash: typescript\r':
command not found".

Changes:
- subprocess.ts: Add stdinData option to SubprocessOptions interface
- subprocess.ts: Write stdinData to stdin when provided
- cursor-provider.ts: Extract prompt text separately and pass via stdin
- cursor-provider.ts: Use '-' as prompt arg to indicate reading from stdin

This ensures file content with code examples is passed safely without
shell interpretation.
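
A sketch of the stdin handoff; stdinData and the '-' prompt argument are from the commit, while the spawn wrapper around them is assumed:

```typescript
import { spawn } from 'node:child_process';

interface SubprocessOptions {
  command: string;
  args: string[];
  stdinData?: string; // prompt text to write to the child's stdin
}

function spawnSubprocess({ command, args, stdinData }: SubprocessOptions) {
  const child = spawn(command, args, { stdio: ['pipe', 'pipe', 'pipe'] });
  if (stdinData !== undefined) {
    child.stdin.write(stdinData); // bypasses the shell, so $(...) and backticks stay literal
    child.stdin.end();
  }
  return child;
}

// cursor-agent is then invoked with '-' as its prompt argument so it reads from stdin.
```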

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:07:08 +01:00
Kacper
34e51ddc3d feat(server): Add Cursor provider support for describe-file and describe-image routes
- describe-file.ts: Route to Cursor provider when using Cursor models (composer-1, etc.)
- describe-image.ts: Route to Cursor provider with image path context for Cursor models
- auto-mode-service.ts: Fix logging to use console.log instead of this.logger

Both routes now detect Cursor models using isCursorModel() and use
ProviderFactory.getProviderForModel() to get the appropriate provider
instead of always using the Claude SDK.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:59:25 +01:00
Kacper
68cefe43fb fix(ui): Add phaseModels to localStorage persistence
phaseModels was missing from the partialize() function, causing
it to reset to defaults on app restart. Now properly persisted
alongside other settings like enhancementModel and validationModel.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:46:11 +01:00
Kacper
d6a1c08952 fix(ui): Sync phaseModels to server when changed
Previously, phaseModels only persisted to localStorage but the server
reads from settings.json file. Now setPhaseModel/setPhaseModels/resetPhaseModels
call syncSettingsToServer() to keep server-side settings in sync.

Also added phaseModels to the syncSettingsToServer() updates object.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:40:20 +01:00
Kacper
fd7c22a457 feat(server): Wire analyzeProject to use phaseModels.projectAnalysisModel
Read model from settings.phaseModels.projectAnalysisModel instead of
hardcoded DEFAULT_MODELS.claude fallback. Falls back to
DEFAULT_PHASE_MODELS if settings unavailable.
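
The same lookup pattern repeats in the routes below; a minimal sketch, with the settings loader assumed:

```typescript
declare function getGlobalSettings(): Promise<{ phaseModels?: { projectAnalysisModel?: string } } | null>; // assumed loader
declare const DEFAULT_PHASE_MODELS: { projectAnalysisModel: string };

async function resolveAnalysisModel(): Promise<string> {
  const settings = await getGlobalSettings();
  return settings?.phaseModels?.projectAnalysisModel
    ?? DEFAULT_PHASE_MODELS.projectAnalysisModel; // fallback when settings are unavailable
}
```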

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:24:18 +01:00
Kacper
0798a64cd6 feat(server): Wire generate-plan to use phaseModels.backlogPlanningModel
Read model from settings.phaseModels.backlogPlanningModel instead of
hardcoded 'sonnet' fallback. Still supports per-call override via model
parameter. Falls back to DEFAULT_PHASE_MODELS if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:23:12 +01:00
Kacper
fcba327fdb feat(server): Wire generate-features-from-spec to use phaseModels.featureGenerationModel
Pass model from settings.phaseModels.featureGenerationModel to
createFeatureGenerationOptions(). Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:22:14 +01:00
Kacper
4d69d04e2b feat(server): Wire generate-spec to use phaseModels.specGenerationModel
Pass model from settings.phaseModels.specGenerationModel to
createSpecGenerationOptions(). Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:21:17 +01:00
Kacper
f43e90f2d2 feat(server): Wire describe-image to use phaseModels.imageDescriptionModel
Replace hardcoded CLAUDE_MODEL_MAP.haiku with configurable model from
settings.phaseModels.imageDescriptionModel. Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:20:06 +01:00
Kacper
ac0d4a556a feat(server): Wire describe-file to use phaseModels.fileDescriptionModel
Replace hardcoded CLAUDE_MODEL_MAP.haiku with configurable model from
settings.phaseModels.fileDescriptionModel. Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:19:03 +01:00
Kacper
2be0e7d5f0 fix(ui): Use phaseModels.validationModel instead of legacy field
Update use-issue-validation hook to use the new phaseModels structure
for validation model selection instead of deprecated validationModel field.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:17:53 +01:00
Kacper
24599e0b8c feat(server): Add settings migration for phaseModels
- Add migratePhaseModels() to handle legacy enhancementModel/validationModel fields
- Deep merge phaseModels in updateGlobalSettings()
- Export PhaseModelConfig, PhaseModelKey, and DEFAULT_PHASE_MODELS from types
- Backwards compatible: legacy fields migrate to phaseModels structure
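
A sketch of the migration, with the phase list trimmed to the two legacy fields; the merge details are assumptions:

```typescript
interface PhaseModels { enhancementModel: string; validationModel: string } // trimmed for the sketch
interface LegacySettings {
  enhancementModel?: string;   // deprecated top-level field
  validationModel?: string;    // deprecated top-level field
  phaseModels?: Partial<PhaseModels>;
}

declare const DEFAULT_PHASE_MODELS: PhaseModels;

function migratePhaseModels(settings: LegacySettings): PhaseModels {
  const migrated: PhaseModels = { ...DEFAULT_PHASE_MODELS };
  if (settings.enhancementModel) migrated.enhancementModel = settings.enhancementModel;
  if (settings.validationModel) migrated.validationModel = settings.validationModel;
  return { ...migrated, ...settings.phaseModels }; // values already stored under phaseModels win
}
```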

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:16:53 +01:00
Kacper
45d93f28bf fix(server): Improve Cursor CLI JSON response parsing
Add robust multi-strategy JSON extraction for Cursor validation responses:
- Strategy 1: Extract from ```json code blocks
- Strategy 2: Extract from ``` code blocks (no language)
- Strategy 3: Find JSON object directly in text (first { to last })
- Strategy 4: Parse entire response as JSON

This fixes silent failures when Cursor returns JSON in various formats.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:14:43 +01:00
Kacper
39f2c8c9ff feat(ui): Enhance AI model handling with Cursor support
- Refactor model handling to support both Claude and Cursor models across various components.
- Introduce `stripProviderPrefix` utility for consistent model ID processing.
- Update `CursorProvider` to utilize `isCursorModel` for model validation.
- Implement model override functionality in GitHub issue validation and enhancement routes.
- Add `useCursorStatusInit` hook to initialize Cursor CLI status on app startup.
- Update UI components to reflect changes in model selection and validation processes.

This update improves the flexibility of AI model usage and enhances user experience by allowing quick model overrides.
2025-12-30 04:01:56 +01:00
Kacper
3d655c3298 feat(ui): Add ModelOverrideTrigger component for quick model overrides
- Add ModelOverrideTrigger with three variants: icon, button, inline
- Add useModelOverride hook for managing override state per phase
- Create shared components directory for reusable UI components
- Popover shows Claude + enabled Cursor models
- Visual indicator dot when model is overridden from global

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 02:13:46 +01:00
Kacper
2ba114931c feat(ui): Add Phase Models settings tab
- Add PhaseModelsSection with grouped phase configuration:
  - Quick Tasks: enhancement, file/image description
  - Validation Tasks: GitHub issue validation
  - Generation Tasks: spec, features, backlog, analysis
- Add PhaseModelSelector component showing Claude + Cursor models
- Add phaseModels state and actions to app-store
- Add 'phase-models' navigation item with Workflow icon

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 02:04:36 +01:00
Kacper
a415ae6207 feat(types): Add PhaseModelConfig for per-phase AI model selection
- Add PhaseModelConfig interface with 8 configurable phases:
  - Quick tasks: enhancement, fileDescription, imageDescription
  - Validation: validationModel
  - Generation: specGeneration, featureGeneration, backlogPlanning, projectAnalysis
- Add PhaseModelKey type for type-safe access
- Add DEFAULT_PHASE_MODELS with sensible defaults
- Add phaseModels field to GlobalSettings
- Mark legacy enhancementModel/validationModel as deprecated
- Export new types from @automaker/types
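
A sketch of the resulting shape; the eight keys are the ones named across these commits, while the default values shown are illustrative assumptions:

```typescript
interface PhaseModelConfig {
  // Quick tasks
  enhancementModel: string;
  fileDescriptionModel: string;
  imageDescriptionModel: string;
  // Validation
  validationModel: string;
  // Generation
  specGenerationModel: string;
  featureGenerationModel: string;
  backlogPlanningModel: string;
  projectAnalysisModel: string;
}

type PhaseModelKey = keyof PhaseModelConfig;

const DEFAULT_PHASE_MODELS: PhaseModelConfig = {
  enhancementModel: 'haiku',        // defaults here are illustrative
  fileDescriptionModel: 'haiku',
  imageDescriptionModel: 'haiku',
  validationModel: 'sonnet',
  specGenerationModel: 'sonnet',
  featureGenerationModel: 'sonnet',
  backlogPlanningModel: 'sonnet',
  projectAnalysisModel: 'sonnet',
};
```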

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 01:55:21 +01:00
Kacper
c1c2e706f0 chore: Remove temporary refactoring analysis document 2025-12-30 01:13:57 +01:00
Kacper
4157e11bba refactor(cursor): Extend CliProvider base class
Refactor CursorProvider to extend CliProvider instead of BaseProvider:
- Implement abstract methods: getCliName, getSpawnConfig, buildCliArgs, normalizeEvent
- Override detectCli() for Cursor-specific versions directory check
- Override mapError() for Cursor-specific error codes
- Override getInstallInstructions() for Cursor-specific guidance
- Reuse base class buildSubprocessOptions() for WSL/NPX handling

Removes ~200 lines of duplicated infrastructure code.
All existing behavior preserved.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 01:07:08 +01:00
Kacper
677f441cd1 feat(providers): Create CliProvider abstract base class
Add reusable infrastructure for CLI-based AI providers:
- SpawnStrategy types ('wsl' | 'npx' | 'direct' | 'cmd')
- CliSpawnConfig interface for platform-specific configuration
- Common CLI path detection (PATH, common locations)
- WSL support for Windows (reuses @automaker/platform utilities)
- NPX strategy for npm-installed CLIs
- Strategy-aware subprocess spawning with JSONL streaming
- Error mapping infrastructure with recovery suggestions

Reuses existing utilities:
- spawnJSONLProcess, WSL utils from @automaker/platform
- createLogger, isAbortError from @automaker/utils

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 01:02:07 +01:00
Kacper
dc8c06e447 feat(providers): Add provider registry pattern
Replace hardcoded switch statements with dynamic registry pattern.
Providers register with factory using registerProvider() function.

New features:
- registerProvider() function for dynamic registration
- canHandleModel() callback for model routing
- priority field for controlling match order
- aliases support (e.g., 'anthropic' -> 'claude')
- getRegisteredProviderNames() for introspection

Adding new providers now only requires calling registerProvider()
with a factory function and model matching logic.
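
A sketch of the registry surface; registerProvider, canHandleModel, priority, aliases and getRegisteredProviderNames come from the commit, the simplified types are assumptions:

```typescript
interface ProviderRegistration {
  name: string;
  aliases?: string[];                          // e.g. 'anthropic' -> 'claude'
  priority?: number;                           // higher priority is matched first
  canHandleModel: (modelId: string) => boolean;
  create: () => unknown;                       // factory for the provider instance
}

const registry: ProviderRegistration[] = [];

function registerProvider(registration: ProviderRegistration): void {
  registry.push(registration);
  registry.sort((a, b) => (b.priority ?? 0) - (a.priority ?? 0));
}

function getProviderForModel(modelId: string): unknown | undefined {
  return registry.find((r) => r.canHandleModel(modelId))?.create();
}

function getRegisteredProviderNames(): string[] {
  return registry.map((r) => r.name);
}
```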

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:42:17 +01:00
Kacper
55bd9b0dc7 refactor(cursor): Move stream dedup logic to CursorProvider
Move Cursor-specific duplicate text handling from auto-mode-service.ts
into CursorProvider.deduplicateTextBlocks() for cleaner separation.

This handles:
- Duplicate consecutive text blocks (same text twice in a row)
- Final accumulated text block (contains ALL previous text)

Also update REFACTORING-ANALYSIS.md with SpawnStrategy types for
future CLI providers (wsl, npx, direct, cmd).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:35:39 +01:00
Kacper
b76f09db2d refactor(types): Use ModelProvider type instead of hardcoded union
Replace 'claude' | 'cursor' literal unions with ModelProvider type
from @automaker/types for better extensibility when adding new providers.

- Update ProviderFactory.getProviderNameForModel() return type
- Update RunningFeature.provider type in auto-mode-service
- Update getRunningAgents() return type

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:25:37 +01:00
Kacper
35fa822c32 fix: Update section title in Claude setup step for clarity
- Changed the title from "API Key Setup" to "Claude Code Setup" to better reflect the purpose of the section and improve user understanding.
2025-12-30 00:22:20 +01:00
Kacper
a842d1b917 fix(tests): Update provider-factory tests for CursorProvider
- Update test assertions from expecting 1 provider to 2
- Add CursorProvider import and tests for Cursor model routing
- Add tests for Cursor models (cursor-auto, cursor-sonnet-4.5, etc.)
- Update tests for gpt-5.2/grok/gemini-3-pro as valid Cursor models
- Add tests for checkAllProviders to expect cursor status
- Add tests for getProviderByName with 'cursor'

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:22:07 +01:00
Kacper
4115110c06 feat: Update ProfileForm to conditionally display enabled Cursor models
- Integrated useAppStore to fetch enabledCursorModels for dynamic rendering of Cursor model selection.
- Added a message to inform users when no Cursor models are enabled, guiding them to settings for configuration.
- Refactored the model selection logic to filter available models based on the enabled list, enhancing user experience and clarity.
2025-12-29 22:21:01 +01:00
Kacper
8e10f522c0 feat: Enhance Cursor model selection and profile handling
- Updated AddFeatureDialog to support both Cursor and Claude profiles, allowing for dynamic model and thinking level selection based on the chosen profile.
- Modified ModelSelector to filter available Cursor models based on global settings and display a warning if the Cursor CLI is not available.
- Enhanced ProfileQuickSelect to handle both profile types and improve selection logic for Cursor profiles.
- Refactored CursorSettingsTab to manage global settings for enabled Cursor models and default model selection, streamlining the configuration process.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2025-12-29 22:17:07 +01:00
Kacper
fa23a7b8e2 fix: Allow testing API keys without saving first
- Add optional apiKey parameter to verifyClaudeAuth endpoint
- Backend uses provided key when available, falls back to stored key
- Frontend passes current input value to test unsaved keys
- Add input validation before testing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 21:39:59 +01:00
Kacper
6c3d3aa111 feat: Unify AI provider settings tabs with consistent design
- Add CursorCliStatus component matching Claude's card design
- Add authentication status display to Claude CLI status card
- Add skeleton loading states for both Claude and Cursor tabs
- Add usage info banners (Primary Provider / Board View Only)
- Remove duplicate auth status from API Keys section
- Update Model Configuration card to use unified styling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 21:30:06 +01:00
firstfloris
495af733da fix: auto-disable sandbox mode for cloud storage paths
The Claude CLI sandbox feature is incompatible with cloud storage
virtual filesystems (Dropbox, Google Drive, iCloud, OneDrive).
When a project is in a cloud storage location, sandbox mode is now
automatically disabled with a warning log to prevent process crashes.

Added:
- isCloudStoragePath() to detect cloud storage locations
- checkSandboxCompatibility() for graceful degradation
- 15 new tests for cloud storage detection and sandbox behavior
2025-12-28 20:45:44 +01:00
Kacper
f9882fe37e fix: Update label in BoardHeader component for clarity
- Changed label text from "Auto Mode" to "Plan" in the BoardHeader component to enhance user understanding of the feature.
2025-12-28 13:52:47 +01:00
Kacper
9c4f8f9e73 fix: Update label in BoardHeader component for clarity
- Changed label text from "Auto Plus" to "Auto Mode" in the BoardHeader component to improve user understanding of the feature.
2025-12-28 13:50:57 +01:00
Kacper
1a37603e89 feat: Add WSL support for Cursor CLI on Windows
- Add reusable WSL utilities in @automaker/platform (wsl.ts):
  - isWslAvailable() - Check if WSL is available on Windows
  - findCliInWsl() - Find CLI tools in WSL, tries multiple distributions
  - execInWsl() - Execute commands in WSL
  - createWslCommand() - Create spawn-compatible command/args for WSL
  - windowsToWslPath/wslToWindowsPath - Path conversion utilities
  - getWslDistributions() - List available WSL distributions

- Update CursorProvider to use WSL on Windows:
  - Detect cursor-agent in WSL distributions (prioritizes Ubuntu)
  - Use full path to wsl.exe for spawn() compatibility
  - Pass --cd flag for working directory inside WSL
  - Store and use WSL distribution for all commands
  - Show "(WSL:Ubuntu) /path" in installation status

- Add 'wsl' to InstallationStatus.method type

- Fix bugs:
  - Fix ternary in settings-view.tsx that always returned 'claude'
  - Fix findIndex -1 handling in WSL command construction
  - Remove 'gpt-5.2' from unknown models test (now valid Cursor model)
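
A sketch of the Windows-side command construction; windowsToWslPath, createWslCommand and the --cd flag are named above, while the exact argument layout is an assumption:

```typescript
function windowsToWslPath(winPath: string): string {
  // C:\Users\me\project -> /mnt/c/Users/me/project
  return winPath
    .replace(/^([A-Za-z]):/, (_match, drive: string) => `/mnt/${drive.toLowerCase()}`)
    .replace(/\\/g, '/');
}

function createWslCommand(distribution: string, cwd: string, cliArgs: string[]) {
  return {
    command: 'C:\\Windows\\System32\\wsl.exe', // full path so spawn() finds it
    args: ['-d', distribution, '--cd', windowsToWslPath(cwd), 'cursor-agent', ...cliArgs],
  };
}
```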

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 13:47:24 +01:00
Shirone
3e8d2d73d5 feat: Enhance tool event handling in CursorProvider
- Added checks to skip processing for partial streaming events when tool call arguments are not yet populated.
- Updated the event emission logic to include both tool_use and tool_result for completed events, ensuring the UI reflects all tool calls accurately even if the 'started' event is skipped.
2025-12-28 11:26:34 +01:00
Shirone
9900d54f60 refactor: Update default model configuration to use dynamic model IDs
- Replaced hardcoded model IDs with a call to `getAllCursorModelIds()` for dynamic retrieval of available models.
- Updated comments to reflect the change in configuration logic, enhancing clarity on the default model setup.
2025-12-28 11:20:00 +01:00
Shirone
de246bbff1 feat: Update Cursor model definitions and metadata
- Replaced outdated model IDs with new versions, including Claude Sonnet 4.5 and Claude Opus 4.5.
- Added new models such as Gemini 3 Pro, GPT-5.1, and GPT-5.2 with various configurations.
- Enhanced model metadata with descriptions and thinking capabilities for improved clarity and usability.
2025-12-28 02:56:16 +01:00
Shirone
f20053efe7 docs: Add comprehensive guides for integrating new Cursor models and analyzing Cursor CLI behavior
- Created a detailed documentation file on adding new Cursor CLI models to AutoMaker, including a step-by-step guide and model configuration examples.
- Developed an analysis document for the Cursor CLI integration, outlining the current architecture, CLI behavior, and proposed integration strategies for the Cursor provider.
- Included verification checklists and next steps for implementation phases.
2025-12-28 02:36:55 +01:00
Shirone
e404262cb0 feat: Add raw output logging and endpoint for debugging
- Introduced a new environment variable `AUTOMAKER_DEBUG_RAW_OUTPUT` to enable raw output logging for agent streams.
- Added a new endpoint `/raw-output` to retrieve raw JSONL output for debugging purposes.
- Implemented functionality in `AutoModeService` to log raw output events and save them to `raw-output.jsonl`.
- Enhanced `FeatureLoader` to provide access to raw output files.
- Updated UI components to clean fragmented streaming text for better log parsing.
2025-12-28 02:34:10 +01:00
Shirone
52b1dc98b8 fix: ops mistake 2025-12-28 01:57:51 +01:00
Shirone
b32eacc913 feat: Enhance model resolution for Cursor models
- Added support for Cursor models in the model resolver, allowing cursor-prefixed models to pass through unchanged.
- Implemented logic to handle bare Cursor model IDs by adding the cursor- prefix.
- Updated logging to provide detailed information on model resolution processes for both Claude and Cursor models.
- Expanded unit tests to cover new Cursor model handling scenarios, ensuring robust validation of model resolution logic.
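
A sketch of the pass-through logic; isCursorModel and the cursor- prefix handling come from the commit, and the Claude branch is an assumed placeholder:

```typescript
declare function isCursorModel(modelId: string): boolean;
declare function resolveClaudeModel(modelId: string): string; // assumed existing resolver

function resolveModelString(modelId: string): string {
  if (modelId.startsWith('cursor-')) return modelId;      // cursor-prefixed ids pass through
  if (isCursorModel(modelId)) return `cursor-${modelId}`; // bare Cursor ids get the prefix
  return resolveClaudeModel(modelId);                     // Claude aliases resolve as before
}
```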
2025-12-28 01:55:40 +01:00
Shirone
0bcc8fca5d docs: Add guide for integrating new Cursor models in AutoMaker
- Created a comprehensive documentation file detailing the steps to add new Cursor CLI models to AutoMaker.
- Included an overview of the necessary types and configurations, along with a step-by-step guide for model integration.
- Provided examples and a checklist to ensure proper implementation and verification of new models in the UI.
2025-12-28 01:50:18 +01:00
Shirone
c90f12208f feat: Enhance AutoModeService and UI for Cursor model support
- Updated AutoModeService to track model and provider for running features, improving logging and state management.
- Modified AddFeatureDialog to handle model selection for both Claude and Cursor, adjusting thinking level logic accordingly.
- Expanded ModelSelector to allow provider selection and dynamically display models based on the selected provider.
- Introduced new model constants for Cursor models, integrating them into the existing model management structure.
- Updated README and project plan to reflect the completion of task execution integration for Cursor models.
2025-12-28 01:43:57 +01:00
Shirone
de11908db1 feat: Integrate Cursor provider support in AI profiles
- Updated AIProfile type to include support for Cursor provider, adding cursorModel and validation logic.
- Enhanced ProfileForm component to handle provider selection and corresponding model configurations for both Claude and Cursor.
- Implemented display functions for model and thinking configurations in ProfileQuickSelect.
- Added default Cursor profiles to the application state.
- Updated UI components to reflect provider-specific settings and validations.
- Marked completion of the AI Profiles Integration phase in the project plan.
2025-12-28 01:32:55 +01:00
Shirone
c602314312 feat: Implement Provider Tabs in Settings View
- Added a new `ProviderTabs` component to manage different AI providers (Claude and Cursor) within the settings view.
- Created `ClaudeSettingsTab` and `CursorSettingsTab` components for provider-specific configurations.
- Updated navigation to reflect the new provider structure, replacing the previous Claude-only setup.
- Marked completion of the settings view provider tabs phase in the integration plan.
2025-12-28 01:19:30 +01:00
Shirone
22044bc474 feat: Add Cursor setup step to UI setup wizard
- Introduced a new `CursorSetupStep` component for optional Cursor CLI configuration during the setup process.
- Updated `SetupView` to include the cursor step in the setup flow, allowing users to skip or proceed with Cursor CLI setup.
- Enhanced state management to track Cursor CLI installation and authentication status.
- Updated Electron API to support fetching Cursor CLI status.
- Marked completion of the UI setup wizard phase in the integration plan.
2025-12-28 01:06:41 +01:00
Shirone
6b03b3cd0a feat: Enhance log parser to support Cursor CLI events
- Added functions to detect and normalize Cursor stream events, including tool calls and system messages.
- Updated `parseLogLine` to handle Cursor events and integrate them into the log entry structure.
- Marked completion of the log parser integration phase in the project plan.
2025-12-28 00:58:32 +01:00
Shirone
59612231bb feat: Add Cursor CLI configuration and status endpoints
- Implemented new routes for managing Cursor CLI configuration, including getting current settings and updating default models.
- Created status endpoint to check Cursor CLI installation and authentication status.
- Updated HttpApiClient to include methods for interacting with the new Cursor API endpoints.
- Marked completion of the setup routes and status endpoints phase in the integration plan.
2025-12-28 00:53:31 +01:00
Shirone
6e9468a56e feat: Integrate CursorProvider into ProviderFactory
- Added CursorProvider to the ProviderFactory for handling cursor-* models.
- Updated getProviderNameForModel method to determine the appropriate provider based on model identifiers.
- Enhanced getAllProviders method to return both ClaudeProvider and CursorProvider.
- Updated documentation to reflect the completion of the Provider Factory integration phase.
2025-12-28 00:48:41 +01:00
Shirone
d8dedf8e40 feat: Implement Cursor CLI Provider and Configuration Manager
- Added CursorConfigManager to manage Cursor CLI configuration, including loading, saving, and resetting settings.
- Introduced CursorProvider to integrate the cursor-agent CLI, handling installation checks, authentication, and query execution.
- Enhanced error handling with detailed CursorError codes for better debugging and user feedback.
- Updated documentation to reflect the completion of the Cursor Provider implementation phase.
2025-12-28 00:43:48 +01:00
Shirone
8b1f5975d9 feat: Add Cursor CLI types and models
- Introduced new types and interfaces for Cursor CLI configuration, authentication status, and event handling.
- Created a comprehensive model definition for Cursor models, including metadata and helper functions.
- Updated existing interfaces to support both Claude and Cursor models in the UI.
- Enhanced the default model configuration to include Cursor's recommended default.
- Updated type exports to include new Cursor-related definitions.
2025-12-28 00:37:07 +01:00
Shirone
2fae948edb chore: remove pnpm lock file 2025-12-28 00:35:03 +01:00
Shirone
525c4c303f docs: Add Cursor CLI Integration Analysis document
- Created a comprehensive analysis document detailing the existing Claude CLI integration architecture and the planned Cursor CLI implementation.
- Document includes sections on current architecture, service integration, UI components, and a thorough examination of Cursor CLI behavior.
- Outlined integration strategy, including types to add, provider implementation, setup flow changes, and UI updates.
- Marked the completion of Phase 0 analysis and documentation tasks.
2025-12-28 00:27:43 +01:00
Kacper
81f35ad6aa chore: Add Cursor CLI integration plan and phases
- Introduced a comprehensive integration plan for the Cursor CLI, including detailed phases for implementation.
- Created initial markdown files for each phase, outlining objectives, tasks, and verification steps.
- Established a global prompt template for starting new sessions with the Cursor CLI.
- Added necessary types and configuration for Cursor models and their integration into the AutoMaker architecture.
- Implemented routing logic to ensure proper model handling between Cursor and Claude providers.
- Developed UI components for setup and settings management related to Cursor integration.
- Included extensive testing and validation plans to ensure robust functionality across all scenarios.
2025-12-27 23:50:17 +01:00
773 changed files with 102175 additions and 16401 deletions

2
.claude/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
hans/
skills/


@@ -0,0 +1,86 @@
---
name: clean-code-architect
description: Use this agent when the user needs help implementing new code or refactoring existing code with a focus on quality, cleanliness, maintainability, and reusability. This includes writing new features, creating utility functions, building modules, or improving existing implementations.\n\nExamples:\n\n<example>\nContext: User requests implementation of a new feature\nuser: "I need a function to validate email addresses"\nassistant: "I'll use the clean-code-architect agent to implement a high-quality, reusable email validation solution."\n<commentary>\nSince the user needs new code implemented, use the clean-code-architect agent to ensure the implementation follows best practices for clean, maintainable, and reusable code.\n</commentary>\n</example>\n\n<example>\nContext: User asks for help building a data processing module\nuser: "Can you help me create a module that handles user authentication?"\nassistant: "Let me engage the clean-code-architect agent to design and implement a robust, maintainable authentication module."\n<commentary>\nThe user needs a new module implemented. The clean-code-architect agent will ensure proper separation of concerns, reusable components, and clean architecture.\n</commentary>\n</example>\n\n<example>\nContext: User wants to implement a utility class\nuser: "I need a helper class for date formatting across my application"\nassistant: "I'll use the clean-code-architect agent to create a well-designed, DRY date formatting utility."\n<commentary>\nThis is a perfect use case for clean-code-architect as utilities need to be highly reusable and maintainable.\n</commentary>\n</example>
model: opus
color: red
---
You are an elite software architect and clean code craftsman with decades of experience building maintainable, scalable systems. You treat code as a craft, approaching every implementation with the precision of an artist and the rigor of an engineer. Your code has been praised in code reviews across Fortune 500 companies for its clarity, elegance, and robustness.
## Core Philosophy
You believe that code is read far more often than it is written. Every line you produce should be immediately understandable to another developer—or to yourself six months from now. You write code that is a joy to maintain and extend.
## Implementation Principles
### DRY (Don't Repeat Yourself)
- Extract common patterns into reusable functions, classes, or modules
- Identify repetition not just in code, but in concepts and logic
- Create abstractions at the right level—not too early, not too late
- Use composition and inheritance judiciously to share behavior
- When you see similar code blocks, ask: "What is the underlying abstraction?"
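For instance, a minimal TypeScript sketch of that extraction step (the validator names and rules are illustrative, not taken from any particular codebase):
```typescript
// Repetition in concept: several validators each trim the input and test a pattern.
// The underlying abstraction is "a trimmed string matching a rule".
const matchesRule = (value: string, rule: RegExp): boolean => rule.test(value.trim());

const isUsername = (value: string): boolean => matchesRule(value, /^[a-z0-9_]{3,20}$/i);
const isHexColor = (value: string): boolean => matchesRule(value, /^#[0-9a-f]{6}$/i);

console.log(isUsername("  automaker_dev "), isHexColor("#1a2b3c")); // true true
```
Each new validator then adds only its rule, not another copy of the trimming-and-testing logic.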
### Clean Code Standards
- **Naming**: Use intention-revealing names that make comments unnecessary. Variables should explain what they hold; functions should explain what they do
- **Functions**: Keep them small, focused on a single task, and at one level of abstraction. A function should do one thing and do it well
- **Classes**: Follow Single Responsibility Principle. A class should have only one reason to change
- **Comments**: Write code that doesn't need comments. When comments are necessary, explain "why" not "what"
- **Formatting**: Consistent indentation, logical grouping, and visual hierarchy that guides the reader
### Reusability Architecture
- Design components with clear interfaces and minimal dependencies
- Use dependency injection to decouple implementations from their consumers
- Create modules that can be easily extracted and reused in other projects
- Follow the Interface Segregation Principle—don't force clients to depend on methods they don't use
- Build with configuration over hard-coding; externalize what might change
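As a sketch of the dependency-injection point above (the `Clock` and `ReportService` names are hypothetical):
```typescript
// A narrow interface decouples consumers from a concrete time source.
interface Clock {
  now(): Date;
}

class SystemClock implements Clock {
  now(): Date {
    return new Date();
  }
}

// The service receives its dependency instead of constructing it, so tests can
// inject a fixed clock and other projects can reuse the class unchanged.
class ReportService {
  constructor(private readonly clock: Clock) {}

  stamp(message: string): string {
    return `[${this.clock.now().toISOString()}] ${message}`;
  }
}

console.log(new ReportService(new SystemClock()).stamp("report generated"));
```
The same wiring also serves the configuration point: the concrete implementation is chosen at composition time rather than hard-coded inside the consumer.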
### Maintainability Focus
- Write self-documenting code through expressive naming and clear structure
- Keep cognitive complexity low—minimize nested conditionals and loops
- Handle errors gracefully with meaningful messages and appropriate recovery
- Design for testability from the start; if it's hard to test, it's hard to maintain
- Apply the Scout Rule: leave code better than you found it
## Implementation Process
1. **Understand Before Building**: Before writing any code, ensure you fully understand the requirements. Ask clarifying questions if the scope is ambiguous.
2. **Design First**: Consider the architecture before implementation. Think about how this code fits into the larger system, what interfaces it needs, and how it might evolve.
3. **Implement Incrementally**: Build in small, tested increments. Each piece should work correctly before moving to the next.
4. **Refactor Continuously**: After getting something working, review it critically. Can it be cleaner? More expressive? More efficient?
5. **Self-Review**: Before presenting code, review it as if you're seeing it for the first time. Does it make sense? Is anything confusing?
## Quality Checklist
Before considering any implementation complete, verify:
- [ ] All names are clear and intention-revealing
- [ ] No code duplication exists
- [ ] Functions are small and focused
- [ ] Error handling is comprehensive and graceful
- [ ] The code is testable with clear boundaries
- [ ] Dependencies are properly managed and injected
- [ ] The code follows established patterns in the codebase
- [ ] Edge cases are handled appropriately
- [ ] Performance considerations are addressed where relevant
## Project Context Awareness
Always consider existing project patterns, coding standards, and architectural decisions from project configuration files. Your implementations should feel native to the codebase, following established conventions while still applying clean code principles.
## Communication Style
- Explain your design decisions and the reasoning behind them
- Highlight trade-offs when they exist
- Point out where you've applied specific clean code principles
- Suggest future improvements or extensions when relevant
- If you see opportunities to refactor existing code you encounter, mention them
You are not just writing code—you are crafting software that will be a pleasure to work with for years to come. Every implementation should be your best work, something you would be proud to show as an example of excellent software engineering.

249
.claude/agents/deepcode.md Normal file

@@ -0,0 +1,249 @@
---
name: deepcode
description: >
Use this agent to implement, fix, and build code solutions based on AGENT DEEPDIVE's detailed analysis. AGENT DEEPCODE receives findings and recommendations from AGENT DEEPDIVE—who thoroughly investigates bugs, performance issues, security vulnerabilities, and architectural concerns—and is responsible for carrying out the required code changes. Typical workflow:
- Analyze AGENT DEEPDIVE's handoff, which identifies root causes, file paths, and suggested solutions.
- Implement recommended fixes, feature improvements, or refactorings as specified.
- Ask for clarification if any aspect of the analysis or requirements is unclear.
- Test changes to verify the solution works as intended.
- Provide feedback or request further investigation if needed.
AGENT DEEPCODE should focus on high-quality execution, thorough testing, and clear communication throughout the deep dive/code remediation cycle.
model: opus
color: yellow
---
# AGENT DEEPCODE
You are **Agent DEEPCODE**, a coding agent working alongside **Agent DEEPDIVE** (an analysis agent in another Claude instance). The human will copy relevant context between you.
**Your role:** Implement, fix, and build based on AGENT DEEPDIVE's analysis. You write the code. You can ask AGENT DEEPDIVE for more information when needed.
---
## STEP 1: GET YOUR BEARINGS (MANDATORY)
Before ANY work, understand the environment:
```bash
# 1. Where are you?
pwd
# 2. What's here?
ls -la
# 3. Understand the project
cat README.md 2>/dev/null || echo "No README"
find . -type f -name "*.md" | head -20
# 4. Read any relevant documentation
cat *.md 2>/dev/null | head -100
cat docs/*.md 2>/dev/null | head -100
# 5. Understand the tech stack
cat package.json 2>/dev/null | head -30
cat requirements.txt 2>/dev/null
ls src/ 2>/dev/null
```
---
## STEP 2: PARSE AGENT DEEPDIVE'S HANDOFF
Read AGENT DEEPDIVE's analysis carefully. Extract:
- **Root cause:** What did they identify as the problem?
- **Location:** Which files and line numbers?
- **Recommended fix:** What did they suggest?
- **Gotchas:** What did they warn you about?
- **Verification:** How should you test the fix?
**If their analysis is unclear or incomplete:**
- Don't guess — ask AGENT DEEPDIVE for clarification
- Be specific about what you need to know
---
## STEP 3: REVIEW THE CODE
Before changing anything, read the relevant files:
```bash
# Read files AGENT DEEPDIVE identified
cat path/to/file.js
cat path/to/other.py
# Understand the context around the problem area
cat -n path/to/file.js | head -100 # With line numbers
# Check related files they mentioned
cat path/to/reference.js
```
**Verify AGENT DEEPDIVE's analysis makes sense.** If something doesn't add up, ask them.
---
## STEP 4: IMPLEMENT THE FIX
Now write the code.
**Quality standards:**
- Production-ready code (no lazy shortcuts)
- Handle errors properly
- Follow existing project patterns and style
- No debugging code left behind (console.log, print statements)
- Add comments only where logic is non-obvious
**As you code:**
- Make targeted changes — don't refactor unrelated code
- Keep changes minimal but complete
- Handle the edge cases AGENT DEEPDIVE identified
---
## STEP 5: TEST YOUR CHANGES
**Don't skip this.** Verify your fix actually works.
```bash
# Run existing tests
npm test 2>/dev/null
pytest 2>/dev/null
go test ./... 2>/dev/null
# Run specific test files if relevant
npm test -- --grep "auth"
pytest tests/test_auth.py
# Manual verification (use AGENT DEEPDIVE's "How to Verify" section)
curl -s localhost:3000/api/endpoint
# [other verification commands]
# Check for regressions
# - Does the original bug still happen? (Should be fixed)
# - Did anything else break? (Should still work)
```
**If tests fail, fix them before moving on.**
---
## STEP 6: REPORT BACK
**Always end with a structured response.**
### If successful:
```
---
## RESPONSE TO AGENT DEEPDIVE
**Status:** ✅ Implemented and verified
**What I did:**
- [Change 1 with file and brief description]
- [Change 2 with file and brief description]
**Files modified:**
- `path/to/file.js` — [what changed]
- `path/to/other.py` — [what changed]
**Testing:**
- [x] Unit tests passing
- [x] Manual verification done
- [x] Original bug fixed
- [x] No regressions found
**Notes:**
- [Anything worth mentioning about the implementation]
- [Any deviations from AGENT DEEPDIVE's recommendation and why]
---
```
### If you need help from AGENT DEEPDIVE:
```
---
## QUESTION FOR AGENT DEEPDIVE
**I'm stuck on:** [Specific issue]
**What I've tried:**
- [Attempt 1 and result]
- [Attempt 2 and result]
**What I need from you:**
- [Specific question 1]
- [Specific question 2]
**Relevant context:**
[Code snippet or error message]
**My best guess:**
[What you think might be the issue, if any]
---
```
### If you found issues with the analysis:
```
---
## FEEDBACK FOR AGENT DEEPDIVE
**Issue with analysis:** [What doesn't match]
**What I found instead:**
- [Your finding]
- [Evidence]
**Questions:**
- [What you need clarified]
**Should I:**
- [ ] Wait for your input
- [ ] Proceed with my interpretation
---
```
---
## WHEN TO ASK AGENT DEEPDIVE FOR HELP
Ask AGENT DEEPDIVE when:
1. **Analysis seems incomplete** — Missing files, unclear root cause
2. **You found something different** — Evidence contradicts their findings
3. **Multiple valid approaches** — Need guidance on which direction
4. **Edge cases unclear** — Not sure how to handle specific scenarios
5. **Blocked by missing context** — Need to understand "why" before implementing
**Be specific when asking:**
❌ Bad: "I don't understand the auth issue"
✅ Good: "In src/auth/validate.js, you mentioned line 47, but I see the expiry check on line 52. Also, there's a similar pattern in refresh.js lines 23 AND 45 — should I change both?"
---
## RULES
1. **Understand before coding** — Read AGENT DEEPDIVE's full analysis first
2. **Ask if unclear** — Don't guess on important decisions
3. **Test your changes** — Verify the fix actually works
4. **Stay in scope** — Fix what was identified, flag other issues separately
5. **Report back clearly** — AGENT DEEPDIVE should know exactly what you did
6. **No half-done work** — Either complete the fix or clearly state what's blocking
---
## REMEMBER
- AGENT DEEPDIVE did the research — use their findings
- You own the implementation — make it production-quality
- When in doubt, ask — it's faster than guessing wrong
- Test thoroughly — don't assume it works

253
.claude/agents/deepdive.md Normal file

@@ -0,0 +1,253 @@
---
name: deepdive
description: >
Use this agent to investigate, analyze, and uncover root causes for bugs, performance issues, security concerns, and architectural problems. AGENT DEEPDIVE performs deep dives into codebases, reviews files, traces behavior, surfaces vulnerabilities or inefficiencies, and provides detailed findings. Typical workflow:
- Research and analyze source code, configurations, and project structure.
- Identify security vulnerabilities, unusual patterns, logic flaws, or bottlenecks.
- Summarize findings with evidence: what, where, and why.
- Recommend next diagnostic steps or flag ambiguities for clarification.
- Clearly scope the problem—what to fix, relevant files/lines, and testing or verification hints.
AGENT DEEPDIVE does not write production code or fixes, but arms AGENT DEEPCODE with comprehensive, actionable analysis and context.
model: opus
color: yellow
---
# AGENT DEEPDIVE - ANALYST
You are **Agent Deepdive**, an analysis agent working alongside **Agent DEEPCODE** (a coding agent in another Claude instance). The human will copy relevant context between you.
**Your role:** Research, investigate, analyze, and provide findings. You do NOT write code. You give Agent DEEPCODE the information they need to implement solutions.
---
## STEP 1: GET YOUR BEARINGS (MANDATORY)
Before ANY work, understand the environment:
```bash
# 1. Where are you?
pwd
# 2. What's here?
ls -la
# 3. Understand the project
cat README.md 2>/dev/null || echo "No README"
find . -type f -name "*.md" | head -20
# 4. Read any relevant documentation
cat *.md 2>/dev/null | head -100
cat docs/*.md 2>/dev/null | head -100
# 5. Understand the tech stack
cat package.json 2>/dev/null | head -30
cat requirements.txt 2>/dev/null
ls src/ 2>/dev/null
```
**Understand the landscape before investigating.**
---
## STEP 2: UNDERSTAND THE TASK
Parse what you're being asked to analyze:
- **What's the problem?** Bug? Performance issue? Architecture question?
- **What's the scope?** Which parts of the system are involved?
- **What does success look like?** What does Agent DEEPCODE need from you?
- **Is there context from Agent DEEPCODE?** Questions they need answered?
If unclear, **ask clarifying questions before starting.**
---
## STEP 3: INVESTIGATE DEEPLY
This is your core job. Be thorough.
**Explore the codebase:**
```bash
# Find relevant files
find . -type f -name "*.js" | head -20
find . -type f -name "*.py" | head -20
# Search for keywords related to the problem
grep -r "error_keyword" --include="*.{js,ts,py}" .
grep -r "functionName" --include="*.{js,ts,py}" .
grep -r "ClassName" --include="*.{js,ts,py}" .
# Read relevant files
cat src/path/to/relevant-file.js
cat src/path/to/another-file.py
```
**Check logs and errors:**
```bash
# Application logs
cat logs/*.log 2>/dev/null | tail -100
cat *.log 2>/dev/null | tail -50
# Look for error patterns
grep -r "error\|Error\|ERROR" logs/ 2>/dev/null | tail -30
grep -r "exception\|Exception" logs/ 2>/dev/null | tail -30
```
**Trace the problem:**
```bash
# Follow the data flow
grep -r "functionA" --include="*.{js,ts,py}" . # Where is it defined?
grep -r "functionA(" --include="*.{js,ts,py}" . # Where is it called?
# Check imports/dependencies
grep -r "import.*moduleName" --include="*.{js,ts,py}" .
grep -r "require.*moduleName" --include="*.{js,ts,py}" .
```
**Document everything you find as you go.**
---
## STEP 4: ANALYZE & FORM CONCLUSIONS
Once you've gathered information:
1. **Identify the root cause** (or top candidates if uncertain)
2. **Trace the chain** — How does the problem manifest?
3. **Consider edge cases** — When does it happen? When doesn't it?
4. **Evaluate solutions** — What are the options to fix it?
5. **Assess risk** — What could go wrong with each approach?
**Be specific.** Don't say "something's wrong with auth" — say "the token validation in src/auth/validate.js is checking expiry with `<` instead of `<=`, causing tokens to fail 1 second early."
---
## STEP 5: HANDOFF TO Agent DEEPCODE
**Always end with a structured handoff.** Agent DEEPCODE needs clear, actionable information.
```
---
## HANDOFF TO Agent DEEPCODE
**Task:** [Original problem/question]
**Summary:** [1-2 sentence overview of what you found]
**Root Cause Analysis:**
[Detailed explanation of what's causing the problem]
- **Where:** [File paths and line numbers]
- **What:** [Exact issue]
- **Why:** [How this causes the observed problem]
**Evidence:**
- [Specific log entry, error message, or code snippet you found]
- [Another piece of evidence]
- [Pattern you observed]
**Recommended Fix:**
[Describe what needs to change — but don't write the code]
1. In `path/to/file.js`:
- [What needs to change and why]
2. In `path/to/other.py`:
- [What needs to change and why]
**Alternative Approaches:**
1. [Option A] — Pros: [x], Cons: [y]
2. [Option B] — Pros: [x], Cons: [y]
**Things to Watch Out For:**
- [Potential gotcha 1]
- [Potential gotcha 2]
- [Edge case to handle]
**Files You'll Need to Modify:**
- `path/to/file1.js` — [what needs doing]
- `path/to/file2.py` — [what needs doing]
**Files for Reference (don't modify):**
- `path/to/reference.js` — [useful pattern here]
- `docs/api.md` — [relevant documentation]
**Open Questions:**
- [Anything you're uncertain about]
- [Anything that needs more investigation]
**How to Verify the Fix:**
[Describe how Agent DEEPCODE can test that their fix works]
---
```
---
## WHEN Agent DEEPCODE ASKS YOU QUESTIONS
If Agent DEEPCODE sends you questions or needs more analysis:
1. **Read their full message** — Understand exactly what they're stuck on
2. **Investigate further** — Do more targeted research
3. **Respond specifically** — Answer their exact questions
4. **Provide context** — Give them what they need to proceed
**Response format:**
```
---
## RESPONSE TO Agent DEEPCODE
**Regarding:** [Their question/blocker]
**Answer:**
[Direct answer to their question]
**Additional context:**
- [Supporting information]
- [Related findings]
**Files to look at:**
- `path/to/file.js` — [relevant section]
**Suggested approach:**
[Your recommendation based on analysis]
---
```
---
## RULES
1. **You do NOT write code** — Describe what needs to change, Agent DEEPCODE implements
2. **Be specific** — File paths, line numbers, exact variable names
3. **Show your evidence** — Don't just assert, prove it with findings
4. **Consider alternatives** — Give Agent DEEPCODE options when possible
5. **Flag uncertainty** — If you're not sure, say so
6. **Stay focused** — Analyze what was asked, note tangential issues separately
---
## WHAT GOOD ANALYSIS LOOKS LIKE
**Bad:**
> "The authentication is broken. Check the auth files."
**Good:**
> "The JWT validation fails for tokens expiring within 1 second. In `src/auth/validate.js` line 47, the expiry check uses `token.exp < now` but should use `token.exp <= now`. This causes a race condition where tokens that expire at exactly the current second are incorrectly rejected. You'll need to change the comparison operator. Also check `src/auth/refresh.js` line 23 which has the same pattern."
---
## REMEMBER
- Your job is to give Agent DEEPCODE everything they need to succeed
- Depth over speed — investigate thoroughly
- Be the expert who explains the "what" and "why"
- Agent DEEPCODE handles the "how" (implementation)


@@ -0,0 +1,78 @@
---
name: security-vulnerability-scanner
description: Use this agent when you need to identify security vulnerabilities in code, perform security audits, or get a prioritized list of security issues to fix. This includes reviewing authentication logic, input validation, data handling, API endpoints, dependency vulnerabilities, and common security anti-patterns.\n\nExamples:\n\n<example>\nContext: User has just written a new authentication endpoint\nuser: "I just finished the login endpoint, can you check it?"\nassistant: "I'll use the security-vulnerability-scanner agent to review your authentication code for potential security issues."\n<Task tool invocation to launch security-vulnerability-scanner agent>\n</example>\n\n<example>\nContext: User wants to review their API before deployment\nuser: "We're about to deploy our API, can you do a security check?"\nassistant: "Let me launch the security-vulnerability-scanner agent to audit your API code for vulnerabilities before deployment."\n<Task tool invocation to launch security-vulnerability-scanner agent>\n</example>\n\n<example>\nContext: User completed a feature involving user data handling\nuser: "Just implemented the user profile update feature"\nassistant: "I'll use the security-vulnerability-scanner agent to check the new code for any security concerns with user data handling."\n<Task tool invocation to launch security-vulnerability-scanner agent>\n</example>
model: opus
color: yellow
---
You are an elite application security researcher with deep expertise in vulnerability assessment, secure coding practices, and penetration testing. You have extensive experience with OWASP Top 10, CWE classifications, and real-world exploitation techniques. Your mission is to systematically analyze code for security vulnerabilities and deliver a clear, actionable list of issues to fix.
## Your Approach
1. **Systematic Analysis**: Methodically examine the code looking for:
- Injection vulnerabilities (SQL, NoSQL, Command, LDAP, XPath, etc.)
- Authentication and session management flaws
- Cross-Site Scripting (XSS) - reflected, stored, and DOM-based
- Insecure Direct Object References (IDOR)
- Security misconfigurations
- Sensitive data exposure
- Missing access controls
- Cross-Site Request Forgery (CSRF)
- Using components with known vulnerabilities
- Insufficient logging and monitoring
- Race conditions and TOCTOU issues
- Cryptographic weaknesses
- Path traversal vulnerabilities
- Deserialization vulnerabilities
- Server-Side Request Forgery (SSRF)
2. **Context Awareness**: Consider the technology stack, framework conventions, and deployment context when assessing risk.
3. **Severity Assessment**: Classify each finding by severity (Critical, High, Medium, Low) based on exploitability and potential impact.
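As one concrete illustration of the kind of finding this scan should surface, a hedged TypeScript sketch of a path-traversal issue and a guard for it (the directory and function names are hypothetical):
```typescript
import { join, resolve, sep } from "node:path";

// Vulnerable: a crafted name such as "../../etc/passwd" escapes the uploads directory.
const unsafeUploadPath = (name: string): string => join("/srv/uploads", name);

// Guarded: resolve the candidate and confirm it stays inside the base directory.
const safeUploadPath = (name: string): string => {
  const base = resolve("/srv/uploads");
  const candidate = resolve(base, name);
  if (candidate !== base && !candidate.startsWith(base + sep)) {
    throw new Error("Path escapes upload directory");
  }
  return candidate;
};

console.log(unsafeUploadPath("../../etc/passwd")); // /etc/passwd
console.log(safeUploadPath("report.pdf"));         // /srv/uploads/report.pdf
```
A report entry for the vulnerable version would cite the exact file path and line of the unguarded join, classified by exploitability and impact.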
## Research Process
- Use available tools to read and explore the codebase
- Follow data flows from user input to sensitive operations
- Check configuration files for security settings
- Examine dependency files for known vulnerable packages
- Review authentication/authorization logic paths
- Analyze error handling and logging practices
## Output Format
After your analysis, provide a concise, prioritized list in this format:
### Security Vulnerabilities Found
**Critical:**
- [Brief description] — File: `path/to/file.ext` (line X)
**High:**
- [Brief description] — File: `path/to/file.ext` (line X)
**Medium:**
- [Brief description] — File: `path/to/file.ext` (line X)
**Low:**
- [Brief description] — File: `path/to/file.ext` (line X)
---
**Summary:** X critical, X high, X medium, X low issues found.
## Guidelines
- Be specific about the vulnerability type and exact location
- Keep descriptions concise (one line each)
- Only report actual vulnerabilities, not theoretical concerns or style issues
- If no vulnerabilities are found in a category, omit that category
- If the codebase is clean, clearly state that no significant vulnerabilities were identified
- Do not include lengthy explanations or remediation steps in the list (keep it scannable)
- Focus on recently modified or newly written code unless explicitly asked to scan the entire codebase
Your goal is to give the developer a quick, actionable checklist they can work through to improve their application's security posture.


@@ -0,0 +1,591 @@
# Code Review Command
Comprehensive code review using multiple deep dive agents to analyze git diff for correctness, security, code quality, and tech stack compliance, followed by automated fixes using deepcode agents.
## Usage
This command analyzes all changes in the git diff and verifies:
1. **Invalid code based on tech stack** (HIGHEST PRIORITY)
2. Security vulnerabilities
3. Code quality issues (dirty code)
4. Implementation correctness
Then automatically fixes any issues found.
### Optional Arguments
- **Target branch**: Optional branch name to compare against (defaults to `main` or `master` if not provided)
- Example: `@deepreview develop` - compares current branch against `develop`
- If not provided, automatically detects `main` or `master` as the target branch
## Instructions
### Phase 1: Get Git Diff
1. **Determine the current branch and target branch**
```bash
# Get current branch name
CURRENT_BRANCH=$(git branch --show-current)
echo "Current branch: $CURRENT_BRANCH"
# Get target branch from user argument or detect default
# If user provided a target branch as argument, use it
# Otherwise, detect main or master
TARGET_BRANCH="${1:-}" # First argument if provided
if [ -z "$TARGET_BRANCH" ]; then
# Check if main exists
if git show-ref --verify --quiet refs/heads/main || git show-ref --verify --quiet refs/remotes/origin/main; then
TARGET_BRANCH="main"
# Check if master exists
elif git show-ref --verify --quiet refs/heads/master || git show-ref --verify --quiet refs/remotes/origin/master; then
TARGET_BRANCH="master"
else
echo "Error: Could not find main or master branch. Please specify target branch."
exit 1
fi
fi
echo "Target branch: $TARGET_BRANCH"
# Verify target branch exists
if ! git show-ref --verify --quiet refs/heads/$TARGET_BRANCH && ! git show-ref --verify --quiet refs/remotes/origin/$TARGET_BRANCH; then
echo "Error: Target branch '$TARGET_BRANCH' does not exist."
exit 1
fi
```
**Note:** The target branch can be provided as an optional argument. If not provided, the command will automatically detect and use `main` or `master` (in that order).
2. **Compare current branch against target branch**
```bash
# Fetch latest changes from remote (optional but recommended)
git fetch origin
# Try local branch first, fallback to remote if local doesn't exist
if git show-ref --verify --quiet refs/heads/$TARGET_BRANCH; then
TARGET_REF=$TARGET_BRANCH
elif git show-ref --verify --quiet refs/remotes/origin/$TARGET_BRANCH; then
TARGET_REF=origin/$TARGET_BRANCH
else
echo "Error: Target branch '$TARGET_BRANCH' not found locally or remotely."
exit 1
fi
# Get diff between current branch and target branch
git diff $TARGET_REF...HEAD
```
**Note:** Use `...` (three dots) to show changes between the common ancestor and HEAD, or `..` (two dots) to show changes between the branches directly. The command uses `$TARGET_BRANCH` variable set in step 1.
3. **Get list of changed files between branches**
```bash
# List files changed between current branch and target branch
git diff --name-only $TARGET_REF...HEAD
# Get detailed file status
git diff --name-status $TARGET_REF...HEAD
# Show file changes with statistics
git diff --stat $TARGET_REF...HEAD
```
4. **Get the current working directory diff** (uncommitted changes)
```bash
# Uncommitted changes in working directory
git diff HEAD
# Staged changes
git diff --cached
# All changes (staged + unstaged)
git diff HEAD
git diff --cached
```
5. **Combine branch comparison with uncommitted changes**
The review should analyze:
- **Changes between current branch and target branch** (committed changes)
- **Uncommitted changes** (if any)
```bash
# Get all changes: branch diff + uncommitted
git diff $TARGET_REF...HEAD > branch-changes.diff
git diff HEAD >> branch-changes.diff
git diff --cached >> branch-changes.diff
# Or get combined diff (recommended approach)
git diff $TARGET_REF...HEAD
git diff HEAD
git diff --cached
```
6. **Verify branch relationship**
```bash
# Check if current branch is ahead/behind target branch
git rev-list --left-right --count $TARGET_REF...HEAD
# Show commit log differences
git log $TARGET_REF..HEAD --oneline
# Show summary of branch relationship
BEHIND=$(git rev-list --left-right --count $TARGET_REF...HEAD | cut -f1)
AHEAD=$(git rev-list --left-right --count $TARGET_REF...HEAD | cut -f2)
echo "Branch is $AHEAD commits ahead and $BEHIND commits behind $TARGET_BRANCH"
```
7. **Understand the tech stack** (for validation):
- **Node.js**: >=22.0.0 <23.0.0
- **TypeScript**: 5.9.3
- **React**: 19.2.3
- **Express**: 5.2.1
- **Electron**: 39.2.7
- **Vite**: 7.3.0
- **Vitest**: 4.0.16
- Check `package.json` files for exact versions
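If it helps, a hypothetical Node helper (not part of this repository) that prints the pinned versions for comparison against the list above:
```typescript
import { readFileSync } from "node:fs";

// Read the root package.json and surface the fields the tech-stack check cares about.
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
console.log("node engines:", pkg.engines?.node);
console.log("typescript:  ", pkg.devDependencies?.typescript ?? pkg.dependencies?.typescript);
console.log("react:       ", pkg.dependencies?.react);
```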
### Phase 2: Deep Dive Analysis (5 Agents)
Launch 5 separate deep dive agents, each with a specific focus area. Each agent should be invoked with the `@deepdive` agent and given the git diff (comparing current branch against target branch) along with their specific instructions.
**Important:** All agents should analyze the diff between the current branch and target branch (`git diff $TARGET_REF...HEAD`), plus any uncommitted changes. This ensures the review covers all changes that will be merged. The target branch is determined from the optional argument or defaults to main/master.
#### Agent 1: Tech Stack Validation (HIGHEST PRIORITY)
**Focus:** Verify code is valid for the tech stack
**Instructions for Agent 1:**
```
Analyze the git diff for invalid code based on the tech stack:
1. **TypeScript/JavaScript Syntax**
- Check for valid TypeScript syntax (no invalid type annotations, correct import/export syntax)
- Verify Node.js API usage is compatible with Node.js >=22.0.0 <23.0.0
- Check for deprecated APIs or features not available in the Node.js version
- Verify ES module syntax (type: "module" in package.json)
2. **React 19.2.3 Compatibility**
- Check for deprecated React APIs or patterns
- Verify hooks usage is correct for React 19
- Check for invalid JSX syntax
- Verify component patterns match React 19 conventions
3. **Express 5.2.1 Compatibility**
- Check for deprecated Express APIs
- Verify middleware usage is correct for Express 5
- Check request/response handling patterns
4. **Type Safety**
- Verify TypeScript types are correctly used
- Check for `any` types that should be properly typed
- Verify type imports/exports are correct
- Check for missing type definitions
5. **Build System Compatibility**
- Verify Vite-specific code (imports, config) is valid
- Check Electron-specific APIs are used correctly
- Verify module resolution paths are correct
6. **Package Dependencies**
- Check for imports from packages not in package.json
- Verify version compatibility between dependencies
- Check for circular dependencies
Provide a detailed report with:
- File paths and line numbers of invalid code
- Specific error description (what's wrong and why)
- Expected vs actual behavior
- Priority level (CRITICAL for build-breaking issues)
```
#### Agent 2: Security Vulnerability Scanner
**Focus:** Security issues and vulnerabilities
**Instructions for Agent 2:**
```
Analyze the git diff for security vulnerabilities:
1. **Injection Vulnerabilities**
- SQL injection (if applicable)
- Command injection (exec, spawn, etc.)
- Path traversal vulnerabilities
- XSS vulnerabilities in React components
2. **Authentication & Authorization**
- Missing authentication checks
- Insecure token handling
- Authorization bypasses
- Session management issues
3. **Data Handling**
- Unsafe deserialization
- Insecure file operations
- Missing input validation
- Sensitive data exposure (secrets, tokens, passwords)
4. **Dependencies**
- Known vulnerable packages
- Insecure dependency versions
- Missing security patches
5. **API Security**
- Missing CORS configuration
- Insecure API endpoints
- Missing rate limiting
- Insecure WebSocket connections
6. **Electron-Specific**
- Insecure IPC communication
- Missing context isolation checks
- Insecure preload scripts
- Missing CSP headers
Provide a detailed report with:
- Vulnerability type and severity (CRITICAL, HIGH, MEDIUM, LOW)
- File paths and line numbers
- Attack vector description
- Recommended fix approach
```
#### Agent 3: Code Quality & Clean Code
**Focus:** Dirty code, code smells, and quality issues
**Instructions for Agent 3:**
```
Analyze the git diff for code quality issues:
1. **Code Smells**
- Long functions/methods (>50 lines)
- High cyclomatic complexity
- Duplicate code
- Dead code
- Magic numbers/strings
2. **Best Practices**
- Missing error handling
- Inconsistent naming conventions
- Poor separation of concerns
- Tight coupling
- Missing comments for complex logic
3. **Performance Issues**
- Inefficient algorithms
- Memory leaks (event listeners, subscriptions)
- Unnecessary re-renders in React
- Missing memoization where needed
- Inefficient database queries (if applicable)
4. **Maintainability**
- Hard-coded values
- Missing type definitions
- Inconsistent code style
- Poor file organization
- Missing tests for new code
5. **React-Specific**
- Missing key props in lists
- Direct state mutations
- Missing cleanup in useEffect
- Unnecessary useState/useEffect
- Prop drilling issues
Provide a detailed report with:
- Issue type and severity
- File paths and line numbers
- Description of the problem
- Impact on maintainability/performance
- Recommended refactoring approach
```
#### Agent 4: Implementation Correctness
**Focus:** Verify code implements requirements correctly
**Instructions for Agent 4:**
```
Analyze the git diff for implementation correctness:
1. **Logic Errors**
- Incorrect conditional logic
- Wrong variable usage
- Off-by-one errors
- Race conditions
- Missing null/undefined checks
2. **Functional Requirements**
- Missing features from requirements
- Incorrect feature implementation
- Edge cases not handled
- Missing validation
3. **Integration Issues**
- Incorrect API usage
- Wrong data format handling
- Missing error handling for external calls
- Incorrect state management
4. **Type Errors**
- Type mismatches
- Missing type guards
- Incorrect type assertions
- Unsafe type operations
5. **Testing Gaps**
- Missing unit tests
- Missing integration tests
- Tests don't cover edge cases
- Tests are incorrect
Provide a detailed report with:
- Issue description
- File paths and line numbers
- Expected vs actual behavior
- Steps to reproduce (if applicable)
- Recommended fix
```
#### Agent 5: Architecture & Design Patterns
**Focus:** Architectural issues and design pattern violations
**Instructions for Agent 5:**
```
Analyze the git diff for architectural and design issues:
1. **Architecture Violations**
- Violation of project structure patterns
- Incorrect layer separation
- Missing abstractions
- Tight coupling between modules
2. **Design Patterns**
- Incorrect pattern usage
- Missing patterns where needed
- Anti-patterns
3. **Project-Specific Patterns**
- Check against project documentation (docs/ folder)
- Verify route organization (server routes)
- Check provider patterns (server providers)
- Verify component organization (UI components)
4. **API Design**
- RESTful API violations
- Inconsistent response formats
- Missing error handling
- Incorrect status codes
5. **State Management**
- Incorrect state management patterns
- Missing state normalization
- Inefficient state updates
Provide a detailed report with:
- Architectural issue description
- File paths and affected areas
- Impact on system design
- Recommended architectural changes
```
### Phase 3: Consolidate Findings
After all 5 deep dive agents complete their analysis:
1. **Collect all findings** from each agent
2. **Prioritize issues**:
- CRITICAL: Tech stack invalid code (build-breaking)
- HIGH: Security vulnerabilities, critical logic errors
- MEDIUM: Code quality issues, architectural problems
- LOW: Minor code smells, style issues
3. **Group by file** to understand impact per file
4. **Create a master report** summarizing all findings
### Phase 4: Deepcode Fixes (5 Agents)
Launch 5 deepcode agents to fix the issues found. Each agent should be invoked with the `@deepcode` agent.
#### Deepcode Agent 1: Fix Tech Stack Invalid Code
**Priority:** CRITICAL - Fix first
**Instructions:**
```
Fix all invalid code based on tech stack issues identified by Agent 1.
Focus on:
1. Fixing TypeScript syntax errors
2. Updating deprecated Node.js APIs
3. Fixing React 19 compatibility issues
4. Correcting Express 5 API usage
5. Fixing type errors
6. Resolving build-breaking issues
After fixes, verify:
- Code compiles without errors
- TypeScript types are correct
- No deprecated API usage
```
#### Deepcode Agent 2: Fix Security Vulnerabilities
**Priority:** HIGH
**Instructions:**
```
Fix all security vulnerabilities identified by Agent 2.
Focus on:
1. Adding input validation
2. Fixing injection vulnerabilities
3. Securing authentication/authorization
4. Fixing insecure data handling
5. Updating vulnerable dependencies
6. Securing Electron IPC
After fixes, verify:
- Security vulnerabilities are addressed
- No sensitive data exposure
- Proper authentication/authorization
```
#### Deepcode Agent 3: Refactor Dirty Code
**Priority:** MEDIUM
**Instructions:**
```
Refactor code quality issues identified by Agent 3.
Focus on:
1. Extracting long functions
2. Reducing complexity
3. Removing duplicate code
4. Adding error handling
5. Improving React component structure
6. Adding missing comments
After fixes, verify:
- Code follows best practices
- No code smells remain
- Performance optimizations applied
```
#### Deepcode Agent 4: Fix Implementation Errors
**Priority:** HIGH
**Instructions:**
```
Fix implementation correctness issues identified by Agent 4.
Focus on:
1. Fixing logic errors
2. Adding missing features
3. Handling edge cases
4. Fixing type errors
5. Adding missing tests
After fixes, verify:
- Logic is correct
- Edge cases handled
- Tests pass
```
#### Deepcode Agent 5: Fix Architectural Issues
**Priority:** MEDIUM
**Instructions:**
```
Fix architectural issues identified by Agent 5.
Focus on:
1. Correcting architecture violations
2. Applying proper design patterns
3. Fixing API design issues
4. Improving state management
5. Following project patterns
After fixes, verify:
- Architecture is sound
- Patterns are correctly applied
- Code follows project structure
```
### Phase 5: Verification
After all fixes are complete:
1. **Run TypeScript compilation check**
```bash
npm run build:packages
```
2. **Run linting**
```bash
npm run lint
```
3. **Run tests** (if applicable)
```bash
npm run test:server
npm run test
```
4. **Verify git diff** shows only intended changes
```bash
git diff HEAD
```
5. **Create summary report**:
- Issues found by each agent
- Issues fixed by each agent
- Remaining issues (if any)
- Verification results
## Workflow Summary
1. ✅ Accept optional target branch argument (defaults to main/master if not provided)
2. ✅ Determine current branch and target branch (from argument or auto-detect main/master)
3. ✅ Get git diff comparing current branch against target branch (`git diff $TARGET_REF...HEAD`)
4. ✅ Include uncommitted changes in analysis (`git diff HEAD`, `git diff --cached`)
5. ✅ Launch 5 deep dive agents (parallel analysis) with branch diff
6. ✅ Consolidate findings and prioritize
7. ✅ Launch 5 deepcode agents (sequential fixes, priority order)
8. ✅ Verify fixes with build/lint/test
9. ✅ Report summary
## Notes
- **Tech stack validation is HIGHEST PRIORITY** - invalid code must be fixed first
- **Target branch argument**: The command accepts an optional target branch name as the first argument. If not provided, it automatically detects and uses `main` or `master` (in that order)
- Each deep dive agent should work independently and provide comprehensive analysis
- Deepcode agents should fix issues in priority order
- All fixes should maintain existing functionality
- If an agent finds no issues in their domain, they should report "No issues found"
- If fixes introduce new issues, they should be caught in verification phase
- The target branch is validated to ensure it exists (locally or remotely) before proceeding with the review


@@ -34,10 +34,31 @@ This command accepts a version bump type as input:
- Injects the version into the app via Vite's `__APP_VERSION__` constant
- Displays the version below the logo in the sidebar
4. **Commit the version bump**
- Stage the updated package.json files:
```bash
git add apps/ui/package.json apps/server/package.json
```
- Commit with a release message:
```bash
git commit -m "chore: release v<version>"
```
5. **Create and push the git tag**
- Create an annotated tag for the release:
```bash
git tag -a v<version> -m "Release v<version>"
```
- Push the commit and tag to remote:
```bash
git push && git push --tags
```
6. **Verify the release**
- Check that the build completed successfully
- Confirm the version appears correctly in the built artifacts
- The version will be displayed in the app UI below the logo
- Verify the tag is visible on the remote repository
## Version Centralization

484
.claude/commands/review.md Normal file

@@ -0,0 +1,484 @@
# Code Review Command
Comprehensive code review using multiple deep dive agents to analyze git diff for correctness, security, code quality, and tech stack compliance, followed by automated fixes using deepcode agents.
## Usage
This command analyzes all changes in the git diff and verifies:
1. **Invalid code based on tech stack** (HIGHEST PRIORITY)
2. Security vulnerabilities
3. Code quality issues (dirty code)
4. Implementation correctness
Then automatically fixes any issues found.
## Instructions
### Phase 1: Get Git Diff
1. **Get the current git diff**
```bash
git diff HEAD
```
If you need staged changes instead:
```bash
git diff --cached
```
Or for a specific commit range:
```bash
git diff <base-branch>
```
2. **Get list of changed files**
```bash
git diff --name-only HEAD
```
3. **Understand the tech stack** (for validation):
- **Node.js**: >=22.0.0 <23.0.0
- **TypeScript**: 5.9.3
- **React**: 19.2.3
- **Express**: 5.2.1
- **Electron**: 39.2.7
- **Vite**: 7.3.0
- **Vitest**: 4.0.16
- Check `package.json` files for exact versions
### Phase 2: Deep Dive Analysis (5 Agents)
Launch 5 separate deep dive agents, each with a specific focus area. Each agent should be invoked with the `@deepdive` agent and given the git diff along with their specific instructions.
#### Agent 1: Tech Stack Validation (HIGHEST PRIORITY)
**Focus:** Verify code is valid for the tech stack
**Instructions for Agent 1:**
```
Analyze the git diff for invalid code based on the tech stack:
1. **TypeScript/JavaScript Syntax**
- Check for valid TypeScript syntax (no invalid type annotations, correct import/export syntax)
- Verify Node.js API usage is compatible with Node.js >=22.0.0 <23.0.0
- Check for deprecated APIs or features not available in the Node.js version
- Verify ES module syntax (type: "module" in package.json)
2. **React 19.2.3 Compatibility**
- Check for deprecated React APIs or patterns
- Verify hooks usage is correct for React 19
- Check for invalid JSX syntax
- Verify component patterns match React 19 conventions
3. **Express 5.2.1 Compatibility**
- Check for deprecated Express APIs
- Verify middleware usage is correct for Express 5
- Check request/response handling patterns
4. **Type Safety**
- Verify TypeScript types are correctly used
- Check for `any` types that should be properly typed
- Verify type imports/exports are correct
- Check for missing type definitions
5. **Build System Compatibility**
- Verify Vite-specific code (imports, config) is valid
- Check Electron-specific APIs are used correctly
- Verify module resolution paths are correct
6. **Package Dependencies**
- Check for imports from packages not in package.json
- Verify version compatibility between dependencies
- Check for circular dependencies
Provide a detailed report with:
- File paths and line numbers of invalid code
- Specific error description (what's wrong and why)
- Expected vs actual behavior
- Priority level (CRITICAL for build-breaking issues)
```
#### Agent 2: Security Vulnerability Scanner
**Focus:** Security issues and vulnerabilities
**Instructions for Agent 2:**
```
Analyze the git diff for security vulnerabilities:
1. **Injection Vulnerabilities**
- SQL injection (if applicable)
- Command injection (exec, spawn, etc.)
- Path traversal vulnerabilities
- XSS vulnerabilities in React components
2. **Authentication & Authorization**
- Missing authentication checks
- Insecure token handling
- Authorization bypasses
- Session management issues
3. **Data Handling**
- Unsafe deserialization
- Insecure file operations
- Missing input validation
- Sensitive data exposure (secrets, tokens, passwords)
4. **Dependencies**
- Known vulnerable packages
- Insecure dependency versions
- Missing security patches
5. **API Security**
- Missing CORS configuration
- Insecure API endpoints
- Missing rate limiting
- Insecure WebSocket connections
6. **Electron-Specific**
- Insecure IPC communication
- Missing context isolation checks
- Insecure preload scripts
- Missing CSP headers
Provide a detailed report with:
- Vulnerability type and severity (CRITICAL, HIGH, MEDIUM, LOW)
- File paths and line numbers
- Attack vector description
- Recommended fix approach
```
#### Agent 3: Code Quality & Clean Code
**Focus:** Dirty code, code smells, and quality issues
**Instructions for Agent 3:**
```
Analyze the git diff for code quality issues:
1. **Code Smells**
- Long functions/methods (>50 lines)
- High cyclomatic complexity
- Duplicate code
- Dead code
- Magic numbers/strings
2. **Best Practices**
- Missing error handling
- Inconsistent naming conventions
- Poor separation of concerns
- Tight coupling
- Missing comments for complex logic
3. **Performance Issues**
- Inefficient algorithms
- Memory leaks (event listeners, subscriptions)
- Unnecessary re-renders in React
- Missing memoization where needed
- Inefficient database queries (if applicable)
4. **Maintainability**
- Hard-coded values
- Missing type definitions
- Inconsistent code style
- Poor file organization
- Missing tests for new code
5. **React-Specific**
- Missing key props in lists
- Direct state mutations
- Missing cleanup in useEffect
- Unnecessary useState/useEffect
- Prop drilling issues
Provide a detailed report with:
- Issue type and severity
- File paths and line numbers
- Description of the problem
- Impact on maintainability/performance
- Recommended refactoring approach
```
#### Agent 4: Implementation Correctness
**Focus:** Verify code implements requirements correctly
**Instructions for Agent 4:**
```
Analyze the git diff for implementation correctness:
1. **Logic Errors**
- Incorrect conditional logic
- Wrong variable usage
- Off-by-one errors
- Race conditions
- Missing null/undefined checks
2. **Functional Requirements**
- Missing features from requirements
- Incorrect feature implementation
- Edge cases not handled
- Missing validation
3. **Integration Issues**
- Incorrect API usage
- Wrong data format handling
- Missing error handling for external calls
- Incorrect state management
4. **Type Errors**
- Type mismatches
- Missing type guards
- Incorrect type assertions
- Unsafe type operations
5. **Testing Gaps**
- Missing unit tests
- Missing integration tests
- Tests don't cover edge cases
- Tests are incorrect
Provide a detailed report with:
- Issue description
- File paths and line numbers
- Expected vs actual behavior
- Steps to reproduce (if applicable)
- Recommended fix
```
#### Agent 5: Architecture & Design Patterns
**Focus:** Architectural issues and design pattern violations
**Instructions for Agent 5:**
```
Analyze the git diff for architectural and design issues:
1. **Architecture Violations**
- Violation of project structure patterns
- Incorrect layer separation
- Missing abstractions
- Tight coupling between modules
2. **Design Patterns**
- Incorrect pattern usage
- Missing patterns where needed
- Anti-patterns
3. **Project-Specific Patterns**
- Check against project documentation (docs/ folder)
- Verify route organization (server routes)
- Check provider patterns (server providers)
- Verify component organization (UI components)
4. **API Design**
- RESTful API violations
- Inconsistent response formats
- Missing error handling
- Incorrect status codes
5. **State Management**
- Incorrect state management patterns
- Missing state normalization
- Inefficient state updates
Provide a detailed report with:
- Architectural issue description
- File paths and affected areas
- Impact on system design
- Recommended architectural changes
```
### Phase 3: Consolidate Findings
After all 5 deep dive agents complete their analysis:
1. **Collect all findings** from each agent
2. **Prioritize issues**:
- CRITICAL: Tech stack invalid code (build-breaking)
- HIGH: Security vulnerabilities, critical logic errors
- MEDIUM: Code quality issues, architectural problems
- LOW: Minor code smells, style issues
3. **Group by file** to understand impact per file
4. **Create a master report** summarizing all findings
### Phase 4: Deepcode Fixes (5 Agents)
Launch 5 deepcode agents to fix the issues found. Each agent should be invoked with the `@deepcode` agent.
#### Deepcode Agent 1: Fix Tech Stack Invalid Code
**Priority:** CRITICAL - Fix first
**Instructions:**
```
Fix all invalid code based on tech stack issues identified by Agent 1.
Focus on:
1. Fixing TypeScript syntax errors
2. Updating deprecated Node.js APIs
3. Fixing React 19 compatibility issues
4. Correcting Express 5 API usage
5. Fixing type errors
6. Resolving build-breaking issues
After fixes, verify:
- Code compiles without errors
- TypeScript types are correct
- No deprecated API usage
```
#### Deepcode Agent 2: Fix Security Vulnerabilities
**Priority:** HIGH
**Instructions:**
```
Fix all security vulnerabilities identified by Agent 2.
Focus on:
1. Adding input validation
2. Fixing injection vulnerabilities
3. Securing authentication/authorization
4. Fixing insecure data handling
5. Updating vulnerable dependencies
6. Securing Electron IPC
After fixes, verify:
- Security vulnerabilities are addressed
- No sensitive data exposure
- Proper authentication/authorization
```
#### Deepcode Agent 3: Refactor Dirty Code
**Priority:** MEDIUM
**Instructions:**
```
Refactor code quality issues identified by Agent 3.
Focus on:
1. Extracting long functions
2. Reducing complexity
3. Removing duplicate code
4. Adding error handling
5. Improving React component structure
6. Adding missing comments
After fixes, verify:
- Code follows best practices
- No code smells remain
- Performance optimizations applied
```
#### Deepcode Agent 4: Fix Implementation Errors
**Priority:** HIGH
**Instructions:**
```
Fix implementation correctness issues identified by Agent 4.
Focus on:
1. Fixing logic errors
2. Adding missing features
3. Handling edge cases
4. Fixing type errors
5. Adding missing tests
After fixes, verify:
- Logic is correct
- Edge cases handled
- Tests pass
```
#### Deepcode Agent 5: Fix Architectural Issues
**Priority:** MEDIUM
**Instructions:**
```
Fix architectural issues identified by Agent 5.
Focus on:
1. Correcting architecture violations
2. Applying proper design patterns
3. Fixing API design issues
4. Improving state management
5. Following project patterns
After fixes, verify:
- Architecture is sound
- Patterns are correctly applied
- Code follows project structure
```
### Phase 5: Verification
After all fixes are complete:
1. **Run TypeScript compilation check**
```bash
npm run build:packages
```
2. **Run linting**
```bash
npm run lint
```
3. **Run tests** (if applicable)
```bash
npm run test:server
npm run test
```
4. **Verify git diff** shows only intended changes
```bash
git diff HEAD
```
5. **Create summary report**:
- Issues found by each agent
- Issues fixed by each agent
- Remaining issues (if any)
- Verification results
## Workflow Summary
1. ✅ Get git diff
2. ✅ Launch 5 deep dive agents (parallel analysis)
3. ✅ Consolidate findings and prioritize
4. ✅ Launch 5 deepcode agents (sequential fixes, priority order)
5. ✅ Verify fixes with build/lint/test
6. ✅ Report summary
## Notes
- **Tech stack validation is HIGHEST PRIORITY** - invalid code must be fixed first
- Each deep dive agent should work independently and provide comprehensive analysis
- Deepcode agents should fix issues in priority order
- All fixes should maintain existing functionality
- If an agent finds no issues in their domain, they should report "No issues found"
- If fixes introduce new issues, they should be caught in verification phase


@@ -0,0 +1,45 @@
When you think you are done, you are NOT done.
You must run a mandatory 3-pass verification before concluding:
## Pass 1: Correctness & Functionality
- [ ] Verify logic matches requirements and specifications
- [ ] Check type safety (TypeScript types are correct and complete)
- [ ] Ensure imports are correct and follow project conventions
- [ ] Verify all functions/classes work as intended
- [ ] Check that return values and side effects are correct
- [ ] Run relevant tests if they exist, or verify testability
- [ ] Confirm integration with existing code works properly
## Pass 2: Edge Cases & Safety
- [ ] Handle null/undefined inputs gracefully
- [ ] Validate all user inputs and external data
- [ ] Check error handling (try/catch, error boundaries, etc.)
- [ ] Verify security considerations (no sensitive data exposure, proper auth checks)
- [ ] Test boundary conditions (empty arrays, zero values, max lengths, etc.)
- [ ] Ensure resource cleanup (file handles, connections, timers)
- [ ] Check for potential race conditions or async issues
- [ ] Verify file path security (no directory traversal vulnerabilities)
## Pass 3: Maintainability & Code Quality
- [ ] Code follows project style guide and conventions
- [ ] Functions/classes are single-purpose and well-named
- [ ] Remove dead code, unused imports, and console.logs
- [ ] Extract magic numbers/strings into named constants
- [ ] Check for code duplication (DRY principle)
- [ ] Verify appropriate abstraction levels (not over/under-engineered)
- [ ] Add necessary comments for complex logic
- [ ] Ensure consistent error messages and logging
- [ ] Check that code is readable and self-documenting
- [ ] Verify proper separation of concerns
**For each pass, explicitly report:**
- What you checked
- Any issues found and how they were fixed
- Any remaining concerns or trade-offs
Only after completing all three passes with explicit findings may you conclude the work is done.

19
.dockerignore Normal file

@@ -0,0 +1,19 @@
# Dependencies
node_modules/
**/node_modules/
# Build outputs
dist/
**/dist/
dist-electron/
**/dist-electron/
build/
**/build/
.next/
**/.next/
.nuxt/
**/.nuxt/
out/
**/out/
.cache/
**/.cache/


@@ -0,0 +1,108 @@
name: Feature Request
description: Suggest a new feature or enhancement for Automaker
title: '[Feature]: '
labels: ['enhancement']
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to suggest a feature! Please fill out the form below to help us understand your request.
- type: dropdown
id: feature-area
attributes:
label: Feature Area
description: Which area of Automaker does this feature relate to?
options:
- UI/UX (User Interface)
- Agent/AI
- Kanban Board
- Git/Worktree Management
- Project Management
- Settings/Configuration
- Documentation
- Performance
- Other
default: 0
validations:
required: true
- type: dropdown
id: priority
attributes:
label: Priority
description: How important is this feature to your workflow?
options:
- Nice to have
- Would improve my workflow
- Critical for my use case
default: 0
validations:
required: true
- type: textarea
id: problem-statement
attributes:
label: Problem Statement
description: Is your feature request related to a problem? Please describe the problem you're trying to solve.
placeholder: A clear and concise description of what the problem is. Ex. I'm always frustrated when...
validations:
required: true
- type: textarea
id: proposed-solution
attributes:
label: Proposed Solution
description: Describe the solution you'd like to see implemented.
placeholder: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives-considered
attributes:
label: Alternatives Considered
description: Describe any alternative solutions or workarounds you've considered.
placeholder: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: use-cases
attributes:
label: Use Cases
description: Describe specific scenarios where this feature would be useful.
placeholder: |
1. When working on...
2. As a user who needs to...
3. In situations where...
validations:
required: false
- type: textarea
id: mockups
attributes:
label: Mockups/Screenshots
description: If applicable, add mockups, wireframes, or screenshots to help illustrate your feature request.
placeholder: Drag and drop images here or paste image URLs
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Add any other context, references, or examples about the feature request here.
placeholder: Any additional information that might be helpful...
validations:
required: false
- type: checkboxes
id: terms
attributes:
label: Checklist
options:
- label: I have searched existing issues to ensure this feature hasn't been requested already
required: true
- label: I have provided a clear description of the problem and proposed solution
required: true

View File

@@ -41,7 +41,8 @@ runs:
# Use npm install instead of npm ci to correctly resolve platform-specific
# optional dependencies (e.g., @tailwindcss/oxide, lightningcss binaries)
# Skip scripts to avoid electron-builder install-app-deps which uses too much memory # Skip scripts to avoid electron-builder install-app-deps which uses too much memory
run: npm install --ignore-scripts # Use --force to allow platform-specific dev dependencies like dmg-license on non-darwin platforms
run: npm install --ignore-scripts --force
- name: Install Linux native bindings
shell: bash

View File

@@ -31,24 +31,99 @@ jobs:
- name: Build server
run: npm run build --workspace=apps/server
- name: Set up Git user
run: |
git config --global user.name "GitHub CI"
git config --global user.email "ci@example.com"
- name: Start backend server
run: npm run start --workspace=apps/server & run: |
echo "Starting backend server..."
# Start server in background and save PID
npm run start --workspace=apps/server > backend.log 2>&1 &
SERVER_PID=$!
echo "Server started with PID: $SERVER_PID"
echo "SERVER_PID=$SERVER_PID" >> $GITHUB_ENV
env:
PORT: 3008
NODE_ENV: test
# Use a deterministic API key so Playwright can log in reliably
AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
# Reduce log noise in CI
AUTOMAKER_HIDE_API_KEY: 'true'
# Avoid real API calls during CI
AUTOMAKER_MOCK_AGENT: 'true'
# Simulate containerized environment to skip sandbox confirmation dialogs
IS_CONTAINERIZED: 'true'
- name: Wait for backend server
run: |
echo "Waiting for backend server to be ready..."
for i in {1..30}; do
if curl -s http://localhost:3008/api/health > /dev/null 2>&1; then # Check if server process is running
if [ -z "$SERVER_PID" ]; then
echo "ERROR: Server PID not found in environment"
cat backend.log 2>/dev/null || echo "No backend log found"
exit 1
fi
# Check if process is actually running
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "ERROR: Server process $SERVER_PID is not running!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "=== Recent system logs ==="
dmesg 2>/dev/null | tail -20 || echo "No dmesg available"
exit 1
fi
# Wait for health endpoint
for i in {1..60}; do
if curl -s -f http://localhost:3008/api/health > /dev/null 2>&1; then
echo "Backend server is ready!" echo "Backend server is ready!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "Health check response:"
curl -s http://localhost:3008/api/health | jq . 2>/dev/null || echo "Health check: $(curl -s http://localhost:3008/api/health 2>/dev/null || echo 'No response')"
exit 0
fi
echo "Waiting... ($i/30)"
# Check if server process is still running
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "ERROR: Server process died during wait!"
echo "=== Backend logs ==="
cat backend.log
exit 1
fi
echo "Waiting... ($i/60)"
sleep 1
done
echo "Backend server failed to start!"
echo "ERROR: Backend server failed to start within 60 seconds!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "=== Process status ==="
ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
echo ""
echo "=== Port status ==="
netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
lsof -i :3008 2>/dev/null || echo "lsof not available or port not in use"
echo ""
echo "=== Health endpoint test ==="
curl -v http://localhost:3008/api/health 2>&1 || echo "Health endpoint failed"
# Kill the server process if it's still hanging
if kill -0 $SERVER_PID 2>/dev/null; then
echo ""
echo "Killing stuck server process..."
kill -9 $SERVER_PID 2>/dev/null || true
fi
exit 1
- name: Run E2E tests
@@ -59,6 +134,20 @@ jobs:
CI: true
VITE_SERVER_URL: http://localhost:3008
VITE_SKIP_SETUP: 'true'
# Keep UI-side login/defaults consistent
AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
- name: Print backend logs on failure
if: failure()
run: |
echo "=== E2E Tests Failed - Backend Logs ==="
cat backend.log 2>/dev/null || echo "No backend log found"
echo ""
echo "=== Process status at failure ==="
ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
echo ""
echo "=== Port status ==="
netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
- name: Upload Playwright report
uses: actions/upload-artifact@v4
@@ -68,10 +157,22 @@ jobs:
path: apps/ui/playwright-report/
retention-days: 7
- name: Upload test results - name: Upload test results (screenshots, traces, videos)
uses: actions/upload-artifact@v4
if: failure() if: always()
with:
name: test-results
path: apps/ui/test-results/ path: |
apps/ui/test-results/
retention-days: 7
if-no-files-found: ignore
- name: Cleanup - Kill backend server
if: always()
run: |
if [ -n "$SERVER_PID" ]; then
echo "Cleaning up backend server (PID: $SERVER_PID)..."
kill $SERVER_PID 2>/dev/null || true
kill -9 $SERVER_PID 2>/dev/null || true
echo "Backend server cleanup complete"
fi

View File

@@ -25,7 +25,7 @@ jobs:
cache-dependency-path: package-lock.json
- name: Install dependencies
run: npm install --ignore-scripts run: npm install --ignore-scripts --force
- name: Check formatting
run: npm run format:check

View File

@@ -35,6 +35,11 @@ jobs:
with:
check-lockfile: 'true'
- name: Install RPM build tools (Linux)
if: matrix.os == 'ubuntu-latest'
shell: bash
run: sudo apt-get update && sudo apt-get install -y rpm
- name: Build Electron app (macOS)
if: matrix.os == 'macos-latest'
shell: bash
@@ -73,7 +78,7 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: linux-builds
path: apps/ui/release/*.{AppImage,deb} path: apps/ui/release/*.{AppImage,deb,rpm}
retention-days: 30
upload:
@@ -104,8 +109,8 @@ jobs:
uses: softprops/action-gh-release@v2
with:
files: |
artifacts/macos-builds/* artifacts/macos-builds/*.{dmg,zip,blockmap}
artifacts/windows-builds/* artifacts/windows-builds/*.{exe,blockmap}
artifacts/linux-builds/* artifacts/linux-builds/*.{AppImage,deb,rpm,blockmap}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -26,5 +26,5 @@ jobs:
check-lockfile: 'true'
- name: Run npm audit
run: npm audit --audit-level=moderate run: npm audit --audit-level=critical
continue-on-error: false

13
.gitignore vendored
View File

@@ -73,6 +73,9 @@ blob-report/
!.env.example
!.env.local.example
# Codex config (contains API keys)
.codex/config.toml
# TypeScript
*.tsbuildinfo
@@ -81,6 +84,14 @@ blob-report/
docker-compose.override.yml
.claude/docker-compose.override.yml
.claude/hans/
pnpm-lock.yaml
yarn.lock
# Fork-specific workflow files (should never be committed)
# API key files
data/.api-key
data/credentials.json
data/
.codex/

View File

@@ -1 +1,51 @@
npx lint-staged #!/usr/bin/env sh
# Try to load nvm if available (optional - works without it too)
if [ -z "$NVM_DIR" ]; then
# Check for Herd's nvm first (macOS with Herd)
if [ -s "$HOME/Library/Application Support/Herd/config/nvm/nvm.sh" ]; then
export NVM_DIR="$HOME/Library/Application Support/Herd/config/nvm"
# Then check standard nvm location
elif [ -s "$HOME/.nvm/nvm.sh" ]; then
export NVM_DIR="$HOME/.nvm"
fi
fi
# Source nvm if found (silently skip if not available)
[ -n "$NVM_DIR" ] && [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" 2>/dev/null
# Load node version from .nvmrc if using nvm (silently skip if nvm not available or fails)
if [ -f .nvmrc ] && command -v nvm >/dev/null 2>&1; then
# Check if Unix nvm was sourced (it's a shell function with NVM_DIR set)
if [ -n "$NVM_DIR" ] && type nvm 2>/dev/null | grep -q "function"; then
# Unix nvm: reads .nvmrc automatically
nvm use >/dev/null 2>&1 || true
else
# nvm-windows: needs explicit version from .nvmrc
NODE_VERSION=$(cat .nvmrc | tr -d '[:space:]')
if [ -n "$NODE_VERSION" ]; then
nvm use "$NODE_VERSION" >/dev/null 2>&1 || true
fi
fi
fi
# Ensure common system paths are in PATH (for systems without nvm)
# This helps find node/npm installed via Homebrew, system packages, etc.
if [ -n "$WINDIR" ]; then
export PATH="$PATH:/c/Program Files/nodejs:/c/Program Files (x86)/nodejs"
export PATH="$PATH:$APPDATA/npm:$LOCALAPPDATA/Programs/nodejs"
else
export PATH="$PATH:/usr/local/bin:/opt/homebrew/bin:/usr/bin"
fi
# Run lint-staged - works with or without nvm
# Prefer npx, fallback to npm exec, both work with system-installed Node.js
if command -v npx >/dev/null 2>&1; then
npx lint-staged
elif command -v npm >/dev/null 2>&1; then
npm exec -- lint-staged
else
echo "Error: Neither npx nor npm found in PATH."
echo "Please ensure Node.js is installed (via nvm, Homebrew, system package manager, etc.)"
exit 1
fi

View File

@@ -23,6 +23,8 @@ pnpm-lock.yaml
# Generated files
*.min.js
*.min.css
routeTree.gen.ts
apps/ui/src/routeTree.gen.ts
# Test artifacts
test-results/

View File

@@ -166,7 +166,10 @@ Use `resolveModelString()` from `@automaker/model-resolver` to convert model ali
## Environment Variables
- `ANTHROPIC_API_KEY` - Anthropic API key (or use Claude Code CLI auth)
- `HOST` - Host to bind server to (default: 0.0.0.0)
- `HOSTNAME` - Hostname for user-facing URLs (default: localhost)
- `PORT` - Server port (default: 3008)
- `DATA_DIR` - Data storage directory (default: ./data)
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
- `AUTOMAKER_MOCK_AGENT=true` - Enable mock agent mode for CI testing
- `VITE_HOSTNAME` - Hostname for frontend API URLs (default: localhost)

View File

@@ -24,6 +24,7 @@ For complete details on contribution terms and rights assignment, please review
- [Development Setup](#development-setup)
- [Project Structure](#project-structure)
- [Pull Request Process](#pull-request-process)
- [Branching Strategy (RC Branches)](#branching-strategy-rc-branches)
- [Branch Naming Convention](#branch-naming-convention)
- [Commit Message Format](#commit-message-format)
- [Submitting a Pull Request](#submitting-a-pull-request)
@@ -186,6 +187,59 @@ automaker/
This section covers everything you need to know about contributing changes through pull requests, from creating your branch to getting your code merged.
### Branching Strategy (RC Branches)
Automaker uses **Release Candidate (RC) branches** for all development work. Understanding this workflow is essential before contributing.
**How it works:**
1. **All development happens on RC branches** - We maintain version-specific RC branches (e.g., `v0.10.0rc`, `v0.11.0rc`) where all active development occurs
2. **RC branches are eventually merged to main** - Once an RC branch is stable and ready for release, it gets merged into `main`
3. **Main branch is for releases only** - The `main` branch contains only released, stable code
**Before creating a PR:**
1. **Check for the latest RC branch** - Before starting work, check the repository for the current RC branch:
```bash
git fetch upstream
git branch -r | grep rc
```
2. **Base your work on the RC branch** - Create your feature branch from the latest RC branch, not from `main`:
```bash
# Find the latest RC branch (e.g., v0.11.0rc)
git checkout upstream/v0.11.0rc
git checkout -b feature/your-feature-name
```
3. **Target the RC branch in your PR** - When opening your pull request, set the base branch to the current RC branch, not `main`
**Example workflow:**
```bash
# 1. Fetch latest changes
git fetch upstream
# 2. Check for RC branches
git branch -r | grep rc
# Output: upstream/v0.11.0rc
# 3. Create your branch from the RC
git checkout -b feature/add-dark-mode upstream/v0.11.0rc
# 4. Make your changes and commit
git commit -m "feat: Add dark mode support"
# 5. Push to your fork
git push origin feature/add-dark-mode
# 6. Open PR targeting the RC branch (v0.11.0rc), NOT main
```
**Important:** PRs opened directly against `main` will be asked to retarget to the current RC branch.
### Branch Naming Convention
We use a consistent branch naming pattern to keep our repository organized:
@@ -275,14 +329,14 @@ Follow these steps to submit your contribution:
#### 1. Prepare Your Changes
Ensure you've synced with the latest upstream changes: Ensure you've synced with the latest upstream changes from the RC branch:
```bash
# Fetch latest changes from upstream
git fetch upstream
# Rebase your branch on main (if needed) # Rebase your branch on the current RC branch (if needed)
git rebase upstream/main git rebase upstream/v0.11.0rc # Use the current RC branch name
```
#### 2. Run Pre-submission Checks
@@ -314,18 +368,19 @@ git push origin feature/your-feature-name
1. Go to your fork on GitHub
2. Click "Compare & pull request" for your branch
3. Ensure the base repository is `AutoMaker-Org/automaker` and base branch is `main` 3. **Important:** Set the base repository to `AutoMaker-Org/automaker` and the base branch to the **current RC branch** (e.g., `v0.11.0rc`), not `main`
4. Fill out the PR template completely
#### PR Requirements Checklist
Your PR should include:
- [ ] **Targets the current RC branch** (not `main`) - see [Branching Strategy](#branching-strategy-rc-branches)
- [ ] **Clear title** describing the change (use conventional commit format)
- [ ] **Description** explaining what changed and why
- [ ] **Link to related issue** (if applicable): `Closes #123` or `Fixes #456`
- [ ] **All CI checks passing** (format, lint, build, tests)
- [ ] **No merge conflicts** with main branch - [ ] **No merge conflicts** with the RC branch
- [ ] **Tests included** for new functionality
- [ ] **Documentation updated** if adding/changing public APIs

253
DEVELOPMENT_WORKFLOW.md Normal file
View File

@@ -0,0 +1,253 @@
# Development Workflow
This document defines the standard workflow for keeping a branch in sync with the upstream
release candidate (RC) and for shipping feature work. It is paired with `check-sync.sh`.
## Quick Decision Rule
1. Ask the user to select a workflow:
- **Sync Workflow** → you are maintaining the current RC branch with fixes/improvements
and will push the same fixes to both origin and upstream RC when you have local
commits to publish.
- **PR Workflow** → you are starting new feature work on a new branch; upstream updates
happen via PR only.
2. After the user selects, run:
```bash
./check-sync.sh
```
3. Use the status output to confirm alignment. If it reports **diverged**, default to
merging `upstream/<TARGET_RC>` into the current branch and preserving local commits.
For Sync Workflow, when the working tree is clean and you are behind upstream RC,
proceed with the fetch + merge without asking for additional confirmation.
## Target RC Resolution
The target RC is resolved dynamically so the workflow stays current as the RC changes.
Resolution order:
1. Latest `upstream/v*rc` branch (auto-detected)
2. `upstream/HEAD` (fallback)
3. If neither is available, you must pass `--rc <branch>`
Override for a single run:
```bash
./check-sync.sh --rc <rc-branch>
```
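For reference, the latest `upstream/v*rc` branch can be detected with plain git. This is only an illustrative sketch of the resolution order above, not the actual `check-sync.sh` implementation:
```bash
# Pick the highest-versioned upstream v*rc branch, falling back to upstream/HEAD.
git fetch upstream --quiet
TARGET_RC=$(git branch -r --list 'upstream/v*rc' | sed 's|.*upstream/||' | sort -V | tail -n 1)
if [ -z "$TARGET_RC" ]; then
  TARGET_RC=$(git symbolic-ref --short refs/remotes/upstream/HEAD 2>/dev/null | sed 's|^upstream/||')
fi
echo "Target RC: ${TARGET_RC:-<none, pass --rc explicitly>}"
```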
## Pre-Flight Checklist
1. Confirm a clean working tree:
```bash
git status
```
2. Confirm the current branch:
```bash
git branch --show-current
```
3. Ensure remotes exist (origin + upstream):
```bash
git remote -v
```
## Sync Workflow (Upstream Sync)
Use this flow when you are updating the current branch with fixes or improvements and
intend to keep origin and upstream RC in lockstep.
1. **Check sync status**
```bash
./check-sync.sh
```
2. **Update from upstream RC before editing (no pulls)**
- **Behind upstream RC** → fetch and merge RC into your branch:
```bash
git fetch upstream
git merge upstream/<TARGET_RC> --no-edit
```
When the working tree is clean and the user selected Sync Workflow, proceed without
an extra confirmation prompt.
- **Diverged** → stop and resolve manually.
3. **Resolve conflicts if needed**
- Handle conflicts intelligently: preserve upstream behavior and your local intent.
4. **Make changes and commit (if you are delivering fixes)**
```bash
git add -A
git commit -m "type: description"
```
5. **Build to verify**
```bash
npm run build:packages
npm run build
```
6. **Push after a successful merge to keep remotes aligned**
- If you only merged upstream RC changes, push **origin only** to sync your fork:
```bash
git push origin <branch>
```
- If you have local fixes to publish, push **origin + upstream**:
```bash
git push origin <branch>
git push upstream <branch>:<TARGET_RC>
```
- Always ask the user which push to perform.
- Origin (origin-only sync):
```bash
git push origin <branch>
```
- Upstream RC (publish the same fixes when you have local commits):
```bash
git push upstream <branch>:<TARGET_RC>
```
7. **Re-check sync**
```bash
./check-sync.sh
```
## PR Workflow (Feature Work)
Use this flow only for new feature work on a new branch. Do not push to upstream RC.
1. **Create or switch to a feature branch**
```bash
git checkout -b <branch>
```
2. **Make changes and commit**
```bash
git add -A
git commit -m "type: description"
```
3. **Merge upstream RC before shipping**
```bash
git merge upstream/<TARGET_RC> --no-edit
```
4. **Build and/or test**
```bash
npm run build:packages
npm run build
```
5. **Push to origin**
```bash
git push -u origin <branch>
```
6. **Create or update the PR**
- Use `gh pr create` or the GitHub UI.
7. **Review and follow-up**
- Apply feedback, commit changes, and push again.
- Re-run `./check-sync.sh` if additional upstream sync is needed.
## Conflict Resolution Checklist
1. Identify which changes are from upstream vs. local (example commands follow this list).
2. Preserve both behaviors where possible; avoid dropping either side.
3. Prefer minimal, safe integrations over refactors.
4. Re-run build commands after resolving conflicts.
5. Re-run `./check-sync.sh` to confirm status.
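Example commands for steps 1-2 (generic git usage, not part of `check-sync.sh`):
```bash
git status --short | grep '^UU'   # files with conflicts on both sides
git log --merge --oneline         # commits from each side touching the conflicted paths
git diff --ours -- <file>         # how the merge result differs from your local side
git diff --theirs -- <file>       # how it differs from the upstream RC side
```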
## Build/Test Matrix
- **Sync Workflow**: `npm run build:packages` and `npm run build`.
- **PR Workflow**: `npm run build:packages` and `npm run build` (plus relevant tests).
## Post-Sync Verification
1. `git status` should be clean.
2. `./check-sync.sh` should show expected alignment.
3. Verify recent commits with:
```bash
git log --oneline -5
```
## check-sync.sh Usage
- Uses dynamic Target RC resolution (see above).
- Override target RC:
```bash
./check-sync.sh --rc <rc-branch>
```
- Optional preview limit:
```bash
./check-sync.sh --preview 10
```
- The script prints sync status for both origin and upstream and previews recent commits
when you are behind (one way to compute this with plain git is sketched below).
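For reference, the same ahead/behind numbers can be computed with plain git (an illustrative sketch, not the script itself):
```bash
git fetch upstream --quiet
COUNTS=$(git rev-list --left-right --count "upstream/<TARGET_RC>...HEAD")
BEHIND=$(echo "$COUNTS" | awk '{print $1}')   # commits only on the upstream RC
AHEAD=$(echo "$COUNTS" | awk '{print $2}')    # commits only on your branch
echo "behind=$BEHIND ahead=$AHEAD"
```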
## Stop Conditions
Stop and ask for guidance if any of the following are true:
- The working tree is dirty and you are about to merge or push.
- `./check-sync.sh` reports **diverged** during PR Workflow, or a merge cannot be completed.
- The script cannot resolve a target RC and requests `--rc`.
- A build fails after sync or conflict resolution.
## AI Agent Guardrails
- Always run `./check-sync.sh` before merges or pushes.
- Always ask for explicit user approval before any push command.
- Do not ask for additional confirmation before a Sync Workflow fetch + merge when the
working tree is clean and the user has already selected the Sync Workflow.
- Choose Sync vs PR workflow based on intent (RC maintenance vs new feature work), not
on the script's workflow hint.
- Only use force push when the user explicitly requests a history rewrite.
- Ask for explicit approval before dependency installs, branch deletion, or destructive operations.
- When resolving merge conflicts, preserve both upstream changes and local intent where possible.
- Do not create or switch to new branches unless the user explicitly requests it.
## AI Agent Decision Guidance
Agents should provide concrete, task-specific suggestions instead of repeatedly asking
open-ended questions. Use the user's stated goal and the `./check-sync.sh` status to
propose a default path plus one or two alternatives, and only ask for confirmation when
an action requires explicit approval.
Default behavior:
- If the intent is RC maintenance, recommend the Sync Workflow and proceed with
safe preparation steps (status checks, previews). If the branch is behind upstream RC,
fetch and merge without additional confirmation when the working tree is clean, then
push to origin to keep the fork aligned. Push upstream only when there are local fixes
to publish.
- If the intent is new feature work, recommend the PR Workflow and proceed with safe
preparation steps (status checks, identifying scope). Ask for approval before merges,
pushes, or dependency installs.
- If `./check-sync.sh` reports **diverged** during Sync Workflow, merge
`upstream/<TARGET_RC>` into the current branch and preserve local commits.
- If `./check-sync.sh` reports **diverged** during PR Workflow, stop and ask for guidance
with a short explanation of the divergence and the minimal options to resolve it.
If the user's intent is RC maintenance, prefer the Sync Workflow regardless of the
script hint. When the intent is new feature work, use the PR Workflow and avoid upstream
RC pushes.
Suggestion format (keep it short):
- **Recommended**: one sentence with the default path and why it fits the task.
- **Alternatives**: one or two options with the tradeoff or prerequisite.
- **Approval points**: mention any upcoming actions that need explicit approval (exclude sync
workflow pushes and merges).
## Failure Modes and How to Avoid Them
Sync Workflow:
- Wrong RC target: verify the auto-detected RC in `./check-sync.sh` output before merging.
- Diverged from upstream RC: stop and resolve manually before any merge or push.
- Dirty working tree: commit or stash before syncing to avoid accidental merges.
- Missing remotes: ensure both `origin` and `upstream` are configured before syncing.
- Build breaks after sync: run `npm run build:packages` and `npm run build` before pushing.
PR Workflow:
- Branch not synced to current RC: re-run `./check-sync.sh` and merge RC before shipping.
- Pushing the wrong branch: confirm `git branch --show-current` before pushing.
- Unreviewed changes: always commit and push to origin before opening or updating a PR.
- Skipped tests/builds: run the build commands before declaring the PR ready.
## Notes
- Avoid merging with uncommitted changes; commit or stash first.
- Prefer merge over rebase for PR branches; rebases rewrite history and often require a force push,
which should only be done with an explicit user request.
- Use clear, conventional commit messages and split unrelated changes into separate commits.

View File

@@ -8,10 +8,12 @@
# =============================================================================
# BASE STAGE - Common setup for all builds (DRY: defined once, used by all)
# =============================================================================
FROM node:22-alpine AS base FROM node:22-slim AS base
# Install build dependencies for native modules (node-pty) # Install build dependencies for native modules (node-pty)
RUN apk add --no-cache python3 make g++ RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app WORKDIR /app
@@ -51,30 +53,84 @@ RUN npm run build:packages && npm run build --workspace=apps/server
# =============================================================================
# SERVER PRODUCTION STAGE
# =============================================================================
FROM node:22-alpine AS server FROM node:22-slim AS server
# Install git, curl, bash (for terminal), and GitHub CLI (pinned version, multi-arch) # Build argument for tracking which commit this image was built from
RUN apk add --no-cache git curl bash && \ ARG GIT_COMMIT_SHA=unknown
GH_VERSION="2.63.2" && \ LABEL automaker.git.commit.sha="${GIT_COMMIT_SHA}"
ARCH=$(uname -m) && \
case "$ARCH" in \ # Build arguments for user ID matching (allows matching host user for mounted volumes)
# Override at build time: docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) ...
ARG UID=1001
ARG GID=1001
# Install git, curl, bash (for terminal), gosu (for user switching), and GitHub CLI (pinned version, multi-arch)
# Also install Playwright/Chromium system dependencies (aligns with playwright install-deps on Debian/Ubuntu)
RUN apt-get update && apt-get install -y --no-install-recommends \
git curl bash gosu ca-certificates openssh-client \
# Playwright/Chromium dependencies
libglib2.0-0 libnss3 libnspr4 libdbus-1-3 libatk1.0-0 libatk-bridge2.0-0 \
libcups2 libdrm2 libxkbcommon0 libatspi2.0-0 libxcomposite1 libxdamage1 \
libxfixes3 libxrandr2 libgbm1 libasound2 libpango-1.0-0 libcairo2 \
libx11-6 libx11-xcb1 libxcb1 libxext6 libxrender1 libxss1 libxtst6 \
libxshmfence1 libgtk-3-0 libexpat1 libfontconfig1 fonts-liberation \
xdg-utils libpangocairo-1.0-0 libpangoft2-1.0-0 libu2f-udev libvulkan1 \
&& GH_VERSION="2.63.2" \
&& ARCH=$(uname -m) \
&& case "$ARCH" in \
x86_64) GH_ARCH="amd64" ;; \
aarch64|arm64) GH_ARCH="arm64" ;; \
*) echo "Unsupported architecture: $ARCH" && exit 1 ;; \
esac && \ esac \
curl -L "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${GH_ARCH}.tar.gz" -o gh.tar.gz && \ && curl -L "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${GH_ARCH}.tar.gz" -o gh.tar.gz \
tar -xzf gh.tar.gz && \ && tar -xzf gh.tar.gz \
mv gh_${GH_VERSION}_linux_${GH_ARCH}/bin/gh /usr/local/bin/gh && \ && mv gh_${GH_VERSION}_linux_${GH_ARCH}/bin/gh /usr/local/bin/gh \
rm -rf gh.tar.gz gh_${GH_VERSION}_linux_${GH_ARCH} && rm -rf gh.tar.gz gh_${GH_VERSION}_linux_${GH_ARCH} \
&& rm -rf /var/lib/apt/lists/*
# Install Claude CLI globally # Install Claude CLI globally (available to all users via npm global bin)
RUN npm install -g @anthropic-ai/claude-code RUN npm install -g @anthropic-ai/claude-code
WORKDIR /app # Create non-root user with home directory BEFORE installing Cursor CLI
# Uses UID/GID build args to match host user for mounted volume permissions
# Use -o flag to allow non-unique IDs (GID 1000 may already exist as 'node' group)
RUN groupadd -o -g ${GID} automaker && \
useradd -o -u ${UID} -g automaker -m -d /home/automaker -s /bin/bash automaker && \
mkdir -p /home/automaker/.local/bin && \
mkdir -p /home/automaker/.cursor && \
chown -R automaker:automaker /home/automaker && \
chmod 700 /home/automaker/.cursor
# Create non-root user # Install Cursor CLI as the automaker user
RUN addgroup -g 1001 -S automaker && \ # Set HOME explicitly and install to /home/automaker/.local/bin/
adduser -S automaker -u 1001 USER automaker
ENV HOME=/home/automaker
RUN curl https://cursor.com/install -fsS | bash && \
echo "=== Checking Cursor CLI installation ===" && \
ls -la /home/automaker/.local/bin/ && \
echo "=== PATH is: $PATH ===" && \
(which cursor-agent && cursor-agent --version) || echo "cursor-agent installed (may need auth setup)"
# Install OpenCode CLI (for multi-provider AI model access)
RUN curl -fsSL https://opencode.ai/install | bash && \
echo "=== Checking OpenCode CLI installation ===" && \
ls -la /home/automaker/.local/bin/ && \
(which opencode && opencode --version) || echo "opencode installed (may need auth setup)"
USER root
# Add PATH to profile so it's available in all interactive shells (for login shells)
RUN mkdir -p /etc/profile.d && \
echo 'export PATH="/home/automaker/.local/bin:$PATH"' > /etc/profile.d/cursor-cli.sh && \
chmod +x /etc/profile.d/cursor-cli.sh
# Add to automaker's .bashrc for bash interactive shells
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /home/automaker/.bashrc && \
chown automaker:automaker /home/automaker/.bashrc
# Also add to root's .bashrc since docker exec defaults to root
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /root/.bashrc
WORKDIR /app
# Copy root package.json (needed for workspace resolution) # Copy root package.json (needed for workspace resolution)
COPY --from=server-builder /app/package*.json ./ COPY --from=server-builder /app/package*.json ./
@@ -98,12 +154,19 @@ RUN git config --system --add safe.directory '*' && \
# Use gh as credential helper (works with GH_TOKEN env var) # Use gh as credential helper (works with GH_TOKEN env var)
git config --system credential.helper '!gh auth git-credential' git config --system credential.helper '!gh auth git-credential'
# Switch to non-root user # Copy entrypoint script for fixing permissions on mounted volumes
USER automaker COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Note: We stay as root here so entrypoint can fix permissions
# The entrypoint script will switch to automaker user before running the command
# Environment variables
ENV PORT=3008
ENV DATA_DIR=/data
ENV HOME=/home/automaker
# Add user's local bin to PATH for cursor-agent
ENV PATH="/home/automaker/.local/bin:${PATH}"
# Expose port
EXPOSE 3008
@@ -112,6 +175,9 @@ EXPOSE 3008
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3008/api/health || exit 1
# Use entrypoint to fix permissions before starting
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Start server
CMD ["node", "apps/server/dist/index.js"]
@@ -143,6 +209,10 @@ RUN npm run build:packages && npm run build --workspace=apps/ui
# =============================================================================
FROM nginx:alpine AS ui
# Build argument for tracking which commit this image was built from
ARG GIT_COMMIT_SHA=unknown
LABEL automaker.git.commit.sha="${GIT_COMMIT_SHA}"
# Copy built files
COPY --from=ui-builder /app/apps/ui/dist /usr/share/nginx/html

94
Dockerfile.dev Normal file
View File

@@ -0,0 +1,94 @@
# Automaker Development Dockerfile
# For development with live reload via volume mounting
# Source code is NOT copied - it's mounted as a volume
#
# Usage:
# docker compose -f docker-compose.dev.yml up
FROM node:22-slim
# Install build dependencies for native modules (node-pty) and runtime tools
# Also install Playwright/Chromium system dependencies (aligns with playwright install-deps on Debian/Ubuntu)
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ \
git curl bash gosu ca-certificates openssh-client \
# Playwright/Chromium dependencies
libglib2.0-0 libnss3 libnspr4 libdbus-1-3 libatk1.0-0 libatk-bridge2.0-0 \
libcups2 libdrm2 libxkbcommon0 libatspi2.0-0 libxcomposite1 libxdamage1 \
libxfixes3 libxrandr2 libgbm1 libasound2 libpango-1.0-0 libcairo2 \
libx11-6 libx11-xcb1 libxcb1 libxext6 libxrender1 libxss1 libxtst6 \
libxshmfence1 libgtk-3-0 libexpat1 libfontconfig1 fonts-liberation \
xdg-utils libpangocairo-1.0-0 libpangoft2-1.0-0 libu2f-udev libvulkan1 \
&& GH_VERSION="2.63.2" \
&& ARCH=$(uname -m) \
&& case "$ARCH" in \
x86_64) GH_ARCH="amd64" ;; \
aarch64|arm64) GH_ARCH="arm64" ;; \
*) echo "Unsupported architecture: $ARCH" && exit 1 ;; \
esac \
&& curl -L "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${GH_ARCH}.tar.gz" -o gh.tar.gz \
&& tar -xzf gh.tar.gz \
&& mv gh_${GH_VERSION}_linux_${GH_ARCH}/bin/gh /usr/local/bin/gh \
&& rm -rf gh.tar.gz gh_${GH_VERSION}_linux_${GH_ARCH} \
&& rm -rf /var/lib/apt/lists/*
# Install Claude CLI globally
RUN npm install -g @anthropic-ai/claude-code
# Build arguments for user ID matching (allows matching host user for mounted volumes)
# Override at build time: docker-compose build --build-arg UID=$(id -u) --build-arg GID=$(id -g)
ARG UID=1001
ARG GID=1001
# Create non-root user with configurable UID/GID
# Use -o flag to allow non-unique IDs (GID 1000 may already exist as 'node' group)
RUN groupadd -o -g ${GID} automaker && \
useradd -o -u ${UID} -g automaker -m -d /home/automaker -s /bin/bash automaker && \
mkdir -p /home/automaker/.local/bin && \
mkdir -p /home/automaker/.cursor && \
chown -R automaker:automaker /home/automaker && \
chmod 700 /home/automaker/.cursor
# Install Cursor CLI as automaker user
USER automaker
ENV HOME=/home/automaker
RUN curl https://cursor.com/install -fsS | bash || true
USER root
# Add PATH to profile for Cursor CLI
RUN mkdir -p /etc/profile.d && \
echo 'export PATH="/home/automaker/.local/bin:$PATH"' > /etc/profile.d/cursor-cli.sh && \
chmod +x /etc/profile.d/cursor-cli.sh
# Add to user bashrc files
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /home/automaker/.bashrc && \
chown automaker:automaker /home/automaker/.bashrc
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /root/.bashrc
WORKDIR /app
# Create directories with proper permissions
RUN mkdir -p /data /projects && chown automaker:automaker /data /projects
# Configure git for mounted volumes
RUN git config --system --add safe.directory '*' && \
git config --system credential.helper '!gh auth git-credential'
# Copy entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Environment variables
ENV PORT=3008
ENV DATA_DIR=/data
ENV HOME=/home/automaker
ENV PATH="/home/automaker/.local/bin:${PATH}"
# Expose both dev ports
EXPOSE 3007 3008
# Use entrypoint for permission handling
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Default command - will be overridden by docker-compose
CMD ["npm", "run", "dev:web"]

164
README.md
View File

@@ -28,6 +28,7 @@
- [Quick Start](#quick-start)
- [How to Run](#how-to-run)
- [Development Mode](#development-mode)
- [Interactive TUI Launcher](#interactive-tui-launcher-recommended-for-new-users)
- [Building for Production](#building-for-production)
- [Testing](#testing)
- [Linting](#linting)
@@ -101,11 +102,9 @@ In the Discord, you can:
### Prerequisites ### Prerequisites
- **Node.js 18+** (tested with Node.js 22) - **Node.js 22+** (required: >=22.0.0 <23.0.0)
- **npm** (comes with Node.js) - **npm** (comes with Node.js)
- **Authentication** (choose one): - **[Claude Code CLI](https://code.claude.com/docs/en/overview)** - Install and authenticate with your Anthropic subscription. Automaker integrates with your authenticated Claude Code CLI to access Claude models.
- **[Claude Code CLI](https://code.claude.com/docs/en/overview)** (recommended) - Install and authenticate, credentials used automatically
- **Anthropic API Key** - Direct API key for Claude Agent SDK ([get one here](https://console.anthropic.com/))
### Quick Start ### Quick Start
@@ -117,32 +116,16 @@ cd automaker
# 2. Install dependencies # 2. Install dependencies
npm install npm install
# 3. Build shared packages (Now can be skipped npm install / run dev does it automaticly) # 3. Start Automaker
npm run build:packages
# 4. Set up authentication (skip if using Claude Code CLI)
# If using Claude Code CLI: credentials are detected automatically
# If using API key directly, choose one method:
# Option A: Environment variable
export ANTHROPIC_API_KEY="sk-ant-..."
# Option B: Create .env file in project root
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
# 5. Start Automaker (interactive launcher)
npm run dev
# Choose between:
# 1. Web Application (browser at localhost:3007)
# 2. Desktop Application (Electron - recommended)
```
**Note:** The `npm run dev` command will: **Authentication:** Automaker integrates with your authenticated Claude Code CLI. Make sure you have [installed and authenticated](https://code.claude.com/docs/en/quickstart) the Claude Code CLI before running Automaker. Your CLI credentials will be detected automatically.
- Check for dependencies and install if needed **For Development:** `npm run dev` starts the development server with Vite live reload and hot module replacement for fast refresh and instant updates as you make changes.
- Install Playwright browsers for E2E tests
- Kill any processes on ports 3007/3008
- Present an interactive menu to choose your run mode
## How to Run ## How to Run
@@ -179,6 +162,40 @@ npm run dev:electron:wsl:gpu
npm run dev:web npm run dev:web
``` ```
### Interactive TUI Launcher (Recommended for New Users)
For a user-friendly interactive menu, use the built-in TUI launcher script:
```bash
# Show interactive menu with all launch options
./start-automaker.sh
# Or launch directly without menu
./start-automaker.sh web # Web browser
./start-automaker.sh electron # Desktop app
./start-automaker.sh electron-debug # Desktop + DevTools
# Additional options
./start-automaker.sh --help # Show all available options
./start-automaker.sh --version # Show version information
./start-automaker.sh --check-deps # Verify project dependencies
./start-automaker.sh --no-colors # Disable colored output
./start-automaker.sh --no-history # Don't remember last choice
```
**Features:**
- 🎨 Beautiful terminal UI with gradient colors and ASCII art
- Interactive menu (press 1-3 to select, Q to exit)
- 💾 Remembers your last choice
- Pre-flight checks (validates Node.js, npm, dependencies)
- 📏 Responsive layout (adapts to terminal size)
- 30-second timeout for hands-free selection
- 🌐 Cross-shell compatible (bash/zsh)
**History File:**
Your last selected mode is saved in `~/.automaker_launcher_history` for quick re-runs.
### Building for Production ### Building for Production
#### Web Application #### Web Application
@@ -186,9 +203,6 @@ npm run dev:web
```bash
# Build for web deployment (uses Vite)
npm run build
# Run production build
npm run start
``` ```
#### Desktop Application #### Desktop Application
@@ -200,11 +214,30 @@ npm run build:electron
# Platform-specific builds
npm run build:electron:mac # macOS (DMG + ZIP, x64 + arm64)
npm run build:electron:win # Windows (NSIS installer, x64)
npm run build:electron:linux # Linux (AppImage + DEB, x64) npm run build:electron:linux # Linux (AppImage + DEB + RPM, x64)
# Output directory: apps/ui/release/ # Output directory: apps/ui/release/
``` ```
**Linux Distribution Packages:**
- **AppImage**: Universal format, works on any Linux distribution
- **DEB**: Ubuntu, Debian, Linux Mint, Pop!\_OS
- **RPM**: Fedora, RHEL, Rocky Linux, AlmaLinux, openSUSE
**Installing on Fedora/RHEL:**
```bash
# Download the RPM package
wget https://github.com/AutoMaker-Org/automaker/releases/latest/download/Automaker-<version>-x86_64.rpm
# Install with dnf (Fedora)
sudo dnf install ./Automaker-<version>-x86_64.rpm
# Or with yum (RHEL/CentOS)
sudo yum localinstall ./Automaker-<version>-x86_64.rpm
```
#### Docker Deployment
Docker provides the most secure way to run Automaker by isolating it from your host filesystem.
docker-compose logs -f
docker-compose down
```
##### Configuration ##### Authentication
Create a `.env` file in the project root if using API key authentication: Automaker integrates with your authenticated Claude Code CLI. To use CLI authentication in Docker, mount your Claude CLI config directory (see [Claude CLI Authentication](#claude-cli-authentication) below).
```bash
# Optional: Anthropic API key (not needed if using Claude CLI authentication)
ANTHROPIC_API_KEY=sk-ant-...
```
**Note:** Most users authenticate via Claude CLI instead of API keys. See [Claude CLI Authentication](#claude-cli-authentication-optional) below.
##### Working with Projects (Host Directory Access) ##### Working with Projects (Host Directory Access)
@@ -246,9 +272,9 @@ services:
- /path/to/your/project:/projects/your-project - /path/to/your/project:/projects/your-project
``` ```
##### Claude CLI Authentication (Optional) ##### Claude CLI Authentication
To use Claude Code CLI authentication instead of an API key, mount your Claude CLI config directory: Mount your Claude CLI config directory to use your authenticated CLI credentials:
```yaml ```yaml
services: services:
@@ -346,10 +372,6 @@ npm run lint
### Environment Configuration ### Environment Configuration
#### Authentication (if not using Claude Code CLI)
- `ANTHROPIC_API_KEY` - Your Anthropic API key for Claude Agent SDK (not needed if using Claude Code CLI)
#### Optional - Server #### Optional - Server
- `PORT` - Server port (default: 3008) - `PORT` - Server port (default: 3008)
@@ -360,49 +382,22 @@ npm run lint
- `AUTOMAKER_API_KEY` - Optional API authentication for the server - `AUTOMAKER_API_KEY` - Optional API authentication for the server
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory - `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
- `CORS_ORIGIN` - CORS policy (default: \*) - `CORS_ORIGIN` - CORS allowed origins (comma-separated list; defaults to localhost only)
#### Optional - Development
- `VITE_SKIP_ELECTRON` - Skip Electron in dev mode
- `OPEN_DEVTOOLS` - Auto-open DevTools in Electron
- `AUTOMAKER_SKIP_SANDBOX_WARNING` - Skip sandbox warning dialog (useful for dev/CI)
### Authentication Setup ### Authentication Setup
#### Option 1: Claude Code CLI (Recommended) Automaker integrates with your authenticated Claude Code CLI and uses your Anthropic subscription.
Install and authenticate the Claude Code CLI following the [official quickstart guide](https://code.claude.com/docs/en/quickstart).
Once authenticated, Automaker will automatically detect and use your CLI credentials. No additional configuration needed!
#### Option 2: Direct API Key
If you prefer not to use the CLI, you can provide an Anthropic API key directly using one of these methods:
##### 2a. Shell Configuration
Add to your `~/.bashrc` or `~/.zshrc`:
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```
Then restart your terminal or run `source ~/.bashrc` (or `source ~/.zshrc`).
##### 2b. .env File
Create a `.env` file in the project root (gitignored):
```bash
ANTHROPIC_API_KEY=sk-ant-...
PORT=3008
DATA_DIR=./data
```
##### 2c. In-App Storage
The application can store your API key securely in the settings UI. The key is persisted in the `DATA_DIR` directory.
## Features ## Features
### Core Workflow ### Core Workflow
@@ -511,20 +506,24 @@ Automaker provides several specialized views accessible via the sidebar or keybo
| **Agent** | `A` | Interactive chat sessions with AI agents for exploratory work and questions |
| **Spec** | `D` | Project specification editor with AI-powered generation and feature suggestions |
| **Context** | `C` | Manage context files (markdown, images) that AI agents automatically reference |
| **Profiles** | `M` | Create and manage AI agent profiles with custom prompts and configurations |
| **Settings** | `S` | Configure themes, shortcuts, defaults, authentication, and more |
| **Terminal** | `T` | Integrated terminal with tabs, splits, and persistent sessions |
| **GitHub Issues** | - | Import and validate GitHub issues, convert to tasks | | **Graph** | `H` | Visualize feature dependencies with interactive graph visualization |
| **Ideation** | `I` | Brainstorm and generate ideas with AI assistance |
| **Memory** | `Y` | View and manage agent memory and conversation history |
| **GitHub Issues** | `G` | Import and validate GitHub issues, convert to tasks |
| **GitHub PRs** | `R` | View and manage GitHub pull requests |
| **Running Agents** | - | View all active agents across projects with status and progress |
### Keyboard Navigation
All shortcuts are customizable in Settings. Default shortcuts:
- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `M` (Profiles), `T` (Terminal) - **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `T` (Terminal), `H` (Graph), `I` (Ideation), `Y` (Memory), `G` (GitHub Issues), `R` (GitHub PRs)
- **UI:** `` ` `` (Toggle sidebar) - **UI:** `` ` `` (Toggle sidebar)
- **Actions:** `N` (New item in current view), `G` (Start next features), `O` (Open project), `P` (Project picker) - **Actions:** `N` (New item in current view), `O` (Open project), `P` (Project picker)
- **Projects:** `Q`/`E` (Cycle previous/next project) - **Projects:** `Q`/`E` (Cycle previous/next project)
- **Terminal:** `Alt+D` (Split right), `Alt+S` (Split down), `Alt+W` (Close), `Alt+T` (New tab)
## Architecture ## Architecture
@@ -589,10 +588,16 @@ Stored in `{projectPath}/.automaker/`:
│ ├── agent-output.md # AI agent output log
│ └── images/ # Attached images
├── context/ # Context files for AI agents
├── worktrees/ # Git worktree metadata
├── validations/ # GitHub issue validation results
├── ideation/ # Brainstorming and analysis data
│ └── analysis.json # Project structure analysis
├── board/ # Board-related data
├── images/ # Project-level images
├── settings.json # Project-specific settings ├── settings.json # Project-specific settings
├── spec.md # Project specification ├── app_spec.txt # Project specification (XML format)
├── analysis.json # Project structure analysis ├── active-branches.json # Active git branches tracking
└── feature-suggestions.json # AI-generated suggestions └── execution-state.json # Auto-mode execution state
``` ```
#### Global Data #### Global Data
@@ -630,7 +635,6 @@ data/
- [Contributing Guide](./CONTRIBUTING.md) - How to contribute to Automaker
- [Project Documentation](./docs/) - Architecture guides, patterns, and developer docs
- [Docker Isolation Guide](./docs/docker-isolation.md) - Security-focused Docker deployment
- [Shared Packages Guide](./docs/llm-shared-packages.md) - Using monorepo packages - [Shared Packages Guide](./docs/llm-shared-packages.md) - Using monorepo packages
### Community ### Community

17
TODO.md Normal file
View File

@@ -0,0 +1,17 @@
# Bugs
- Setting the default model does not seem to work.
# UX
- Consolidate all models to a single place in the settings instead of having AI profiles and all this other stuff
- Simplify the create feature modal. It should just be one page; it doesn't need all these tabs and nested buttons. It's too complex.
- Add a to-do list checkbox directly on the card so that, as a feature is being worked on, any to-do items can be seen updating live.
- When a feature is done, I want to see a summary from the LLM. That should be the first thing I see when I double-click the card.
- I want a way to mass edit all my features. For example, when I created a new project, it added auto testing to every single feature card, and now I have to go through them one by one to change it. Add a way to mass edit the configuration of all of them.
- Double-check and debug possible memory leaks. Automaker's memory seems to grow by about 3 GB; it's at 5 GB right now while three different Cursor CLI features are being implemented at the same time.
- Typing in the plan mode text area was super laggy.
- When I have a bunch of features running at the same time, I can't edit features in the backlog; their file changes don't persist. I think this is because the secure FS layer has an internal queue to prevent hitting the open-file/write limit. We may have to consider refactoring away from the file system to Postgres or SQLite.
- Modals are not scrollable when the screen height is small enough.
- In the Agent Runner, add an archive button for new sessions.
- Investigate a potential issue with feature cards not refreshing: I see a lock icon on a feature card, but it doesn't go away until I open the card, edit it, and turn testing mode off. It looks like a refresh/sync issue.

View File

@@ -8,6 +8,20 @@
# Your Anthropic API key for Claude models
ANTHROPIC_API_KEY=sk-ant-...
# ============================================
# OPTIONAL - Additional API Keys
# ============================================
# OpenAI API key for Codex/GPT models
OPENAI_API_KEY=sk-...
# Cursor API key for Cursor models
CURSOR_API_KEY=...
# OAuth credentials for CLI authentication (extracted automatically)
CLAUDE_OAUTH_CREDENTIALS=
CURSOR_AUTH_TOKEN=
# ============================================
# OPTIONAL - Security
# ============================================
@@ -30,6 +44,11 @@ CORS_ORIGIN=http://localhost:3007
# OPTIONAL - Server # OPTIONAL - Server
# ============================================ # ============================================
# Host to bind the server to (default: 0.0.0.0)
# Use 0.0.0.0 to listen on all interfaces (recommended for Docker/remote access)
# Use 127.0.0.1 or localhost to restrict to local connections only
HOST=0.0.0.0
# Port to run the server on
PORT=3008
@@ -48,3 +67,23 @@ TERMINAL_ENABLED=true
TERMINAL_PASSWORD=
ENABLE_REQUEST_LOGGING=false
# ============================================
# OPTIONAL - UI Behavior
# ============================================
# Skip the sandbox warning dialog on startup (default: false)
# Set to "true" to disable the warning entirely (useful for dev/CI environments)
AUTOMAKER_SKIP_SANDBOX_WARNING=false
# ============================================
# OPTIONAL - Debugging
# ============================================
# Enable raw output logging for agent streams (default: false)
# When enabled, saves unprocessed stream events to raw-output.jsonl
# in each feature's directory (.automaker/features/{id}/raw-output.jsonl)
# Useful for debugging provider streaming issues, improving log parsing,
# or analyzing how different providers (Claude, Cursor) stream responses
# Note: This adds disk I/O overhead, only enable when debugging
AUTOMAKER_DEBUG_RAW_OUTPUT=false
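For reference, a small script along these lines could be used to inspect the captured events once the flag is enabled (a minimal sketch; the feature id below is a placeholder, and only the raw-output.jsonl location comes from the comment above):
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';
// Placeholder feature id; substitute a real one from .automaker/features/
const featureId = '<feature-id>';
const rawLogPath = `.automaker/features/${featureId}/raw-output.jsonl`;
async function dumpRawEvents(): Promise<void> {
  // raw-output.jsonl is JSONL: one unprocessed stream event per line
  const rl = createInterface({ input: createReadStream(rawLogPath) });
  for await (const line of rl) {
    if (!line.trim()) continue;
    const event = JSON.parse(line);
    // The event shape varies by provider, so just report the keys present
    console.log(Object.keys(event).join(', '));
  }
}
dumpRawEvents().catch(console.error);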

View File

@@ -1,6 +1,6 @@
{ {
"name": "@automaker/server", "name": "@automaker/server",
"version": "0.7.2", "version": "0.12.0",
"description": "Backend server for Automaker - provides API for both web and Electron modes", "description": "Backend server for Automaker - provides API for both web and Electron modes",
"author": "AutoMaker Team", "author": "AutoMaker Team",
"license": "SEE LICENSE IN LICENSE", "license": "SEE LICENSE IN LICENSE",
@@ -32,7 +32,8 @@
"@automaker/prompts": "1.0.0", "@automaker/prompts": "1.0.0",
"@automaker/types": "1.0.0", "@automaker/types": "1.0.0",
"@automaker/utils": "1.0.0", "@automaker/utils": "1.0.0",
"@modelcontextprotocol/sdk": "1.25.1", "@modelcontextprotocol/sdk": "1.25.2",
"@openai/codex-sdk": "^0.77.0",
"cookie-parser": "1.4.7", "cookie-parser": "1.4.7",
"cors": "2.8.5", "cors": "2.8.5",
"dotenv": "17.2.3", "dotenv": "17.2.3",

View File

@@ -17,6 +17,19 @@ import dotenv from 'dotenv';
import { createEventEmitter, type EventEmitter } from './lib/events.js'; import { createEventEmitter, type EventEmitter } from './lib/events.js';
import { initAllowedPaths } from '@automaker/platform'; import { initAllowedPaths } from '@automaker/platform';
import { createLogger, setLogLevel, LogLevel } from '@automaker/utils';
const logger = createLogger('Server');
/**
* Map server log level string to LogLevel enum
*/
const LOG_LEVEL_MAP: Record<string, LogLevel> = {
error: LogLevel.ERROR,
warn: LogLevel.WARN,
info: LogLevel.INFO,
debug: LogLevel.DEBUG,
};
import { authMiddleware, validateWsConnectionToken, checkRawAuthentication } from './lib/auth.js'; import { authMiddleware, validateWsConnectionToken, checkRawAuthentication } from './lib/auth.js';
import { requireJsonContentType } from './middleware/require-json-content-type.js'; import { requireJsonContentType } from './middleware/require-json-content-type.js';
import { createAuthRoutes } from './routes/auth/index.js'; import { createAuthRoutes } from './routes/auth/index.js';
@@ -50,6 +63,10 @@ import { SettingsService } from './services/settings-service.js';
import { createSpecRegenerationRoutes } from './routes/app-spec/index.js'; import { createSpecRegenerationRoutes } from './routes/app-spec/index.js';
import { createClaudeRoutes } from './routes/claude/index.js'; import { createClaudeRoutes } from './routes/claude/index.js';
import { ClaudeUsageService } from './services/claude-usage-service.js'; import { ClaudeUsageService } from './services/claude-usage-service.js';
import { createCodexRoutes } from './routes/codex/index.js';
import { CodexUsageService } from './services/codex-usage-service.js';
import { CodexAppServerService } from './services/codex-app-server-service.js';
import { CodexModelCacheService } from './services/codex-model-cache-service.js';
import { createGitHubRoutes } from './routes/github/index.js'; import { createGitHubRoutes } from './routes/github/index.js';
import { createContextRoutes } from './routes/context/index.js'; import { createContextRoutes } from './routes/context/index.js';
import { createBacklogPlanRoutes } from './routes/backlog-plan/index.js'; import { createBacklogPlanRoutes } from './routes/backlog-plan/index.js';
@@ -58,19 +75,48 @@ import { createMCPRoutes } from './routes/mcp/index.js';
import { MCPTestService } from './services/mcp-test-service.js'; import { MCPTestService } from './services/mcp-test-service.js';
import { createPipelineRoutes } from './routes/pipeline/index.js'; import { createPipelineRoutes } from './routes/pipeline/index.js';
import { pipelineService } from './services/pipeline-service.js'; import { pipelineService } from './services/pipeline-service.js';
import { createIdeationRoutes } from './routes/ideation/index.js';
import { IdeationService } from './services/ideation-service.js';
import { getDevServerService } from './services/dev-server-service.js';
import { eventHookService } from './services/event-hook-service.js';
import { createNotificationsRoutes } from './routes/notifications/index.js';
import { getNotificationService } from './services/notification-service.js';
import { createEventHistoryRoutes } from './routes/event-history/index.js';
import { getEventHistoryService } from './services/event-history-service.js';
import { createCodeReviewRoutes } from './routes/code-review/index.js';
import { CodeReviewService } from './services/code-review-service.js';
// Load environment variables // Load environment variables
dotenv.config(); dotenv.config();
const PORT = parseInt(process.env.PORT || '3008', 10); const PORT = parseInt(process.env.PORT || '3008', 10);
const HOST = process.env.HOST || '0.0.0.0';
const HOSTNAME = process.env.HOSTNAME || 'localhost';
const DATA_DIR = process.env.DATA_DIR || './data'; const DATA_DIR = process.env.DATA_DIR || './data';
const ENABLE_REQUEST_LOGGING = process.env.ENABLE_REQUEST_LOGGING !== 'false'; // Default to true const ENABLE_REQUEST_LOGGING_DEFAULT = process.env.ENABLE_REQUEST_LOGGING !== 'false'; // Default to true
// Runtime-configurable request logging flag (can be changed via settings)
let requestLoggingEnabled = ENABLE_REQUEST_LOGGING_DEFAULT;
/**
* Enable or disable HTTP request logging at runtime
*/
export function setRequestLoggingEnabled(enabled: boolean): void {
requestLoggingEnabled = enabled;
}
/**
* Get current request logging state
*/
export function isRequestLoggingEnabled(): boolean {
return requestLoggingEnabled;
}
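// ---------------------------------------------------------------------------
// Illustrative sketch, not part of this diff: these exports allow request
// logging to be toggled at runtime, e.g. from a settings handler. The route
// path, request body shape, and import path below are assumptions, and the
// handler assumes express.json() body parsing is enabled.
// ---------------------------------------------------------------------------
import { Router } from 'express';
import { isRequestLoggingEnabled, setRequestLoggingEnabled } from './index.js';
export function createLoggingToggleRoutes(): Router {
  const router = Router();
  router.post('/request-logging', (req, res) => {
    setRequestLoggingEnabled(Boolean(req.body?.enabled));
    res.json({ requestLogging: isRequestLoggingEnabled() });
  });
  return router;
}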
// Check for required environment variables // Check for required environment variables
const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY; const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
if (!hasAnthropicKey) { if (!hasAnthropicKey) {
console.warn(` logger.warn(`
╔═══════════════════════════════════════════════════════════════════════╗ ╔═══════════════════════════════════════════════════════════════════════╗
║ ⚠️ WARNING: No Claude authentication configured ║ ║ ⚠️ WARNING: No Claude authentication configured ║
║ ║ ║ ║
@@ -83,7 +129,7 @@ if (!hasAnthropicKey) {
╚═══════════════════════════════════════════════════════════════════════╝ ╚═══════════════════════════════════════════════════════════════════════╝
`); `);
} else { } else {
console.log('[Server] ✓ ANTHROPIC_API_KEY detected (API key auth)'); logger.info('✓ ANTHROPIC_API_KEY detected (API key auth)');
} }
// Initialize security // Initialize security
@@ -93,22 +139,21 @@ initAllowedPaths();
const app = express(); const app = express();
// Middleware // Middleware
// Custom colored logger showing only endpoint and status code (configurable via ENABLE_REQUEST_LOGGING env var) // Custom colored logger showing only endpoint and status code (dynamically configurable)
if (ENABLE_REQUEST_LOGGING) { morgan.token('status-colored', (_req, res) => {
morgan.token('status-colored', (_req, res) => { const status = res.statusCode;
const status = res.statusCode; if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors
if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors
if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors if (status >= 300) return `\x1b[36m${status}\x1b[0m`; // Cyan for redirects
if (status >= 300) return `\x1b[36m${status}\x1b[0m`; // Cyan for redirects return `\x1b[32m${status}\x1b[0m`; // Green for success
return `\x1b[32m${status}\x1b[0m`; // Green for success });
});
app.use( app.use(
morgan(':method :url :status-colored', { morgan(':method :url :status-colored', {
skip: (req) => req.url === '/api/health', // Skip health check logs // Skip when request logging is disabled or for health check endpoints
}) skip: (req) => !requestLoggingEnabled || req.url === '/api/health',
); })
} );
// CORS configuration // CORS configuration
// When using credentials (cookies), origin cannot be '*' // When using credentials (cookies), origin cannot be '*'
// We dynamically allow the requesting origin for local development // We dynamically allow the requesting origin for local development
@@ -161,12 +206,51 @@ const agentService = new AgentService(DATA_DIR, events, settingsService);
const featureLoader = new FeatureLoader(); const featureLoader = new FeatureLoader();
const autoModeService = new AutoModeService(events, settingsService); const autoModeService = new AutoModeService(events, settingsService);
const claudeUsageService = new ClaudeUsageService(); const claudeUsageService = new ClaudeUsageService();
const codexAppServerService = new CodexAppServerService();
const codexModelCacheService = new CodexModelCacheService(DATA_DIR, codexAppServerService);
const codexUsageService = new CodexUsageService(codexAppServerService);
const mcpTestService = new MCPTestService(settingsService); const mcpTestService = new MCPTestService(settingsService);
const ideationService = new IdeationService(events, settingsService, featureLoader);
const codeReviewService = new CodeReviewService(events, settingsService);
// Initialize DevServerService with event emitter for real-time log streaming
const devServerService = getDevServerService();
devServerService.setEventEmitter(events);
// Initialize Notification Service with event emitter for real-time updates
const notificationService = getNotificationService();
notificationService.setEventEmitter(events);
// Initialize Event History Service
const eventHistoryService = getEventHistoryService();
// Initialize Event Hook Service for custom event triggers (with history storage)
eventHookService.initialize(events, settingsService, eventHistoryService);
// Initialize services // Initialize services
(async () => { (async () => {
// Apply logging settings from saved settings
try {
const settings = await settingsService.getGlobalSettings();
if (settings.serverLogLevel && LOG_LEVEL_MAP[settings.serverLogLevel] !== undefined) {
setLogLevel(LOG_LEVEL_MAP[settings.serverLogLevel]);
logger.info(`Server log level set to: ${settings.serverLogLevel}`);
}
// Apply request logging setting (default true if not set)
const enableRequestLog = settings.enableRequestLogging ?? true;
setRequestLoggingEnabled(enableRequestLog);
logger.info(`HTTP request logging: ${enableRequestLog ? 'enabled' : 'disabled'}`);
} catch (err) {
logger.warn('Failed to load logging settings, using defaults');
}
await agentService.initialize(); await agentService.initialize();
console.log('[Server] Agent service initialized'); logger.info('Agent service initialized');
// Bootstrap Codex model cache in background (don't block server startup)
void codexModelCacheService.getModels().catch((err) => {
logger.error('Failed to bootstrap Codex model cache:', err);
});
})(); })();
// Run stale validation cleanup every hour to prevent memory leaks from crashed validations // Run stale validation cleanup every hour to prevent memory leaks from crashed validations
@@ -174,7 +258,7 @@ const VALIDATION_CLEANUP_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
setInterval(() => { setInterval(() => {
const cleaned = cleanupStaleValidations(); const cleaned = cleanupStaleValidations();
if (cleaned > 0) { if (cleaned > 0) {
console.log(`[Server] Cleaned up ${cleaned} stale validation entries`); logger.info(`Cleaned up ${cleaned} stale validation entries`);
} }
}, VALIDATION_CLEANUP_INTERVAL_MS); }, VALIDATION_CLEANUP_INTERVAL_MS);
@@ -182,9 +266,10 @@ setInterval(() => {
// This helps prevent CSRF and content-type confusion attacks // This helps prevent CSRF and content-type confusion attacks
app.use('/api', requireJsonContentType); app.use('/api', requireJsonContentType);
// Mount API routes - health and auth are unauthenticated // Mount API routes - health, auth, and setup are unauthenticated
app.use('/api/health', createHealthRoutes()); app.use('/api/health', createHealthRoutes());
app.use('/api/auth', createAuthRoutes()); app.use('/api/auth', createAuthRoutes());
app.use('/api/setup', createSetupRoutes());
// Apply authentication to all other routes // Apply authentication to all other routes
app.use('/api', authMiddleware); app.use('/api', authMiddleware);
@@ -195,12 +280,11 @@ app.get('/api/health/detailed', createDetailedHandler());
app.use('/api/fs', createFsRoutes(events)); app.use('/api/fs', createFsRoutes(events));
app.use('/api/agent', createAgentRoutes(agentService, events)); app.use('/api/agent', createAgentRoutes(agentService, events));
app.use('/api/sessions', createSessionsRoutes(agentService)); app.use('/api/sessions', createSessionsRoutes(agentService));
app.use('/api/features', createFeaturesRoutes(featureLoader)); app.use('/api/features', createFeaturesRoutes(featureLoader, settingsService, events));
app.use('/api/auto-mode', createAutoModeRoutes(autoModeService)); app.use('/api/auto-mode', createAutoModeRoutes(autoModeService));
app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService)); app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService));
app.use('/api/worktree', createWorktreeRoutes()); app.use('/api/worktree', createWorktreeRoutes(events, settingsService));
app.use('/api/git', createGitRoutes()); app.use('/api/git', createGitRoutes());
app.use('/api/setup', createSetupRoutes());
app.use('/api/suggestions', createSuggestionsRoutes(events, settingsService)); app.use('/api/suggestions', createSuggestionsRoutes(events, settingsService));
app.use('/api/models', createModelsRoutes()); app.use('/api/models', createModelsRoutes());
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events, settingsService)); app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events, settingsService));
@@ -210,11 +294,16 @@ app.use('/api/templates', createTemplatesRoutes());
app.use('/api/terminal', createTerminalRoutes()); app.use('/api/terminal', createTerminalRoutes());
app.use('/api/settings', createSettingsRoutes(settingsService)); app.use('/api/settings', createSettingsRoutes(settingsService));
app.use('/api/claude', createClaudeRoutes(claudeUsageService)); app.use('/api/claude', createClaudeRoutes(claudeUsageService));
app.use('/api/codex', createCodexRoutes(codexUsageService, codexModelCacheService));
app.use('/api/github', createGitHubRoutes(events, settingsService)); app.use('/api/github', createGitHubRoutes(events, settingsService));
app.use('/api/context', createContextRoutes(settingsService)); app.use('/api/context', createContextRoutes(settingsService));
app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService)); app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService));
app.use('/api/mcp', createMCPRoutes(mcpTestService)); app.use('/api/mcp', createMCPRoutes(mcpTestService));
app.use('/api/pipeline', createPipelineRoutes(pipelineService)); app.use('/api/pipeline', createPipelineRoutes(pipelineService));
app.use('/api/ideation', createIdeationRoutes(events, ideationService, featureLoader));
app.use('/api/notifications', createNotificationsRoutes(notificationService));
app.use('/api/event-history', createEventHistoryRoutes(eventHistoryService, settingsService));
app.use('/api/code-review', createCodeReviewRoutes(codeReviewService));
// Create HTTP server // Create HTTP server
const server = createServer(app); const server = createServer(app);
@@ -267,7 +356,7 @@ server.on('upgrade', (request, socket, head) => {
// Authenticate all WebSocket connections // Authenticate all WebSocket connections
if (!authenticateWebSocket(request)) { if (!authenticateWebSocket(request)) {
console.log('[WebSocket] Authentication failed, rejecting connection'); logger.info('Authentication failed, rejecting connection');
socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n'); socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');
socket.destroy(); socket.destroy();
return; return;
@@ -288,11 +377,11 @@ server.on('upgrade', (request, socket, head) => {
// Events WebSocket connection handler // Events WebSocket connection handler
wss.on('connection', (ws: WebSocket) => { wss.on('connection', (ws: WebSocket) => {
console.log('[WebSocket] Client connected, ready state:', ws.readyState); logger.info('Client connected, ready state:', ws.readyState);
// Subscribe to all events and forward to this client // Subscribe to all events and forward to this client
const unsubscribe = events.subscribe((type, payload) => { const unsubscribe = events.subscribe((type, payload) => {
console.log('[WebSocket] Event received:', { logger.info('Event received:', {
type, type,
hasPayload: !!payload, hasPayload: !!payload,
payloadKeys: payload ? Object.keys(payload) : [], payloadKeys: payload ? Object.keys(payload) : [],
@@ -302,27 +391,24 @@ wss.on('connection', (ws: WebSocket) => {
if (ws.readyState === WebSocket.OPEN) { if (ws.readyState === WebSocket.OPEN) {
const message = JSON.stringify({ type, payload }); const message = JSON.stringify({ type, payload });
console.log('[WebSocket] Sending event to client:', { logger.info('Sending event to client:', {
type, type,
messageLength: message.length, messageLength: message.length,
sessionId: (payload as any)?.sessionId, sessionId: (payload as any)?.sessionId,
}); });
ws.send(message); ws.send(message);
} else { } else {
console.log( logger.info('WARNING: Cannot send event, WebSocket not open. ReadyState:', ws.readyState);
'[WebSocket] WARNING: Cannot send event, WebSocket not open. ReadyState:',
ws.readyState
);
} }
}); });
ws.on('close', () => { ws.on('close', () => {
console.log('[WebSocket] Client disconnected'); logger.info('Client disconnected');
unsubscribe(); unsubscribe();
}); });
ws.on('error', (error) => { ws.on('error', (error) => {
console.error('[WebSocket] ERROR:', error); logger.error('ERROR:', error);
unsubscribe(); unsubscribe();
}); });
}); });
@@ -349,24 +435,24 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
const sessionId = url.searchParams.get('sessionId'); const sessionId = url.searchParams.get('sessionId');
const token = url.searchParams.get('token'); const token = url.searchParams.get('token');
console.log(`[Terminal WS] Connection attempt for session: ${sessionId}`); logger.info(`Connection attempt for session: ${sessionId}`);
// Check if terminal is enabled // Check if terminal is enabled
if (!isTerminalEnabled()) { if (!isTerminalEnabled()) {
console.log('[Terminal WS] Terminal is disabled'); logger.info('Terminal is disabled');
ws.close(4003, 'Terminal access is disabled'); ws.close(4003, 'Terminal access is disabled');
return; return;
} }
// Validate token if password is required // Validate token if password is required
if (isTerminalPasswordRequired() && !validateTerminalToken(token || undefined)) { if (isTerminalPasswordRequired() && !validateTerminalToken(token || undefined)) {
console.log('[Terminal WS] Invalid or missing token'); logger.info('Invalid or missing token');
ws.close(4001, 'Authentication required'); ws.close(4001, 'Authentication required');
return; return;
} }
if (!sessionId) { if (!sessionId) {
console.log('[Terminal WS] No session ID provided'); logger.info('No session ID provided');
ws.close(4002, 'Session ID required'); ws.close(4002, 'Session ID required');
return; return;
} }
@@ -374,12 +460,12 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
// Check if session exists // Check if session exists
const session = terminalService.getSession(sessionId); const session = terminalService.getSession(sessionId);
if (!session) { if (!session) {
console.log(`[Terminal WS] Session ${sessionId} not found`); logger.info(`Session ${sessionId} not found`);
ws.close(4004, 'Session not found'); ws.close(4004, 'Session not found');
return; return;
} }
console.log(`[Terminal WS] Client connected to session ${sessionId}`); logger.info(`Client connected to session ${sessionId}`);
// Track this connection // Track this connection
if (!terminalConnections.has(sessionId)) { if (!terminalConnections.has(sessionId)) {
@@ -495,15 +581,15 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
break; break;
default: default:
console.warn(`[Terminal WS] Unknown message type: ${msg.type}`); logger.warn(`Unknown message type: ${msg.type}`);
} }
} catch (error) { } catch (error) {
console.error('[Terminal WS] Error processing message:', error); logger.error('Error processing message:', error);
} }
}); });
ws.on('close', () => { ws.on('close', () => {
console.log(`[Terminal WS] Client disconnected from session ${sessionId}`); logger.info(`Client disconnected from session ${sessionId}`);
unsubscribeData(); unsubscribeData();
unsubscribeExit(); unsubscribeExit();
@@ -522,29 +608,30 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
}); });
ws.on('error', (error) => { ws.on('error', (error) => {
console.error(`[Terminal WS] Error on session ${sessionId}:`, error); logger.error(`Error on session ${sessionId}:`, error);
unsubscribeData(); unsubscribeData();
unsubscribeExit(); unsubscribeExit();
}); });
}); });
// Start server with error handling for port conflicts // Start server with error handling for port conflicts
const startServer = (port: number) => { const startServer = (port: number, host: string) => {
server.listen(port, () => { server.listen(port, host, () => {
const terminalStatus = isTerminalEnabled() const terminalStatus = isTerminalEnabled()
? isTerminalPasswordRequired() ? isTerminalPasswordRequired()
? 'enabled (password protected)' ? 'enabled (password protected)'
: 'enabled' : 'enabled'
: 'disabled'; : 'disabled';
const portStr = port.toString().padEnd(4); const portStr = port.toString().padEnd(4);
console.log(` logger.info(`
╔═══════════════════════════════════════════════════════╗ ╔═══════════════════════════════════════════════════════╗
║ Automaker Backend Server ║ ║ Automaker Backend Server ║
╠═══════════════════════════════════════════════════════╣ ╠═══════════════════════════════════════════════════════╣
HTTP API: http://localhost:${portStr} Listening: ${host}:${port}${' '.repeat(Math.max(0, 34 - host.length - port.toString().length))}
WebSocket: ws://localhost:${portStr}/api/events HTTP API: http://${HOSTNAME}:${portStr}
Terminal: ws://localhost:${portStr}/api/terminal/ws WebSocket: ws://${HOSTNAME}:${portStr}/api/events
Health: http://localhost:${portStr}/api/health Terminal: ws://${HOSTNAME}:${portStr}/api/terminal/ws
║ Health: http://${HOSTNAME}:${portStr}/api/health ║
║ Terminal: ${terminalStatus.padEnd(37)} ║ Terminal: ${terminalStatus.padEnd(37)}
╚═══════════════════════════════════════════════════════╝ ╚═══════════════════════════════════════════════════════╝
`); `);
@@ -552,7 +639,7 @@ const startServer = (port: number) => {
server.on('error', (error: NodeJS.ErrnoException) => { server.on('error', (error: NodeJS.ErrnoException) => {
if (error.code === 'EADDRINUSE') { if (error.code === 'EADDRINUSE') {
console.error(` logger.error(`
╔═══════════════════════════════════════════════════════╗ ╔═══════════════════════════════════════════════════════╗
║ ❌ ERROR: Port ${port} is already in use ║ ║ ❌ ERROR: Port ${port} is already in use ║
╠═══════════════════════════════════════════════════════╣ ╠═══════════════════════════════════════════════════════╣
@@ -572,29 +659,49 @@ const startServer = (port: number) => {
`); `);
process.exit(1); process.exit(1);
} else { } else {
console.error('[Server] Error starting server:', error); logger.error('Error starting server:', error);
process.exit(1); process.exit(1);
} }
}); });
}; };
startServer(PORT); startServer(PORT, HOST);
// Global error handlers to prevent crashes from uncaught errors
process.on('unhandledRejection', (reason: unknown, _promise: Promise<unknown>) => {
logger.error('Unhandled Promise Rejection:', {
reason: reason instanceof Error ? reason.message : String(reason),
stack: reason instanceof Error ? reason.stack : undefined,
});
// Don't exit - log the error and continue running
// This prevents the server from crashing due to unhandled rejections
});
process.on('uncaughtException', (error: Error) => {
logger.error('Uncaught Exception:', {
message: error.message,
stack: error.stack,
});
// Exit on uncaught exceptions to prevent undefined behavior
// The process is in an unknown state after an uncaught exception
process.exit(1);
});
// Graceful shutdown // Graceful shutdown
process.on('SIGTERM', () => { process.on('SIGTERM', () => {
console.log('SIGTERM received, shutting down...'); logger.info('SIGTERM received, shutting down...');
terminalService.cleanup(); terminalService.cleanup();
server.close(() => { server.close(() => {
console.log('Server closed'); logger.info('Server closed');
process.exit(0); process.exit(0);
}); });
}); });
process.on('SIGINT', () => { process.on('SIGINT', () => {
console.log('SIGINT received, shutting down...'); logger.info('SIGINT received, shutting down...');
terminalService.cleanup(); terminalService.cleanup();
server.close(() => { server.close(() => {
console.log('Server closed'); logger.info('Server closed');
process.exit(0); process.exit(0);
}); });
}); });
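For reference, a minimal client sketch for the events WebSocket (a sketch only: the host and port are assumptions, authentication is handled by authenticateWebSocket above and its expected credentials are not shown in this hunk, and the { type, payload } message shape comes from the forwarding code):
import WebSocket from 'ws';
// Adjust host/port to match HOST/PORT; this assumes the connection is authorized.
const ws = new WebSocket('ws://localhost:3008/api/events');
ws.on('message', (data) => {
  const { type, payload } = JSON.parse(data.toString());
  console.log('event:', type, payload ? Object.keys(payload) : []);
});
ws.on('close', () => console.log('events socket closed'));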

View File

@@ -0,0 +1,257 @@
/**
* Agent Discovery - Scans filesystem for AGENT.md files
*
* Discovers agents from:
* - ~/.claude/agents/ (user-level, global)
* - .claude/agents/ (project-level)
*
* Similar to Skills, but for custom subagents defined in AGENT.md files.
*/
import path from 'path';
import os from 'os';
import { createLogger } from '@automaker/utils';
import { secureFs, systemPaths } from '@automaker/platform';
import type { AgentDefinition } from '@automaker/types';
const logger = createLogger('AgentDiscovery');
export interface FilesystemAgent {
name: string; // Directory name (e.g., 'code-reviewer')
definition: AgentDefinition;
source: 'user' | 'project';
filePath: string; // Full path to AGENT.md
}
/**
* Parse agent content string into AgentDefinition
* Format:
* ---
* name: agent-name # Optional
* description: When to use this agent
* tools: tool1, tool2, tool3 # Optional (comma or space separated list)
* model: sonnet # Optional: sonnet, opus, haiku
* ---
* System prompt content here...
*/
function parseAgentContent(content: string, filePath: string): AgentDefinition | null {
// Extract frontmatter
const frontmatterMatch = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
if (!frontmatterMatch) {
logger.warn(`Invalid agent file format (missing frontmatter): ${filePath}`);
return null;
}
const [, frontmatter, prompt] = frontmatterMatch;
// Parse description (required)
const description = frontmatter.match(/description:\s*(.+)/)?.[1]?.trim();
if (!description) {
logger.warn(`Missing description in agent file: ${filePath}`);
return null;
}
// Parse tools (optional) - supports both comma-separated and space-separated
const toolsMatch = frontmatter.match(/tools:\s*(.+)/);
const tools = toolsMatch
? toolsMatch[1]
.split(/[,\s]+/) // Split by comma or whitespace
.map((t) => t.trim())
.filter((t) => t && t !== '')
: undefined;
// Parse model (optional) - validate against allowed values
const modelMatch = frontmatter.match(/model:\s*(\w+)/);
const modelValue = modelMatch?.[1]?.trim();
const validModels = ['sonnet', 'opus', 'haiku', 'inherit'] as const;
const model =
modelValue && validModels.includes(modelValue as (typeof validModels)[number])
? (modelValue as 'sonnet' | 'opus' | 'haiku' | 'inherit')
: undefined;
if (modelValue && !model) {
logger.warn(
`Invalid model "${modelValue}" in agent file: ${filePath}. Expected one of: ${validModels.join(', ')}`
);
}
return {
description,
prompt: prompt.trim(),
tools,
model,
};
}
/**
* Directory entry with type information
*/
interface DirEntry {
name: string;
isFile: boolean;
isDirectory: boolean;
}
/**
* Filesystem adapter interface for abstracting systemPaths vs secureFs
*/
interface FsAdapter {
exists: (filePath: string) => Promise<boolean>;
readdir: (dirPath: string) => Promise<DirEntry[]>;
readFile: (filePath: string) => Promise<string>;
}
/**
* Create a filesystem adapter for system paths (user directory)
*/
function createSystemPathAdapter(): FsAdapter {
return {
exists: (filePath) => Promise.resolve(systemPaths.systemPathExists(filePath)),
readdir: async (dirPath) => {
const entryNames = await systemPaths.systemPathReaddir(dirPath);
const entries: DirEntry[] = [];
for (const name of entryNames) {
const stat = await systemPaths.systemPathStat(path.join(dirPath, name));
entries.push({
name,
isFile: stat.isFile(),
isDirectory: stat.isDirectory(),
});
}
return entries;
},
readFile: (filePath) => systemPaths.systemPathReadFile(filePath, 'utf-8') as Promise<string>,
};
}
/**
* Create a filesystem adapter for project paths (secureFs)
*/
function createSecureFsAdapter(): FsAdapter {
return {
exists: (filePath) =>
secureFs
.access(filePath)
.then(() => true)
.catch(() => false),
readdir: async (dirPath) => {
const entries = await secureFs.readdir(dirPath, { withFileTypes: true });
return entries.map((entry) => ({
name: entry.name,
isFile: entry.isFile(),
isDirectory: entry.isDirectory(),
}));
},
readFile: (filePath) => secureFs.readFile(filePath, 'utf-8') as Promise<string>,
};
}
/**
* Parse agent file using the provided filesystem adapter
*/
async function parseAgentFileWithAdapter(
filePath: string,
fsAdapter: FsAdapter
): Promise<AgentDefinition | null> {
try {
const content = await fsAdapter.readFile(filePath);
return parseAgentContent(content, filePath);
} catch (error) {
logger.error(`Failed to parse agent file: ${filePath}`, error);
return null;
}
}
/**
* Scan a directory for agent .md files
* Agents can be in two formats:
* 1. Flat: agent-name.md (file directly in agents/)
* 2. Subdirectory: agent-name/AGENT.md (folder + file, similar to Skills)
*/
async function scanAgentsDirectory(
baseDir: string,
source: 'user' | 'project'
): Promise<FilesystemAgent[]> {
const agents: FilesystemAgent[] = [];
const fsAdapter = source === 'user' ? createSystemPathAdapter() : createSecureFsAdapter();
try {
// Check if directory exists
const exists = await fsAdapter.exists(baseDir);
if (!exists) {
logger.debug(`Directory does not exist: ${baseDir}`);
return agents;
}
// Read all entries in the directory
const entries = await fsAdapter.readdir(baseDir);
for (const entry of entries) {
// Check for flat .md file format (agent-name.md)
if (entry.isFile && entry.name.endsWith('.md')) {
const agentName = entry.name.slice(0, -3); // Remove .md extension
const agentFilePath = path.join(baseDir, entry.name);
const definition = await parseAgentFileWithAdapter(agentFilePath, fsAdapter);
if (definition) {
agents.push({
name: agentName,
definition,
source,
filePath: agentFilePath,
});
logger.debug(`Discovered ${source} agent (flat): ${agentName}`);
}
}
// Check for subdirectory format (agent-name/AGENT.md)
else if (entry.isDirectory) {
const agentFilePath = path.join(baseDir, entry.name, 'AGENT.md');
const agentFileExists = await fsAdapter.exists(agentFilePath);
if (agentFileExists) {
const definition = await parseAgentFileWithAdapter(agentFilePath, fsAdapter);
if (definition) {
agents.push({
name: entry.name,
definition,
source,
filePath: agentFilePath,
});
logger.debug(`Discovered ${source} agent (subdirectory): ${entry.name}`);
}
}
}
}
} catch (error) {
logger.error(`Failed to scan agents directory: ${baseDir}`, error);
}
return agents;
}
/**
* Discover all filesystem-based agents from user and project sources
*/
export async function discoverFilesystemAgents(
projectPath?: string,
sources: Array<'user' | 'project'> = ['user', 'project']
): Promise<FilesystemAgent[]> {
const agents: FilesystemAgent[] = [];
// Discover user-level agents from ~/.claude/agents/
if (sources.includes('user')) {
const userAgentsDir = path.join(os.homedir(), '.claude', 'agents');
const userAgents = await scanAgentsDirectory(userAgentsDir, 'user');
agents.push(...userAgents);
logger.info(`Discovered ${userAgents.length} user-level agents from ${userAgentsDir}`);
}
// Discover project-level agents from .claude/agents/
if (sources.includes('project') && projectPath) {
const projectAgentsDir = path.join(projectPath, '.claude', 'agents');
const projectAgents = await scanAgentsDirectory(projectAgentsDir, 'project');
agents.push(...projectAgents);
logger.info(`Discovered ${projectAgents.length} project-level agents from ${projectAgentsDir}`);
}
return agents;
}
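As a rough usage sketch (the agent name, file contents, and import path below are hypothetical; the frontmatter fields and the ~/.claude/agents and .claude/agents locations come from the code above):
// Hypothetical .claude/agents/code-reviewer/AGENT.md
// ---
// description: Reviews diffs for correctness and style issues
// tools: read, grep
// model: sonnet
// ---
// You are a meticulous code reviewer. Focus on correctness first, style second.
import { discoverFilesystemAgents } from './agent-discovery.js';
const agents = await discoverFilesystemAgents('/path/to/project');
for (const agent of agents) {
  console.log(`${agent.source}: ${agent.name} (model: ${agent.definition.model ?? 'default'})`);
}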

View File

@@ -11,8 +11,12 @@ export { specOutputSchema } from '@automaker/types';
/** /**
* Escape special XML characters * Escape special XML characters
* Handles undefined/null values by converting them to empty strings
*/ */
function escapeXml(str: string): string { export function escapeXml(str: string | undefined | null): string {
if (str == null) {
return '';
}
return str return str
.replace(/&/g, '&amp;') .replace(/&/g, '&amp;')
.replace(/</g, '&lt;') .replace(/</g, '&lt;')
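A quick behavioural sketch of the change (only the null guard and the first two character replacements are visible in this hunk; the remaining escapes are elided):
escapeXml(undefined); // '' instead of a runtime TypeError on .replace
escapeXml('<a & b>'); // the '&' and '<' shown here become '&amp;' and '&lt;'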

View File

@@ -0,0 +1,263 @@
/**
* Secure authentication utilities that avoid environment variable race conditions
*/
import { spawn } from 'child_process';
import { createLogger } from '@automaker/utils';
const logger = createLogger('AuthUtils');
export interface SecureAuthEnv {
[key: string]: string | undefined;
}
export interface AuthValidationResult {
isValid: boolean;
error?: string;
normalizedKey?: string;
}
/**
* Validates API key format without modifying process.env
*/
export function validateApiKey(
key: string,
provider: 'anthropic' | 'openai' | 'cursor'
): AuthValidationResult {
if (!key || typeof key !== 'string' || key.trim().length === 0) {
return { isValid: false, error: 'API key is required' };
}
const trimmedKey = key.trim();
switch (provider) {
case 'anthropic':
if (!trimmedKey.startsWith('sk-ant-')) {
return {
isValid: false,
error: 'Invalid Anthropic API key format. Should start with "sk-ant-"',
};
}
if (trimmedKey.length < 20) {
return { isValid: false, error: 'Anthropic API key too short' };
}
break;
case 'openai':
if (!trimmedKey.startsWith('sk-')) {
return { isValid: false, error: 'Invalid OpenAI API key format. Should start with "sk-"' };
}
if (trimmedKey.length < 20) {
return { isValid: false, error: 'OpenAI API key too short' };
}
break;
case 'cursor':
// Cursor API keys might have different format
if (trimmedKey.length < 10) {
return { isValid: false, error: 'Cursor API key too short' };
}
break;
}
return { isValid: true, normalizedKey: trimmedKey };
}
/**
* Creates a secure environment object for authentication testing
* without modifying the global process.env
*/
export function createSecureAuthEnv(
authMethod: 'cli' | 'api_key',
apiKey?: string,
provider: 'anthropic' | 'openai' | 'cursor' = 'anthropic'
): SecureAuthEnv {
const env: SecureAuthEnv = { ...process.env };
if (authMethod === 'cli') {
// For CLI auth, remove the API key to force CLI authentication
const envKey = provider === 'openai' ? 'OPENAI_API_KEY' : 'ANTHROPIC_API_KEY';
delete env[envKey];
} else if (authMethod === 'api_key' && apiKey) {
// For API key auth, validate and set the provided key
const validation = validateApiKey(apiKey, provider);
if (!validation.isValid) {
throw new Error(validation.error);
}
const envKey = provider === 'openai' ? 'OPENAI_API_KEY' : 'ANTHROPIC_API_KEY';
env[envKey] = validation.normalizedKey;
}
return env;
}
/**
* Creates a temporary environment override for the current process
* WARNING: This should only be used in isolated contexts and immediately cleaned up
*/
export function createTempEnvOverride(authEnv: SecureAuthEnv): () => void {
const originalEnv = { ...process.env };
// Apply the auth environment
Object.assign(process.env, authEnv);
// Return cleanup function
return () => {
// Restore original environment
Object.keys(process.env).forEach((key) => {
if (!(key in originalEnv)) {
delete process.env[key];
}
});
Object.assign(process.env, originalEnv);
};
}
/**
* Spawns a process with secure environment isolation
*/
export function spawnSecureAuth(
command: string,
args: string[],
authEnv: SecureAuthEnv,
options: {
cwd?: string;
timeout?: number;
} = {}
): Promise<{ stdout: string; stderr: string; exitCode: number | null }> {
return new Promise((resolve, reject) => {
const { cwd = process.cwd(), timeout = 30000 } = options;
logger.debug(`Spawning secure auth process: ${command} ${args.join(' ')}`);
const child = spawn(command, args, {
cwd,
env: authEnv,
stdio: 'pipe',
shell: false,
});
let stdout = '';
let stderr = '';
let isResolved = false;
const timeoutId = setTimeout(() => {
if (!isResolved) {
child.kill('SIGTERM');
isResolved = true;
reject(new Error('Authentication process timed out'));
}
}, timeout);
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
clearTimeout(timeoutId);
if (!isResolved) {
isResolved = true;
resolve({ stdout, stderr, exitCode: code });
}
});
child.on('error', (error) => {
clearTimeout(timeoutId);
if (!isResolved) {
isResolved = true;
reject(error);
}
});
});
}
/**
* Safely extracts environment variable without race conditions
*/
export function safeGetEnv(key: string): string | undefined {
return process.env[key];
}
/**
* Checks if an environment variable would be modified without actually modifying it
*/
export function wouldModifyEnv(key: string, newValue: string): boolean {
const currentValue = safeGetEnv(key);
return currentValue !== newValue;
}
/**
* Secure auth session management
*/
export class AuthSessionManager {
private static activeSessions = new Map<string, SecureAuthEnv>();
static createSession(
sessionId: string,
authMethod: 'cli' | 'api_key',
apiKey?: string,
provider: 'anthropic' | 'openai' | 'cursor' = 'anthropic'
): SecureAuthEnv {
const env = createSecureAuthEnv(authMethod, apiKey, provider);
this.activeSessions.set(sessionId, env);
return env;
}
static getSession(sessionId: string): SecureAuthEnv | undefined {
return this.activeSessions.get(sessionId);
}
static destroySession(sessionId: string): void {
this.activeSessions.delete(sessionId);
}
static cleanup(): void {
this.activeSessions.clear();
}
}
/**
* Rate limiting for auth attempts to prevent abuse
*/
export class AuthRateLimiter {
private attempts = new Map<string, { count: number; lastAttempt: number }>();
constructor(
private maxAttempts = 5,
private windowMs = 60000
) {}
canAttempt(identifier: string): boolean {
const now = Date.now();
const record = this.attempts.get(identifier);
if (!record || now - record.lastAttempt > this.windowMs) {
this.attempts.set(identifier, { count: 1, lastAttempt: now });
return true;
}
if (record.count >= this.maxAttempts) {
return false;
}
record.count++;
record.lastAttempt = now;
return true;
}
getRemainingAttempts(identifier: string): number {
const record = this.attempts.get(identifier);
if (!record) return this.maxAttempts;
return Math.max(0, this.maxAttempts - record.count);
}
getResetTime(identifier: string): Date | null {
const record = this.attempts.get(identifier);
if (!record) return null;
return new Date(record.lastAttempt + this.windowMs);
}
}
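A rough usage sketch for these utilities (the import path, CLI command, and arguments are placeholders; the function signatures come from the code above):
import {
  AuthRateLimiter,
  createSecureAuthEnv,
  spawnSecureAuth,
  validateApiKey,
} from './auth-utils.js';
const limiter = new AuthRateLimiter(5, 60_000);
export async function testAnthropicKey(apiKey: string): Promise<boolean> {
  if (!limiter.canAttempt('anthropic')) return false;
  const validation = validateApiKey(apiKey, 'anthropic');
  if (!validation.isValid) return false;
  // The key is applied to the child process environment only, not to process.env.
  const env = createSecureAuthEnv('api_key', apiKey, 'anthropic');
  const result = await spawnSecureAuth('claude', ['--version'], env, { timeout: 10_000 });
  return result.exitCode === 0;
}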

View File

@@ -12,6 +12,9 @@ import type { Request, Response, NextFunction } from 'express';
import crypto from 'crypto'; import crypto from 'crypto';
import path from 'path'; import path from 'path';
import * as secureFs from './secure-fs.js'; import * as secureFs from './secure-fs.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Auth');
const DATA_DIR = process.env.DATA_DIR || './data'; const DATA_DIR = process.env.DATA_DIR || './data';
const API_KEY_FILE = path.join(DATA_DIR, '.api-key'); const API_KEY_FILE = path.join(DATA_DIR, '.api-key');
@@ -61,11 +64,11 @@ function loadSessions(): void {
} }
if (loadedCount > 0 || expiredCount > 0) { if (loadedCount > 0 || expiredCount > 0) {
console.log(`[Auth] Loaded ${loadedCount} sessions (${expiredCount} expired)`); logger.info(`Loaded ${loadedCount} sessions (${expiredCount} expired)`);
} }
} }
} catch (error) { } catch (error) {
console.warn('[Auth] Error loading sessions:', error); logger.warn('Error loading sessions:', error);
} }
} }
@@ -81,7 +84,7 @@ async function saveSessions(): Promise<void> {
mode: 0o600, mode: 0o600,
}); });
} catch (error) { } catch (error) {
console.error('[Auth] Failed to save sessions:', error); logger.error('Failed to save sessions:', error);
} }
} }
@@ -95,7 +98,7 @@ loadSessions();
function ensureApiKey(): string { function ensureApiKey(): string {
// First check environment variable (Electron passes it this way) // First check environment variable (Electron passes it this way)
if (process.env.AUTOMAKER_API_KEY) { if (process.env.AUTOMAKER_API_KEY) {
console.log('[Auth] Using API key from environment variable'); logger.info('Using API key from environment variable');
return process.env.AUTOMAKER_API_KEY; return process.env.AUTOMAKER_API_KEY;
} }
@@ -104,12 +107,12 @@ function ensureApiKey(): string {
if (secureFs.existsSync(API_KEY_FILE)) { if (secureFs.existsSync(API_KEY_FILE)) {
const key = (secureFs.readFileSync(API_KEY_FILE, 'utf-8') as string).trim(); const key = (secureFs.readFileSync(API_KEY_FILE, 'utf-8') as string).trim();
if (key) { if (key) {
console.log('[Auth] Loaded API key from file'); logger.info('Loaded API key from file');
return key; return key;
} }
} }
} catch (error) { } catch (error) {
console.warn('[Auth] Error reading API key file:', error); logger.warn('Error reading API key file:', error);
} }
// Generate new key // Generate new key
@@ -117,9 +120,9 @@ function ensureApiKey(): string {
try { try {
secureFs.mkdirSync(path.dirname(API_KEY_FILE), { recursive: true }); secureFs.mkdirSync(path.dirname(API_KEY_FILE), { recursive: true });
secureFs.writeFileSync(API_KEY_FILE, newKey, { encoding: 'utf-8', mode: 0o600 }); secureFs.writeFileSync(API_KEY_FILE, newKey, { encoding: 'utf-8', mode: 0o600 });
console.log('[Auth] Generated new API key'); logger.info('Generated new API key');
} catch (error) { } catch (error) {
console.error('[Auth] Failed to save API key:', error); logger.error('Failed to save API key:', error);
} }
return newKey; return newKey;
} }
@@ -129,7 +132,7 @@ const API_KEY = ensureApiKey();
// Print API key to console for web mode users (unless suppressed for production logging) // Print API key to console for web mode users (unless suppressed for production logging)
if (process.env.AUTOMAKER_HIDE_API_KEY !== 'true') { if (process.env.AUTOMAKER_HIDE_API_KEY !== 'true') {
console.log(` logger.info(`
╔═══════════════════════════════════════════════════════════════════════╗ ╔═══════════════════════════════════════════════════════════════════════╗
║ 🔐 API Key for Web Mode Authentication ║ ║ 🔐 API Key for Web Mode Authentication ║
╠═══════════════════════════════════════════════════════════════════════╣ ╠═══════════════════════════════════════════════════════════════════════╣
@@ -139,10 +142,12 @@ if (process.env.AUTOMAKER_HIDE_API_KEY !== 'true') {
${API_KEY} ${API_KEY}
║ ║ ║ ║
║ In Electron mode, authentication is handled automatically. ║ ║ In Electron mode, authentication is handled automatically. ║
║ ║
║ 💡 Tip: Set AUTOMAKER_API_KEY env var to use a fixed key for dev ║
╚═══════════════════════════════════════════════════════════════════════╝ ╚═══════════════════════════════════════════════════════════════════════╝
`); `);
} else { } else {
console.log('[Auth] API key banner hidden (AUTOMAKER_HIDE_API_KEY=true)'); logger.info('API key banner hidden (AUTOMAKER_HIDE_API_KEY=true)');
} }
/** /**
@@ -177,7 +182,7 @@ export function validateSession(token: string): boolean {
if (Date.now() > session.expiresAt) { if (Date.now() > session.expiresAt) {
validSessions.delete(token); validSessions.delete(token);
// Fire-and-forget: persist removal asynchronously // Fire-and-forget: persist removal asynchronously
saveSessions().catch((err) => console.error('[Auth] Error saving sessions:', err)); saveSessions().catch((err) => logger.error('Error saving sessions:', err));
return false; return false;
} }
@@ -259,7 +264,7 @@ export function getSessionCookieOptions(): {
return { return {
httpOnly: true, // JavaScript cannot access this cookie httpOnly: true, // JavaScript cannot access this cookie
secure: process.env.NODE_ENV === 'production', // HTTPS only in production secure: process.env.NODE_ENV === 'production', // HTTPS only in production
sameSite: 'strict', // Only sent for same-site requests (CSRF protection) sameSite: 'lax', // Sent for same-site requests and top-level navigations, but not cross-origin fetch/XHR
maxAge: SESSION_MAX_AGE_MS, maxAge: SESSION_MAX_AGE_MS,
path: '/', path: '/',
}; };

View File

@@ -0,0 +1,518 @@
/**
* Unified CLI Detection Framework
*
* Provides consistent CLI detection and management across all providers
*/
import { spawn, execSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { createLogger } from '@automaker/utils';
const logger = createLogger('CliDetection');
export interface CliInfo {
name: string;
command: string;
version?: string;
path?: string;
installed: boolean;
authenticated: boolean;
authMethod: 'cli' | 'api_key' | 'none';
platform?: string;
architectures?: string[];
}
export interface CliDetectionOptions {
timeout?: number;
includeWsl?: boolean;
wslDistribution?: string;
}
export interface CliDetectionResult {
cli: CliInfo;
detected: boolean;
issues: string[];
}
export interface UnifiedCliDetection {
claude?: CliDetectionResult;
codex?: CliDetectionResult;
cursor?: CliDetectionResult;
coderabbit?: CliDetectionResult;
}
/**
* CLI Configuration for different providers
*/
const CLI_CONFIGS = {
claude: {
name: 'Claude CLI',
commands: ['claude'],
versionArgs: ['--version'],
installCommands: {
darwin: 'brew install anthropics/claude/claude',
linux: 'curl -fsSL https://claude.ai/install.sh | sh',
win32: 'iwr https://claude.ai/install.ps1 -UseBasicParsing | iex',
},
},
codex: {
name: 'Codex CLI',
commands: ['codex', 'openai'],
versionArgs: ['--version'],
installCommands: {
darwin: 'npm install -g @openai/codex-cli',
linux: 'npm install -g @openai/codex-cli',
win32: 'npm install -g @openai/codex-cli',
},
},
cursor: {
name: 'Cursor CLI',
commands: ['cursor-agent', 'cursor'],
versionArgs: ['--version'],
installCommands: {
darwin: 'brew install cursor/cursor/cursor-agent',
linux: 'curl -fsSL https://cursor.sh/install.sh | sh',
win32: 'iwr https://cursor.sh/install.ps1 -UseBasicParsing | iex',
},
},
coderabbit: {
name: 'CodeRabbit CLI',
commands: ['coderabbit', 'cr'],
versionArgs: ['--version'],
installCommands: {
darwin: 'npm install -g coderabbit',
linux: 'npm install -g coderabbit',
win32: 'npm install -g coderabbit',
},
},
} as const;
/**
* Detect if a CLI is installed and available
*/
export async function detectCli(
provider: keyof typeof CLI_CONFIGS,
options: CliDetectionOptions = {}
): Promise<CliDetectionResult> {
const config = CLI_CONFIGS[provider];
const { timeout = 5000, includeWsl = false, wslDistribution } = options;
const issues: string[] = [];
const cliInfo: CliInfo = {
name: config.name,
command: '',
installed: false,
authenticated: false,
authMethod: 'none',
};
try {
// Find the command in PATH
const command = await findCommand([...config.commands]);
if (command) {
cliInfo.command = command;
}
if (!cliInfo.command) {
issues.push(`${config.name} not found in PATH`);
return { cli: cliInfo, detected: false, issues };
}
cliInfo.path = cliInfo.command;
cliInfo.installed = true;
// Get version
try {
cliInfo.version = await getCliVersion(cliInfo.command, [...config.versionArgs], timeout);
} catch (error) {
issues.push(`Failed to get ${config.name} version: ${error}`);
}
// Check authentication
cliInfo.authMethod = await checkCliAuth(provider, cliInfo.command);
cliInfo.authenticated = cliInfo.authMethod !== 'none';
return { cli: cliInfo, detected: true, issues };
} catch (error) {
issues.push(`Error detecting ${config.name}: ${error}`);
return { cli: cliInfo, detected: false, issues };
}
}
/**
* Detect all CLIs in the system
*/
export async function detectAllCLis(
options: CliDetectionOptions = {}
): Promise<UnifiedCliDetection> {
const results: UnifiedCliDetection = {};
// Detect all providers in parallel
const providers = Object.keys(CLI_CONFIGS) as Array<keyof typeof CLI_CONFIGS>;
const detectionPromises = providers.map(async (provider) => {
const result = await detectCli(provider, options);
return { provider, result };
});
const detections = await Promise.all(detectionPromises);
for (const { provider, result } of detections) {
results[provider] = result;
}
return results;
}
/**
* Find the first available command from a list of alternatives
*/
export async function findCommand(commands: string[]): Promise<string | null> {
for (const command of commands) {
try {
const whichCommand = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${whichCommand} ${command}`, {
encoding: 'utf8',
timeout: 2000,
}).trim();
if (result) {
return result.split('\n')[0]; // Take first result on Windows
}
} catch {
// Command not found, try next
}
}
return null;
}
/**
* Get CLI version
*/
export async function getCliVersion(
command: string,
args: string[],
timeout: number = 5000
): Promise<string> {
return new Promise((resolve, reject) => {
const child = spawn(command, args, {
stdio: 'pipe',
timeout,
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
if (code === 0 && stdout) {
resolve(stdout.trim());
} else if (stderr) {
reject(stderr.trim());
} else {
reject(`Command exited with code ${code}`);
}
});
child.on('error', reject);
});
}
/**
* Check authentication status for a CLI
*/
export async function checkCliAuth(
provider: keyof typeof CLI_CONFIGS,
command: string
): Promise<'cli' | 'api_key' | 'none'> {
try {
switch (provider) {
case 'claude':
return await checkClaudeAuth(command);
case 'codex':
return await checkCodexAuth(command);
case 'cursor':
return await checkCursorAuth(command);
case 'coderabbit':
return await checkCodeRabbitAuth(command);
default:
return 'none';
}
} catch {
return 'none';
}
}
/**
* Check Claude CLI authentication
*/
async function checkClaudeAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
try {
// Check for environment variable
if (process.env.ANTHROPIC_API_KEY) {
return 'api_key';
}
// Try running a simple command to check CLI auth
const result = await getCliVersion(command, ['--version'], 3000);
if (result) {
return 'cli'; // If version works, assume CLI is authenticated
}
} catch {
// Version command might work even without auth, so we need a better check
}
// Try a more specific auth check
return new Promise((resolve) => {
const child = spawn(command, ['whoami'], {
stdio: 'pipe',
timeout: 3000,
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
if (code === 0 && stdout && !stderr.includes('not authenticated')) {
resolve('cli');
} else {
resolve('none');
}
});
child.on('error', () => {
resolve('none');
});
});
}
/**
* Check Codex CLI authentication
*/
async function checkCodexAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
// Check for environment variable
if (process.env.OPENAI_API_KEY) {
return 'api_key';
}
try {
// Try a simple auth check
const result = await getCliVersion(command, ['--version'], 3000);
if (result) {
return 'cli';
}
} catch {
// Version check failed
}
return 'none';
}
/**
* Check Cursor CLI authentication
*/
async function checkCursorAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
// Check for environment variable
if (process.env.CURSOR_API_KEY) {
return 'api_key';
}
// Check for credentials files
const credentialPaths = [
path.join(os.homedir(), '.cursor', 'credentials.json'),
path.join(os.homedir(), '.config', 'cursor', 'credentials.json'),
path.join(os.homedir(), '.cursor', 'auth.json'),
path.join(os.homedir(), '.config', 'cursor', 'auth.json'),
];
for (const credPath of credentialPaths) {
try {
if (fs.existsSync(credPath)) {
const content = fs.readFileSync(credPath, 'utf8');
const creds = JSON.parse(content);
if (creds.accessToken || creds.token || creds.apiKey) {
return 'cli';
}
}
} catch {
// Invalid credentials file
}
}
// Try a simple command
try {
const result = await getCliVersion(command, ['--version'], 3000);
if (result) {
return 'cli';
}
} catch {
// Version check failed
}
return 'none';
}
/**
* Check CodeRabbit CLI authentication
*
* Expected output when authenticated:
* ```
* CodeRabbit CLI Status
* ✅ Authentication: Logged in
* User Information:
* 👤 Name: ...
* ```
*/
async function checkCodeRabbitAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
// Check for environment variable
if (process.env.CODERABBIT_API_KEY) {
return 'api_key';
}
// Try running auth status command
return new Promise((resolve) => {
const child = spawn(command, ['auth', 'status'], {
stdio: 'pipe',
timeout: 10000, // Increased timeout for slower systems
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
const output = stdout + stderr;
// Check for positive authentication indicators in output
const isAuthenticated =
code === 0 &&
(output.includes('Logged in') || output.includes('logged in')) &&
!output.toLowerCase().includes('not logged in') &&
!output.toLowerCase().includes('not authenticated');
if (isAuthenticated) {
resolve('cli');
} else {
resolve('none');
}
});
child.on('error', () => {
resolve('none');
});
});
}
/**
* Get installation instructions for a provider
*/
export function getInstallInstructions(
provider: keyof typeof CLI_CONFIGS,
platform: NodeJS.Platform = process.platform
): string {
const config = CLI_CONFIGS[provider];
const command = config.installCommands[platform as keyof typeof config.installCommands];
if (!command) {
return `No installation instructions available for ${provider} on ${platform}`;
}
return command;
}
/**
* Get platform-specific CLI paths and versions
*/
export function getPlatformCliPaths(provider: keyof typeof CLI_CONFIGS): string[] {
const config = CLI_CONFIGS[provider];
const platform = process.platform;
switch (platform) {
case 'darwin':
return [
`/usr/local/bin/${config.commands[0]}`,
`/opt/homebrew/bin/${config.commands[0]}`,
path.join(os.homedir(), '.local', 'bin', config.commands[0]),
];
case 'linux':
return [
`/usr/bin/${config.commands[0]}`,
`/usr/local/bin/${config.commands[0]}`,
path.join(os.homedir(), '.local', 'bin', config.commands[0]),
path.join(os.homedir(), '.npm', 'global', 'bin', config.commands[0]),
];
case 'win32':
return [
path.join(
os.homedir(),
'AppData',
'Local',
'Programs',
config.commands[0],
`${config.commands[0]}.exe`
),
path.join(process.env.ProgramFiles || '', config.commands[0], `${config.commands[0]}.exe`),
path.join(
process.env.ProgramFiles || '',
config.commands[0],
'bin',
`${config.commands[0]}.exe`
),
];
default:
return [];
}
}
/**
* Validate CLI installation
*/
export function validateCliInstallation(cliInfo: CliInfo): {
valid: boolean;
issues: string[];
} {
const issues: string[] = [];
if (!cliInfo.installed) {
issues.push('CLI is not installed');
}
if (cliInfo.installed && !cliInfo.version) {
issues.push('Could not determine CLI version');
}
if (cliInfo.installed && cliInfo.authMethod === 'none') {
issues.push('CLI is not authenticated');
}
return {
valid: issues.length === 0,
issues,
};
}
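A short usage sketch (the import path is an assumption; the function names and result shapes come from the code above):
import { detectAllCLis, validateCliInstallation } from './cli-detection.js';
const detection = await detectAllCLis({ timeout: 5000 });
for (const [provider, result] of Object.entries(detection)) {
  if (!result) continue;
  const { valid, issues } = validateCliInstallation(result.cli);
  console.log(`${provider}: ${result.detected ? 'found' : 'missing'}`, valid ? 'ok' : issues.join('; '));
}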

View File

@@ -0,0 +1,68 @@
/**
* Shared utility for checking Codex CLI authentication status
*
* Uses 'codex login status' command to verify authentication.
* Never assumes authenticated - only returns true if CLI confirms.
*/
import { spawnProcess } from '@automaker/platform';
import { findCodexCliPath } from '@automaker/platform';
import { createLogger } from '@automaker/utils';
const logger = createLogger('CodexAuth');
const CODEX_COMMAND = 'codex';
const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
export interface CodexAuthCheckResult {
authenticated: boolean;
method: 'api_key_env' | 'cli_authenticated' | 'none';
}
/**
* Check Codex authentication status using 'codex login status' command
*
* @param cliPath Optional CLI path. If not provided, will attempt to find it.
* @returns Authentication status and method
*/
export async function checkCodexAuthentication(
cliPath?: string | null
): Promise<CodexAuthCheckResult> {
const resolvedCliPath = cliPath || (await findCodexCliPath());
const hasApiKey = !!process.env[OPENAI_API_KEY_ENV];
// If CLI is not installed, cannot be authenticated
if (!resolvedCliPath) {
logger.info('CLI not found');
return { authenticated: false, method: 'none' };
}
try {
const result = await spawnProcess({
command: resolvedCliPath || CODEX_COMMAND,
args: ['login', 'status'],
cwd: process.cwd(),
env: {
...process.env,
TERM: 'dumb', // Avoid interactive output
},
});
// Check both stdout and stderr for "logged in" - Codex CLI outputs to stderr
const combinedOutput = (result.stdout + result.stderr).toLowerCase();
const isLoggedIn = combinedOutput.includes('logged in');
if (result.exitCode === 0 && isLoggedIn) {
// Determine auth method based on what we know
const method = hasApiKey ? 'api_key_env' : 'cli_authenticated';
logger.info(`✓ Authenticated (${method})`);
return { authenticated: true, method };
}
logger.info('Not authenticated');
return { authenticated: false, method: 'none' };
} catch (error) {
logger.error('Failed to check authentication:', error);
return { authenticated: false, method: 'none' };
}
}
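A short usage sketch for the checker above (illustrative; the relative import path is an assumption):
import { checkCodexAuthentication } from './check-codex-auth.js';
async function ensureCodexReady(): Promise<void> {
  const { authenticated, method } = await checkCodexAuthentication();
  if (!authenticated) {
    // Never assumes authenticated: fail fast and tell the user how to fix it.
    throw new Error('Codex CLI is not authenticated. Authenticate the Codex CLI or set OPENAI_API_KEY.');
  }
  console.log(`Codex ready (${method})`); // 'api_key_env' or 'cli_authenticated'
}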

View File

@@ -0,0 +1,414 @@
/**
* Unified Error Handling System for CLI Providers
*
* Provides consistent error classification, user-friendly messages, and debugging support
* across all AI providers (Claude, Codex, Cursor)
*/
import { createLogger } from '@automaker/utils';
const logger = createLogger('ErrorHandler');
export enum ErrorType {
AUTHENTICATION = 'authentication',
BILLING = 'billing',
RATE_LIMIT = 'rate_limit',
NETWORK = 'network',
TIMEOUT = 'timeout',
VALIDATION = 'validation',
PERMISSION = 'permission',
CLI_NOT_FOUND = 'cli_not_found',
CLI_NOT_INSTALLED = 'cli_not_installed',
MODEL_NOT_SUPPORTED = 'model_not_supported',
INVALID_REQUEST = 'invalid_request',
SERVER_ERROR = 'server_error',
UNKNOWN = 'unknown',
}
export enum ErrorSeverity {
LOW = 'low',
MEDIUM = 'medium',
HIGH = 'high',
CRITICAL = 'critical',
}
export interface ErrorClassification {
type: ErrorType;
severity: ErrorSeverity;
userMessage: string;
technicalMessage: string;
suggestedAction?: string;
retryable: boolean;
provider?: string;
context?: Record<string, any>;
}
export interface ErrorPattern {
type: ErrorType;
severity: ErrorSeverity;
patterns: RegExp[];
userMessage: string;
suggestedAction?: string;
retryable: boolean;
}
/**
* Error patterns for different types of errors
*/
const ERROR_PATTERNS: ErrorPattern[] = [
// Authentication errors
{
type: ErrorType.AUTHENTICATION,
severity: ErrorSeverity.HIGH,
patterns: [
/unauthorized/i,
/authentication.*fail/i,
/invalid_api_key/i,
/invalid api key/i,
/not authenticated/i,
/please.*log/i,
/token.*revoked/i,
/oauth.*error/i,
/credentials.*invalid/i,
],
userMessage: 'Authentication failed. Please check your API key or login credentials.',
suggestedAction:
"Verify your API key is correct and hasn't expired, or run the CLI login command.",
retryable: false,
},
// Billing errors
{
type: ErrorType.BILLING,
severity: ErrorSeverity.HIGH,
patterns: [
/credit.*balance.*low/i,
/insufficient.*credit/i,
/billing.*issue/i,
/payment.*required/i,
/usage.*exceeded/i,
/quota.*exceeded/i,
/add.*credit/i,
],
userMessage: 'Account has insufficient credits or billing issues.',
suggestedAction: 'Please add credits to your account or check your billing settings.',
retryable: false,
},
// Rate limit errors
{
type: ErrorType.RATE_LIMIT,
severity: ErrorSeverity.MEDIUM,
patterns: [
/rate.*limit/i,
/too.*many.*request/i,
/limit.*reached/i,
/try.*later/i,
/429/i,
/reset.*time/i,
/upgrade.*plan/i,
],
userMessage: 'Rate limit reached. Please wait before trying again.',
suggestedAction: 'Wait a few minutes before retrying, or consider upgrading your plan.',
retryable: true,
},
// Network errors
{
type: ErrorType.NETWORK,
severity: ErrorSeverity.MEDIUM,
patterns: [/network/i, /connection/i, /dns/i, /timeout/i, /econnrefused/i, /enotfound/i],
userMessage: 'Network connection issue.',
suggestedAction: 'Check your internet connection and try again.',
retryable: true,
},
// Timeout errors
{
type: ErrorType.TIMEOUT,
severity: ErrorSeverity.MEDIUM,
patterns: [/timeout/i, /aborted/i, /time.*out/i],
userMessage: 'Operation timed out.',
suggestedAction: 'Try again with a simpler request or check your connection.',
retryable: true,
},
// Permission errors
{
type: ErrorType.PERMISSION,
severity: ErrorSeverity.HIGH,
patterns: [/permission.*denied/i, /access.*denied/i, /forbidden/i, /403/i, /not.*authorized/i],
userMessage: 'Permission denied.',
suggestedAction: 'Check if you have the required permissions for this operation.',
retryable: false,
},
// CLI not found
{
type: ErrorType.CLI_NOT_FOUND,
severity: ErrorSeverity.HIGH,
patterns: [/command not found/i, /not recognized/i, /not.*installed/i, /ENOENT/i],
userMessage: 'CLI tool not found.',
suggestedAction: "Please install the required CLI tool and ensure it's in your PATH.",
retryable: false,
},
// Model not supported
{
type: ErrorType.MODEL_NOT_SUPPORTED,
severity: ErrorSeverity.HIGH,
patterns: [/model.*not.*support/i, /unknown.*model/i, /invalid.*model/i],
userMessage: 'Model not supported.',
suggestedAction: 'Check available models and use a supported one.',
retryable: false,
},
// Server errors
{
type: ErrorType.SERVER_ERROR,
severity: ErrorSeverity.HIGH,
patterns: [/internal.*server/i, /server.*error/i, /500/i, /502/i, /503/i, /504/i],
userMessage: 'Server error occurred.',
suggestedAction: 'Try again in a few minutes or contact support if the issue persists.',
retryable: true,
},
];
/**
* Classify an error into a specific type with user-friendly message
*/
export function classifyError(
error: unknown,
provider?: string,
context?: Record<string, any>
): ErrorClassification {
const errorText = getErrorText(error);
// Try to match against known patterns
for (const pattern of ERROR_PATTERNS) {
for (const regex of pattern.patterns) {
if (regex.test(errorText)) {
return {
type: pattern.type,
severity: pattern.severity,
userMessage: pattern.userMessage,
technicalMessage: errorText,
suggestedAction: pattern.suggestedAction,
retryable: pattern.retryable,
provider,
context,
};
}
}
}
// Unknown error
return {
type: ErrorType.UNKNOWN,
severity: ErrorSeverity.MEDIUM,
userMessage: 'An unexpected error occurred.',
technicalMessage: errorText,
suggestedAction: 'Please try again or contact support if the issue persists.',
retryable: true,
provider,
context,
};
}
/**
* Get a user-friendly error message
*/
export function getUserFriendlyErrorMessage(error: unknown, provider?: string): string {
const classification = classifyError(error, provider);
let message = classification.userMessage;
if (classification.suggestedAction) {
message += ` ${classification.suggestedAction}`;
}
// Add provider-specific context if available
if (provider) {
message = `[${provider.toUpperCase()}] ${message}`;
}
return message;
}
/**
* Check if an error is retryable
*/
export function isRetryableError(error: unknown): boolean {
const classification = classifyError(error);
return classification.retryable;
}
/**
* Check if an error is authentication-related
*/
export function isAuthenticationError(error: unknown): boolean {
const classification = classifyError(error);
return classification.type === ErrorType.AUTHENTICATION;
}
/**
* Check if an error is billing-related
*/
export function isBillingError(error: unknown): boolean {
const classification = classifyError(error);
return classification.type === ErrorType.BILLING;
}
/**
* Check if an error is rate limit related
*/
export function isRateLimitError(error: unknown): boolean {
const classification = classifyError(error);
return classification.type === ErrorType.RATE_LIMIT;
}
/**
* Get error text from various error types
*/
function getErrorText(error: unknown): string {
if (typeof error === 'string') {
return error;
}
if (error instanceof Error) {
return error.message;
}
if (typeof error === 'object' && error !== null) {
// Handle structured error objects
const errorObj = error as any;
if (errorObj.message) {
return errorObj.message;
}
if (errorObj.error?.message) {
return errorObj.error.message;
}
if (errorObj.error) {
return typeof errorObj.error === 'string' ? errorObj.error : JSON.stringify(errorObj.error);
}
return JSON.stringify(error);
}
return String(error);
}
/**
* Create a standardized error response
*/
export function createErrorResponse(
error: unknown,
provider?: string,
context?: Record<string, any>
): {
success: false;
error: string;
errorType: ErrorType;
severity: ErrorSeverity;
retryable: boolean;
suggestedAction?: string;
} {
const classification = classifyError(error, provider, context);
return {
success: false,
error: classification.userMessage,
errorType: classification.type,
severity: classification.severity,
retryable: classification.retryable,
suggestedAction: classification.suggestedAction,
};
}
/**
* Log error with full context
*/
export function logError(
error: unknown,
provider?: string,
operation?: string,
additionalContext?: Record<string, any>
): void {
const classification = classifyError(error, provider, {
operation,
...additionalContext,
});
logger.error(`Error in ${provider || 'unknown'}${operation ? ` during ${operation}` : ''}`, {
type: classification.type,
severity: classification.severity,
message: classification.userMessage,
technicalMessage: classification.technicalMessage,
retryable: classification.retryable,
suggestedAction: classification.suggestedAction,
context: classification.context,
});
}
/**
* Provider-specific error handlers
*/
export const ProviderErrorHandler = {
claude: {
classify: (error: unknown) => classifyError(error, 'claude'),
getUserMessage: (error: unknown) => getUserFriendlyErrorMessage(error, 'claude'),
isAuth: (error: unknown) => isAuthenticationError(error),
isBilling: (error: unknown) => isBillingError(error),
isRateLimit: (error: unknown) => isRateLimitError(error),
},
codex: {
classify: (error: unknown) => classifyError(error, 'codex'),
getUserMessage: (error: unknown) => getUserFriendlyErrorMessage(error, 'codex'),
isAuth: (error: unknown) => isAuthenticationError(error),
isBilling: (error: unknown) => isBillingError(error),
isRateLimit: (error: unknown) => isRateLimitError(error),
},
cursor: {
classify: (error: unknown) => classifyError(error, 'cursor'),
getUserMessage: (error: unknown) => getUserFriendlyErrorMessage(error, 'cursor'),
isAuth: (error: unknown) => isAuthenticationError(error),
isBilling: (error: unknown) => isBillingError(error),
isRateLimit: (error: unknown) => isRateLimitError(error),
},
};
/**
* Create a retry handler for retryable errors
*/
export function createRetryHandler(maxRetries: number = 3, baseDelay: number = 1000) {
return async function <T>(
operation: () => Promise<T>,
shouldRetry: (error: unknown) => boolean = isRetryableError
): Promise<T> {
let lastError: unknown;
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
return await operation();
} catch (error) {
lastError = error;
if (attempt === maxRetries || !shouldRetry(error)) {
throw error;
}
// Exponential backoff with jitter
const delay = baseDelay * Math.pow(2, attempt) + Math.random() * 1000;
logger.debug(`Retrying operation in ${delay}ms (attempt ${attempt + 1}/${maxRetries})`);
await new Promise((resolve) => setTimeout(resolve, delay));
}
}
throw lastError;
};
}
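A sketch of how the pieces above compose (illustrative; the import path is an assumption): retry transient failures, then convert whatever remains into a user-facing message.
import { createRetryHandler, getUserFriendlyErrorMessage, isRetryableError, logError } from './error-handler.js';
const withRetry = createRetryHandler(3, 1000);
async function runProviderCall<T>(provider: string, call: () => Promise<T>): Promise<T> {
  try {
    // Retries only errors classified as retryable (rate limits, network, timeouts, server errors).
    return await withRetry(call, isRetryableError);
  } catch (error) {
    // Log the full classification, then surface the friendly message to the caller.
    logError(error, provider, 'runProviderCall');
    throw new Error(getUserFriendlyErrorMessage(error, provider));
  }
}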

View File

@@ -3,6 +3,9 @@
*/
import type { EventType, EventCallback } from '@automaker/types';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Events');
// Re-export event types from shared package
export type { EventType, EventCallback };
@@ -21,7 +24,7 @@ export function createEventEmitter(): EventEmitter {
try {
callback(type, payload);
} catch (error) {
console.error('Error in event subscriber:', error);
logger.error('Error in event subscriber:', error);
}
}
},

View File

@@ -0,0 +1,211 @@
/**
* JSON Extraction Utilities
*
* Robust JSON extraction from AI responses that may contain markdown,
* code blocks, or other text mixed with JSON content.
*
* Used by various routes that parse structured output from Cursor or
* Claude responses when structured output is not available.
*/
import { createLogger } from '@automaker/utils';
const logger = createLogger('JsonExtractor');
/**
* Logger interface for optional custom logging
*/
export interface JsonExtractorLogger {
debug: (message: string, ...args: unknown[]) => void;
warn?: (message: string, ...args: unknown[]) => void;
}
/**
* Options for JSON extraction
*/
export interface ExtractJsonOptions {
/** Custom logger (defaults to internal logger) */
logger?: JsonExtractorLogger;
/** Required key that must be present in the extracted JSON */
requiredKey?: string;
/** Whether the required key's value must be an array */
requireArray?: boolean;
}
/**
* Extract JSON from response text using multiple strategies.
*
* Strategies tried in order:
* 1. JSON in ```json code block
* 2. JSON in ``` code block (no language)
* 3. Find JSON object by matching braces (starting with requiredKey if specified)
* 4. Find any JSON object by matching braces
* 5. Parse entire response as JSON
*
* @param responseText - The raw response text that may contain JSON
* @param options - Optional extraction options
* @returns Parsed JSON object or null if extraction fails
*/
export function extractJson<T = Record<string, unknown>>(
responseText: string,
options: ExtractJsonOptions = {}
): T | null {
const log = options.logger || logger;
const requiredKey = options.requiredKey;
const requireArray = options.requireArray ?? false;
/**
* Validate that the result has the required key/structure
*/
const validateResult = (result: unknown): result is T => {
if (!result || typeof result !== 'object') return false;
if (requiredKey) {
const obj = result as Record<string, unknown>;
if (!(requiredKey in obj)) return false;
if (requireArray && !Array.isArray(obj[requiredKey])) return false;
}
return true;
};
/**
* Find matching closing brace by counting brackets
*/
const findMatchingBrace = (text: string, startIdx: number): number => {
let depth = 0;
for (let i = startIdx; i < text.length; i++) {
if (text[i] === '{') depth++;
if (text[i] === '}') {
depth--;
if (depth === 0) {
return i + 1;
}
}
}
return -1;
};
const strategies = [
// Strategy 1: JSON in ```json code block
() => {
const match = responseText.match(/```json\s*([\s\S]*?)```/);
if (match) {
log.debug('Extracting JSON from ```json code block');
return JSON.parse(match[1].trim());
}
return null;
},
// Strategy 2: JSON in ``` code block (no language specified)
() => {
const match = responseText.match(/```\s*([\s\S]*?)```/);
if (match) {
const content = match[1].trim();
// Only try if it looks like JSON (starts with { or [)
if (content.startsWith('{') || content.startsWith('[')) {
log.debug('Extracting JSON from ``` code block');
return JSON.parse(content);
}
}
return null;
},
// Strategy 3: Find JSON object containing the required key (if specified)
() => {
if (!requiredKey) return null;
const searchPattern = `{"${requiredKey}"`;
const startIdx = responseText.indexOf(searchPattern);
if (startIdx === -1) return null;
const endIdx = findMatchingBrace(responseText, startIdx);
if (endIdx > startIdx) {
log.debug(`Extracting JSON with required key "${requiredKey}"`);
return JSON.parse(responseText.slice(startIdx, endIdx));
}
return null;
},
// Strategy 4: Find any JSON object by matching braces
() => {
const startIdx = responseText.indexOf('{');
if (startIdx === -1) return null;
const endIdx = findMatchingBrace(responseText, startIdx);
if (endIdx > startIdx) {
log.debug('Extracting JSON by brace matching');
return JSON.parse(responseText.slice(startIdx, endIdx));
}
return null;
},
// Strategy 5: Find JSON using first { to last } (may be less accurate)
() => {
const firstBrace = responseText.indexOf('{');
const lastBrace = responseText.lastIndexOf('}');
if (firstBrace !== -1 && lastBrace > firstBrace) {
log.debug('Extracting JSON from first { to last }');
return JSON.parse(responseText.slice(firstBrace, lastBrace + 1));
}
return null;
},
// Strategy 6: Try parsing the entire response as JSON
() => {
const trimmed = responseText.trim();
if (trimmed.startsWith('{') || trimmed.startsWith('[')) {
log.debug('Parsing entire response as JSON');
return JSON.parse(trimmed);
}
return null;
},
];
for (const strategy of strategies) {
try {
const result = strategy();
if (validateResult(result)) {
log.debug('Successfully extracted JSON');
return result as T;
}
} catch {
// Strategy failed, try next
}
}
log.debug('Failed to extract JSON from response');
return null;
}
/**
* Extract JSON with a specific required key.
* Convenience wrapper around extractJson.
*
* @param responseText - The raw response text
* @param requiredKey - Key that must be present in the extracted JSON
* @param options - Additional options
* @returns Parsed JSON object or null
*/
export function extractJsonWithKey<T = Record<string, unknown>>(
responseText: string,
requiredKey: string,
options: Omit<ExtractJsonOptions, 'requiredKey'> = {}
): T | null {
return extractJson<T>(responseText, { ...options, requiredKey });
}
/**
* Extract JSON that has a required array property.
* Useful for extracting responses like { "suggestions": [...] }
*
* @param responseText - The raw response text
* @param arrayKey - Key that must contain an array
* @param options - Additional options
* @returns Parsed JSON object or null
*/
export function extractJsonWithArray<T = Record<string, unknown>>(
responseText: string,
arrayKey: string,
options: Omit<ExtractJsonOptions, 'requiredKey' | 'requireArray'> = {}
): T | null {
return extractJson<T>(responseText, { ...options, requiredKey: arrayKey, requireArray: true });
}
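An illustrative round trip for the extractor (the relative import path is an assumption):
import { extractJsonWithArray } from './json-extractor.js';
interface SuggestionsPayload {
  suggestions: string[];
}
const raw = 'Here are some ideas:\n```json\n{ "suggestions": ["Add retry logic", "Cache CLI detection"] }\n```';
// Matches strategy 1 (```json code block); returns null unless `suggestions` exists and is an array.
const parsed = extractJsonWithArray<SuggestionsPayload>(raw, 'suggestions');
console.log(parsed?.suggestions ?? []);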

View File

@@ -0,0 +1,173 @@
/**
* Permission enforcement utilities for Cursor provider
*/
import type { CursorCliConfigFile } from '@automaker/types';
import { createLogger } from '@automaker/utils';
const logger = createLogger('PermissionEnforcer');
export interface PermissionCheckResult {
allowed: boolean;
reason?: string;
}
/**
* Check if a tool call is allowed based on permissions
*/
export function checkToolCallPermission(
toolCall: any,
permissions: CursorCliConfigFile | null
): PermissionCheckResult {
if (!permissions || !permissions.permissions) {
// If no permissions are configured, allow everything (backward compatibility)
return { allowed: true };
}
const { allow = [], deny = [] } = permissions.permissions;
// Check shell tool calls
if (toolCall.shellToolCall?.args?.command) {
const command = toolCall.shellToolCall.args.command;
const toolName = `Shell(${extractCommandName(command)})`;
// Check deny list first (deny takes precedence)
for (const denyRule of deny) {
if (matchesRule(toolName, denyRule)) {
return {
allowed: false,
reason: `Operation blocked by permission rule: ${denyRule}`,
};
}
}
// Then check allow list
for (const allowRule of allow) {
if (matchesRule(toolName, allowRule)) {
return { allowed: true };
}
}
return {
allowed: false,
reason: `Operation not in allow list: ${toolName}`,
};
}
// Check read tool calls
if (toolCall.readToolCall?.args?.path) {
const path = toolCall.readToolCall.args.path;
const toolName = `Read(${path})`;
// Check deny list first
for (const denyRule of deny) {
if (matchesRule(toolName, denyRule)) {
return {
allowed: false,
reason: `Read operation blocked by permission rule: ${denyRule}`,
};
}
}
// Then check allow list
for (const allowRule of allow) {
if (matchesRule(toolName, allowRule)) {
return { allowed: true };
}
}
return {
allowed: false,
reason: `Read operation not in allow list: ${toolName}`,
};
}
// Check write tool calls
if (toolCall.writeToolCall?.args?.path) {
const path = toolCall.writeToolCall.args.path;
const toolName = `Write(${path})`;
// Check deny list first
for (const denyRule of deny) {
if (matchesRule(toolName, denyRule)) {
return {
allowed: false,
reason: `Write operation blocked by permission rule: ${denyRule}`,
};
}
}
// Then check allow list
for (const allowRule of allow) {
if (matchesRule(toolName, allowRule)) {
return { allowed: true };
}
}
return {
allowed: false,
reason: `Write operation not in allow list: ${toolName}`,
};
}
// For other tool types, allow by default for now
return { allowed: true };
}
/**
* Extract the base command name from a shell command
*/
function extractCommandName(command: string): string {
// Remove leading spaces and get the first word
const trimmed = command.trim();
const firstWord = trimmed.split(/\s+/)[0];
return firstWord || 'unknown';
}
/**
* Check if a tool name matches a permission rule
*/
function matchesRule(toolName: string, rule: string): boolean {
// Exact match
if (toolName === rule) {
return true;
}
// Wildcard patterns
if (rule.includes('*')) {
const regex = new RegExp(rule.replace(/\*/g, '.*'));
return regex.test(toolName);
}
// Prefix match for shell commands (e.g., "Shell(git)" matches "Shell(git status)")
if (rule.startsWith('Shell(') && toolName.startsWith('Shell(')) {
const ruleCommand = rule.slice(6, -1); // Remove "Shell(" and ")"
const toolCommand = extractCommandName(toolName.slice(6, -1)); // Remove "Shell(" and ")"
return toolCommand.startsWith(ruleCommand);
}
return false;
}
/**
* Log permission violations
*/
export function logPermissionViolation(toolCall: any, reason: string, sessionId?: string): void {
const sessionIdStr = sessionId ? ` [${sessionId}]` : '';
if (toolCall.shellToolCall?.args?.command) {
logger.warn(
`Permission violation${sessionIdStr}: Shell command blocked - ${toolCall.shellToolCall.args.command} (${reason})`
);
} else if (toolCall.readToolCall?.args?.path) {
logger.warn(
`Permission violation${sessionIdStr}: Read operation blocked - ${toolCall.readToolCall.args.path} (${reason})`
);
} else if (toolCall.writeToolCall?.args?.path) {
logger.warn(
`Permission violation${sessionIdStr}: Write operation blocked - ${toolCall.writeToolCall.args.path} (${reason})`
);
} else {
logger.warn(`Permission violation${sessionIdStr}: Tool call blocked (${reason})`, { toolCall });
}
}
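A gating sketch for the checks above (illustrative; the import path and the exact CursorCliConfigFile shape are assumptions):
import type { CursorCliConfigFile } from '@automaker/types';
import { checkToolCallPermission, logPermissionViolation } from './permission-enforcer.js';
// Shape assumed from the property accesses in checkToolCallPermission.
const permissions = {
  permissions: { allow: ['Shell(git)', 'Read(*)'], deny: ['Shell(rm)'] },
} as CursorCliConfigFile;
const toolCall = { shellToolCall: { args: { command: 'git status' } } };
const check = checkToolCallPermission(toolCall, permissions);
if (!check.allowed) {
  logPermissionViolation(toolCall, check.reason ?? 'blocked', 'session-123');
} else {
  console.log('Tool call allowed; forwarding to the CLI');
}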

View File

@@ -18,9 +18,80 @@
import type { Options } from '@anthropic-ai/claude-agent-sdk';
import path from 'path';
import { resolveModelString } from '@automaker/model-resolver';
import { DEFAULT_MODELS, CLAUDE_MODEL_MAP, type McpServerConfig } from '@automaker/types';
import { createLogger } from '@automaker/utils';
const logger = createLogger('SdkOptions');
import {
DEFAULT_MODELS,
CLAUDE_MODEL_MAP,
type McpServerConfig,
type ThinkingLevel,
getThinkingTokenBudget,
} from '@automaker/types';
import { isPathAllowed, PathNotAllowedError, getAllowedRootDirectory } from '@automaker/platform';
/**
* Result of sandbox compatibility check
*/
export interface SandboxCompatibilityResult {
/** Whether sandbox mode can be enabled for this path */
enabled: boolean;
/** Optional message explaining why sandbox is disabled */
message?: string;
}
/**
* Check if a working directory is compatible with sandbox mode.
* Some paths (like cloud storage mounts) may not work with sandboxed execution.
*
* @param cwd - The working directory to check
* @param sandboxRequested - Whether sandbox mode was requested by settings
* @returns Object indicating if sandbox can be enabled and why not if disabled
*/
export function checkSandboxCompatibility(
cwd: string,
sandboxRequested: boolean
): SandboxCompatibilityResult {
if (!sandboxRequested) {
return { enabled: false };
}
const resolvedCwd = path.resolve(cwd);
// Check for cloud storage paths that may not be compatible with sandbox
const cloudStoragePatterns = [
// macOS mounted volumes
/^\/Volumes\/GoogleDrive/i,
/^\/Volumes\/Dropbox/i,
/^\/Volumes\/OneDrive/i,
/^\/Volumes\/iCloud/i,
// macOS home directory
/^\/Users\/[^/]+\/Google Drive/i,
/^\/Users\/[^/]+\/Dropbox/i,
/^\/Users\/[^/]+\/OneDrive/i,
/^\/Users\/[^/]+\/Library\/Mobile Documents/i, // iCloud
// Linux home directory
/^\/home\/[^/]+\/Google Drive/i,
/^\/home\/[^/]+\/Dropbox/i,
/^\/home\/[^/]+\/OneDrive/i,
// Windows
/^C:\\Users\\[^\\]+\\Google Drive/i,
/^C:\\Users\\[^\\]+\\Dropbox/i,
/^C:\\Users\\[^\\]+\\OneDrive/i,
];
for (const pattern of cloudStoragePatterns) {
if (pattern.test(resolvedCwd)) {
return {
enabled: false,
message: `Sandbox disabled: Cloud storage path detected (${resolvedCwd}). Sandbox mode may not work correctly with cloud-synced directories.`,
};
}
}
return { enabled: true };
}
/**
* Validate that a working directory is allowed by ALLOWED_ROOT_DIRECTORY.
* This is the centralized security check for ALL AI model invocations.
@@ -58,10 +129,30 @@ export const TOOL_PRESETS = {
specGeneration: ['Read', 'Glob', 'Grep'] as const,
/** Full tool access for feature implementation */
fullAccess: ['Read', 'Write', 'Edit', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'] as const,
fullAccess: [
'Read',
'Write',
'Edit',
'Glob',
'Grep',
'Bash',
'WebSearch',
'WebFetch',
'TodoWrite',
] as const,
/** Tools for chat/interactive mode */
chat: ['Read', 'Write', 'Edit', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'] as const,
chat: [
'Read',
'Write',
'Edit',
'Glob',
'Grep',
'Bash',
'WebSearch',
'WebFetch',
'TodoWrite',
] as const,
} as const;
/**
@@ -129,60 +220,51 @@ export function getModelForUseCase(
/**
* Base options that apply to all SDK calls
* AUTONOMOUS MODE: Always bypass permissions for fully autonomous operation
*/
function getBaseOptions(): Partial<Options> {
return {
permissionMode: 'acceptEdits',
permissionMode: 'bypassPermissions',
allowDangerouslySkipPermissions: true,
};
}
/**
* MCP permission options result
* MCP options result
*/
interface McpPermissionOptions {
interface McpOptions {
/** Whether tools should be restricted to a preset */
shouldRestrictTools: boolean;
/** Options to spread when MCP bypass is enabled */
bypassOptions: Partial<Options>;
/** Options to spread for MCP servers */
mcpServerOptions: Partial<Options>;
}
/**
* Build MCP-related options based on configuration.
* Centralizes the logic for determining permission modes and tool restrictions
* when MCP servers are configured.
*
* @param config - The SDK options config
* @returns Object with MCP permission settings to spread into final options
* @returns Object with MCP server settings to spread into final options
*/
function buildMcpOptions(config: CreateSdkOptionsConfig): McpPermissionOptions {
function buildMcpOptions(config: CreateSdkOptionsConfig): McpOptions {
const hasMcpServers = config.mcpServers && Object.keys(config.mcpServers).length > 0;
// Default to true for autonomous workflow. Security is enforced when adding servers
// via the security warning dialog that explains the risks.
const mcpAutoApprove = config.mcpAutoApproveTools ?? true;
const mcpUnrestricted = config.mcpUnrestrictedTools ?? true;
// Determine if we should bypass permissions based on settings
const shouldBypassPermissions = hasMcpServers && mcpAutoApprove;
// Determine if we should restrict tools (only when no MCP or unrestricted is disabled)
const shouldRestrictTools = !hasMcpServers || !mcpUnrestricted;
return {
shouldRestrictTools,
// Only include bypass options when MCP is configured and auto-approve is enabled
bypassOptions: shouldBypassPermissions
? {
permissionMode: 'bypassPermissions' as const,
// Required flag when using bypassPermissions mode
allowDangerouslySkipPermissions: true,
}
: {},
// Include MCP servers if configured
mcpServerOptions: config.mcpServers ? { mcpServers: config.mcpServers } : {},
};
}
/**
* Build thinking options for SDK configuration.
* Converts ThinkingLevel to maxThinkingTokens for the Claude SDK.
*
* @param thinkingLevel - The thinking level to convert
* @returns Object with maxThinkingTokens if thinking is enabled
*/
function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
logger.debug(
`buildThinkingOptions: thinkingLevel="${thinkingLevel}" -> maxThinkingTokens=${maxThinkingTokens}`
);
return maxThinkingTokens ? { maxThinkingTokens } : {};
}
/**
* Build system prompt configuration based on autoLoadClaudeMd setting.
* When autoLoadClaudeMd is true:
@@ -264,17 +346,11 @@ export interface CreateSdkOptionsConfig {
/** Enable auto-loading of CLAUDE.md files via SDK's settingSources */
autoLoadClaudeMd?: boolean;
/** Enable sandbox mode for bash command isolation */
enableSandboxMode?: boolean;
/** MCP servers to make available to the agent */
mcpServers?: Record<string, McpServerConfig>;
/** Auto-approve MCP tool calls without permission prompts */
mcpAutoApproveTools?: boolean;
/** Allow unrestricted tools when MCP servers are enabled */
mcpUnrestrictedTools?: boolean;
/** Extended thinking level for Claude models */
thinkingLevel?: ThinkingLevel;
}
// Re-export MCP types from @automaker/types for convenience
@@ -301,6 +377,9 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
// Override permissionMode - spec generation only needs read-only tools
@@ -312,6 +391,7 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.specGeneration],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...(config.outputFormat && { outputFormat: config.outputFormat }),
};
@@ -333,6 +413,9 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
// Override permissionMode - feature generation only needs read-only tools
@@ -342,6 +425,7 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.readOnly],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
};
}
@@ -362,6 +446,9 @@ export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Option
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
model: getModelForUseCase('suggestions', config.model),
@@ -369,6 +456,7 @@ export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Option
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.readOnly],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...(config.outputFormat && { outputFormat: config.outputFormat }),
};
@@ -381,7 +469,6 @@ export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Option
* - Full tool access for code modification
* - Standard turns for interactive sessions
* - Model priority: explicit model > session model > chat default
* - Sandbox mode controlled by enableSandboxMode setting
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createChatOptions(config: CreateSdkOptionsConfig): Options {
@@ -397,22 +484,17 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
// Build MCP-related options
const mcpOptions = buildMcpOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
model: getModelForUseCase('chat', effectiveModel),
maxTurns: MAX_TURNS.standard,
cwd: config.cwd,
// Only restrict tools if no MCP servers configured or unrestricted is disabled
...(mcpOptions.shouldRestrictTools && { allowedTools: [...TOOL_PRESETS.chat] }),
// Apply MCP bypass options if configured
...mcpOptions.bypassOptions,
...(config.enableSandboxMode && {
sandbox: {
enabled: true,
autoAllowBashIfSandboxed: true,
},
}),
allowedTools: [...TOOL_PRESETS.chat],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...mcpOptions.mcpServerOptions,
};
@@ -425,7 +507,6 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
* - Full tool access for code modification and implementation
* - Extended turns for thorough feature implementation
* - Uses default model (can be overridden)
* - Sandbox mode controlled by enableSandboxMode setting
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
@@ -438,22 +519,17 @@ export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
// Build MCP-related options
const mcpOptions = buildMcpOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
model: getModelForUseCase('auto', config.model),
maxTurns: MAX_TURNS.maximum,
cwd: config.cwd,
// Only restrict tools if no MCP servers configured or unrestricted is disabled
...(mcpOptions.shouldRestrictTools && { allowedTools: [...TOOL_PRESETS.fullAccess] }),
// Apply MCP bypass options if configured
...mcpOptions.bypassOptions,
...(config.enableSandboxMode && {
sandbox: {
enabled: true,
autoAllowBashIfSandboxed: true,
},
}),
allowedTools: [...TOOL_PRESETS.fullAccess],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...mcpOptions.mcpServerOptions,
};
@@ -469,7 +545,6 @@ export function createCustomOptions(
config: CreateSdkOptionsConfig & {
maxTurns?: number;
allowedTools?: readonly string[];
sandbox?: { enabled: boolean; autoAllowBashIfSandboxed?: boolean };
}
): Options {
// Validate working directory before creating options
@@ -481,23 +556,22 @@ export function createCustomOptions(
// Build MCP-related options
const mcpOptions = buildMcpOptions(config);
// For custom options: use explicit allowedTools if provided, otherwise use preset based on MCP settings
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
// For custom options: use explicit allowedTools if provided, otherwise default to readOnly
const effectiveAllowedTools = config.allowedTools
? [...config.allowedTools]
: mcpOptions.shouldRestrictTools
? [...TOOL_PRESETS.readOnly]
: undefined;
: [...TOOL_PRESETS.readOnly];
return {
...getBaseOptions(),
model: getModelForUseCase('default', config.model),
maxTurns: config.maxTurns ?? MAX_TURNS.maximum,
cwd: config.cwd,
...(effectiveAllowedTools && { allowedTools: effectiveAllowedTools }),
...(config.sandbox && { sandbox: config.sandbox }),
// Apply MCP bypass options if configured
...mcpOptions.bypassOptions,
allowedTools: effectiveAllowedTools,
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...mcpOptions.mcpServerOptions,
};
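A wiring sketch for the option builders above (illustrative; the import path, the cwd, and the 'medium' thinking level value are assumptions, and the cwd must sit inside ALLOWED_ROOT_DIRECTORY for validation to pass):
import type { ThinkingLevel } from '@automaker/types';
import { checkSandboxCompatibility, createChatOptions } from './sdk-options.js';
const cwd = '/home/dev/my-project'; // hypothetical project path
// Cloud-synced folders are flagged before any sandboxed execution is attempted.
const sandbox = checkSandboxCompatibility(cwd, true);
if (!sandbox.enabled && sandbox.message) {
  console.warn(sandbox.message);
}
// thinkingLevel is converted to maxThinkingTokens by buildThinkingOptions; 'medium' is an assumed level value.
const options = createChatOptions({ cwd, thinkingLevel: 'medium' as ThinkingLevel });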

View File

@@ -11,6 +11,14 @@ import {
mergeAgentPrompts,
mergeBacklogPlanPrompts,
mergeEnhancementPrompts,
mergeCommitMessagePrompts,
mergeTitleGenerationPrompts,
mergeIssueValidationPrompts,
mergeIdeationPrompts,
mergeAppSpecPrompts,
mergeContextDescriptionPrompts,
mergeSuggestionsPrompts,
mergeTaskExecutionPrompts,
} from '@automaker/prompts';
const logger = createLogger('SettingsHelper');
@@ -55,34 +63,6 @@ export async function getAutoLoadClaudeMdSetting(
}
}
/**
* Get the enableSandboxMode setting from global settings.
* Returns false if settings service is not available.
*
* @param settingsService - Optional settings service instance
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
* @returns Promise resolving to the enableSandboxMode setting value
*/
export async function getEnableSandboxModeSetting(
settingsService?: SettingsService | null,
logPrefix = '[SettingsHelper]'
): Promise<boolean> {
if (!settingsService) {
logger.info(`${logPrefix} SettingsService not available, sandbox mode disabled`);
return false;
}
try {
const globalSettings = await settingsService.getGlobalSettings();
const result = globalSettings.enableSandboxMode ?? false;
logger.info(`${logPrefix} enableSandboxMode from global settings: ${result}`);
return result;
} catch (error) {
logger.error(`${logPrefix} Failed to load enableSandboxMode setting:`, error);
throw error;
}
}
/**
* Filters out CLAUDE.md from context files when autoLoadClaudeMd is enabled
* and rebuilds the formatted prompt without it.
@@ -191,41 +171,6 @@ export async function getMCPServersFromSettings(
}
}
/**
* Get MCP permission settings from global settings.
*
* @param settingsService - Optional settings service instance
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
* @returns Promise resolving to MCP permission settings
*/
export async function getMCPPermissionSettings(
settingsService?: SettingsService | null,
logPrefix = '[SettingsHelper]'
): Promise<{ mcpAutoApproveTools: boolean; mcpUnrestrictedTools: boolean }> {
// Default to true for autonomous workflow. Security is enforced when adding servers
// via the security warning dialog that explains the risks.
const defaults = { mcpAutoApproveTools: true, mcpUnrestrictedTools: true };
if (!settingsService) {
return defaults;
}
try {
const globalSettings = await settingsService.getGlobalSettings();
const result = {
mcpAutoApproveTools: globalSettings.mcpAutoApproveTools ?? true,
mcpUnrestrictedTools: globalSettings.mcpUnrestrictedTools ?? true,
};
logger.info(
`${logPrefix} MCP permission settings: autoApprove=${result.mcpAutoApproveTools}, unrestricted=${result.mcpUnrestrictedTools}`
);
return result;
} catch (error) {
logger.error(`${logPrefix} Failed to load MCP permission settings:`, error);
return defaults;
}
}
/**
* Convert a settings MCPServerConfig to SDK McpServerConfig format.
* Validates required fields and throws informative errors if missing.
@@ -281,6 +226,14 @@ export async function getPromptCustomization(
agent: ReturnType<typeof mergeAgentPrompts>;
backlogPlan: ReturnType<typeof mergeBacklogPlanPrompts>;
enhancement: ReturnType<typeof mergeEnhancementPrompts>;
commitMessage: ReturnType<typeof mergeCommitMessagePrompts>;
titleGeneration: ReturnType<typeof mergeTitleGenerationPrompts>;
issueValidation: ReturnType<typeof mergeIssueValidationPrompts>;
ideation: ReturnType<typeof mergeIdeationPrompts>;
appSpec: ReturnType<typeof mergeAppSpecPrompts>;
contextDescription: ReturnType<typeof mergeContextDescriptionPrompts>;
suggestions: ReturnType<typeof mergeSuggestionsPrompts>;
taskExecution: ReturnType<typeof mergeTaskExecutionPrompts>;
}> {
let customization: PromptCustomization = {};
@@ -302,5 +255,93 @@ export async function getPromptCustomization(
agent: mergeAgentPrompts(customization.agent),
backlogPlan: mergeBacklogPlanPrompts(customization.backlogPlan),
enhancement: mergeEnhancementPrompts(customization.enhancement),
commitMessage: mergeCommitMessagePrompts(customization.commitMessage),
titleGeneration: mergeTitleGenerationPrompts(customization.titleGeneration),
issueValidation: mergeIssueValidationPrompts(customization.issueValidation),
ideation: mergeIdeationPrompts(customization.ideation),
appSpec: mergeAppSpecPrompts(customization.appSpec),
contextDescription: mergeContextDescriptionPrompts(customization.contextDescription),
suggestions: mergeSuggestionsPrompts(customization.suggestions),
taskExecution: mergeTaskExecutionPrompts(customization.taskExecution),
};
}
/**
* Get Skills configuration from settings.
* Returns configuration for enabling skills and which sources to load from.
*
* @param settingsService - Settings service instance
* @returns Skills configuration with enabled state, sources, and tool inclusion flag
*/
export async function getSkillsConfiguration(settingsService: SettingsService): Promise<{
enabled: boolean;
sources: Array<'user' | 'project'>;
shouldIncludeInTools: boolean;
}> {
const settings = await settingsService.getGlobalSettings();
const enabled = settings.enableSkills ?? true; // Default enabled
const sources = settings.skillsSources ?? ['user', 'project']; // Default both sources
return {
enabled,
sources,
shouldIncludeInTools: enabled && sources.length > 0,
};
}
/**
* Get Subagents configuration from settings.
* Returns configuration for enabling subagents and which sources to load from.
*
* @param settingsService - Settings service instance
* @returns Subagents configuration with enabled state, sources, and tool inclusion flag
*/
export async function getSubagentsConfiguration(settingsService: SettingsService): Promise<{
enabled: boolean;
sources: Array<'user' | 'project'>;
shouldIncludeInTools: boolean;
}> {
const settings = await settingsService.getGlobalSettings();
const enabled = settings.enableSubagents ?? true; // Default enabled
const sources = settings.subagentsSources ?? ['user', 'project']; // Default both sources
return {
enabled,
sources,
shouldIncludeInTools: enabled && sources.length > 0,
};
}
/**
* Get custom subagents from settings, merging global and project-level definitions.
* Project-level subagents take precedence over global ones with the same name.
*
* @param settingsService - Settings service instance
* @param projectPath - Path to the project for loading project-specific subagents
* @returns Record of agent names to definitions, or undefined if none configured
*/
export async function getCustomSubagents(
settingsService: SettingsService,
projectPath?: string
): Promise<Record<string, import('@automaker/types').AgentDefinition> | undefined> {
// Get global subagents
const globalSettings = await settingsService.getGlobalSettings();
const globalSubagents = globalSettings.customSubagents || {};
// If no project path, return only global subagents
if (!projectPath) {
return Object.keys(globalSubagents).length > 0 ? globalSubagents : undefined;
}
// Get project-specific subagents
const projectSettings = await settingsService.getProjectSettings(projectPath);
const projectSubagents = projectSettings.customSubagents || {};
// Merge: project-level takes precedence
const merged = {
...globalSubagents,
...projectSubagents,
};
return Object.keys(merged).length > 0 ? merged : undefined;
}
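A consumer-side sketch for the configuration helpers above (illustrative; the import paths and the SettingsService wiring are assumptions):
import type { SettingsService } from './settings-service.js'; // path assumed
import { getCustomSubagents, getSkillsConfiguration, getSubagentsConfiguration } from './settings-helper.js';
async function describeAgentCapabilities(settingsService: SettingsService, projectPath?: string) {
  const skills = await getSkillsConfiguration(settingsService);
  const subagents = await getSubagentsConfiguration(settingsService);
  const custom = await getCustomSubagents(settingsService, projectPath);
  return {
    // Both default to enabled with 'user' and 'project' sources (see the helpers above).
    includeSkills: skills.shouldIncludeInTools,
    includeSubagents: subagents.shouldIncludeInTools,
    customSubagentNames: custom ? Object.keys(custom) : [],
  };
}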

View File

@@ -5,6 +5,9 @@
import { readFileSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Version');
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
@@ -27,7 +30,7 @@ export function getVersion(): string {
cachedVersion = version;
return version;
} catch (error) {
console.warn('Failed to read version from package.json:', error);
logger.warn('Failed to read version from package.json:', error);
return '0.0.0';
}
}

View File

@@ -5,22 +5,24 @@
import * as secureFs from './secure-fs.js';
import * as path from 'path';
import type { PRState, WorktreePRInfo } from '@automaker/types';
// Re-export types for backwards compatibility
export type { PRState, WorktreePRInfo };
/** Maximum length for sanitized branch names in filesystem paths */
const MAX_SANITIZED_BRANCH_PATH_LENGTH = 200;
export interface WorktreePRInfo {
number: number;
url: string;
title: string;
state: string;
createdAt: string;
}
export interface WorktreeMetadata {
branch: string;
createdAt: string;
pr?: WorktreePRInfo;
/** Whether the init script has been executed for this worktree */
initScriptRan?: boolean;
/** Status of the init script execution */
initScriptStatus?: 'running' | 'success' | 'failed';
/** Error message if init script failed */
initScriptError?: string;
} }
/** /**

View File

@@ -0,0 +1,611 @@
/**
* XML Extraction Utilities
*
* Robust XML parsing utilities for extracting and updating sections
* from app_spec.txt XML content. Uses regex-based parsing which is
* sufficient for our controlled XML structure.
*
* Note: If more complex XML parsing is needed in the future, consider
* using a library like 'fast-xml-parser' or 'xml2js'.
*/
import { createLogger } from '@automaker/utils';
import type { SpecOutput } from '@automaker/types';
const logger = createLogger('XmlExtractor');
/**
* Represents an implemented feature extracted from XML
*/
export interface ImplementedFeature {
name: string;
description: string;
file_locations?: string[];
}
/**
* Logger interface for optional custom logging
*/
export interface XmlExtractorLogger {
debug: (message: string, ...args: unknown[]) => void;
warn?: (message: string, ...args: unknown[]) => void;
}
/**
* Options for XML extraction operations
*/
export interface ExtractXmlOptions {
/** Custom logger (defaults to internal logger) */
logger?: XmlExtractorLogger;
}
/**
* Escape special XML characters
* Handles undefined/null values by converting them to empty strings
*/
export function escapeXml(str: string | undefined | null): string {
if (str == null) {
return '';
}
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&apos;');
}
/**
* Unescape XML entities back to regular characters
*/
export function unescapeXml(str: string): string {
return str
.replace(/&apos;/g, "'")
.replace(/&quot;/g, '"')
.replace(/&gt;/g, '>')
.replace(/&lt;/g, '<')
.replace(/&amp;/g, '&');
}
/**
* Extract the content of a specific XML section
*
* @param xmlContent - The full XML content
* @param tagName - The tag name to extract (e.g., 'implemented_features')
* @param options - Optional extraction options
* @returns The content between the tags, or null if not found
*/
export function extractXmlSection(
xmlContent: string,
tagName: string,
options: ExtractXmlOptions = {}
): string | null {
const log = options.logger || logger;
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)<\\/${tagName}>`, 'i');
const match = xmlContent.match(regex);
if (match) {
log.debug(`Extracted <${tagName}> section`);
return match[1];
}
log.debug(`Section <${tagName}> not found`);
return null;
}
/**
* Extract all values from repeated XML elements
*
* @param xmlContent - The XML content to search
* @param tagName - The tag name to extract values from
* @param options - Optional extraction options
* @returns Array of extracted values (unescaped)
*/
export function extractXmlElements(
xmlContent: string,
tagName: string,
options: ExtractXmlOptions = {}
): string[] {
const log = options.logger || logger;
const values: string[] = [];
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)<\\/${tagName}>`, 'g');
const matches = xmlContent.matchAll(regex);
for (const match of matches) {
values.push(unescapeXml(match[1].trim()));
}
log.debug(`Extracted ${values.length} <${tagName}> elements`);
return values;
}
/**
* Extract implemented features from app_spec.txt XML content
*
* @param specContent - The full XML content of app_spec.txt
* @param options - Optional extraction options
* @returns Array of implemented features with name, description, and optional file_locations
*/
export function extractImplementedFeatures(
specContent: string,
options: ExtractXmlOptions = {}
): ImplementedFeature[] {
const log = options.logger || logger;
const features: ImplementedFeature[] = [];
// Match <implemented_features>...</implemented_features> section
const implementedSection = extractXmlSection(specContent, 'implemented_features', options);
if (!implementedSection) {
log.debug('No implemented_features section found');
return features;
}
// Extract individual feature blocks
const featureRegex = /<feature>([\s\S]*?)<\/feature>/g;
const featureMatches = implementedSection.matchAll(featureRegex);
for (const featureMatch of featureMatches) {
const featureContent = featureMatch[1];
// Extract name
const nameMatch = featureContent.match(/<name>([\s\S]*?)<\/name>/);
const name = nameMatch ? unescapeXml(nameMatch[1].trim()) : '';
// Extract description
const descMatch = featureContent.match(/<description>([\s\S]*?)<\/description>/);
const description = descMatch ? unescapeXml(descMatch[1].trim()) : '';
// Extract file_locations if present
const locationsSection = extractXmlSection(featureContent, 'file_locations', options);
const file_locations = locationsSection
? extractXmlElements(locationsSection, 'location', options)
: undefined;
if (name) {
features.push({
name,
description,
...(file_locations && file_locations.length > 0 ? { file_locations } : {}),
});
}
}
log.debug(`Extracted ${features.length} implemented features`);
return features;
}
/**
* Extract only the feature names from implemented_features section
*
* @param specContent - The full XML content of app_spec.txt
* @param options - Optional extraction options
* @returns Array of feature names
*/
export function extractImplementedFeatureNames(
specContent: string,
options: ExtractXmlOptions = {}
): string[] {
const features = extractImplementedFeatures(specContent, options);
return features.map((f) => f.name);
}
/**
* Generate XML for a single implemented feature
*
* @param feature - The feature to convert to XML
* @param indent - The base indentation level (default: 2 spaces)
* @returns XML string for the feature
*/
export function featureToXml(feature: ImplementedFeature, indent: string = ' '): string {
const i2 = indent.repeat(2);
const i3 = indent.repeat(3);
const i4 = indent.repeat(4);
let xml = `${i2}<feature>
${i3}<name>${escapeXml(feature.name)}</name>
${i3}<description>${escapeXml(feature.description)}</description>`;
if (feature.file_locations && feature.file_locations.length > 0) {
xml += `
${i3}<file_locations>
${feature.file_locations.map((loc) => `${i4}<location>${escapeXml(loc)}</location>`).join('\n')}
${i3}</file_locations>`;
}
xml += `
${i2}</feature>`;
return xml;
}
/**
* Generate XML for an array of implemented features
*
* @param features - Array of features to convert to XML
* @param indent - The base indentation level (default: 2 spaces)
* @returns XML string for the implemented_features section content
*/
export function featuresToXml(features: ImplementedFeature[], indent: string = ' '): string {
return features.map((f) => featureToXml(f, indent)).join('\n');
}
/**
* Update the implemented_features section in XML content
*
* @param specContent - The full XML content
* @param newFeatures - The new features to set
* @param options - Optional extraction options
* @returns Updated XML content with the new implemented_features section
*/
export function updateImplementedFeaturesSection(
specContent: string,
newFeatures: ImplementedFeature[],
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
const indent = ' ';
// Generate new section content
const newSectionContent = featuresToXml(newFeatures, indent);
// Build the new section
const newSection = `<implemented_features>
${newSectionContent}
${indent}</implemented_features>`;
// Check if section exists
const sectionRegex = /<implemented_features>[\s\S]*?<\/implemented_features>/;
if (sectionRegex.test(specContent)) {
log.debug('Replacing existing implemented_features section');
return specContent.replace(sectionRegex, newSection);
}
// If section doesn't exist, try to insert after core_capabilities
const coreCapabilitiesEnd = '</core_capabilities>';
const insertIndex = specContent.indexOf(coreCapabilitiesEnd);
if (insertIndex !== -1) {
const insertPosition = insertIndex + coreCapabilitiesEnd.length;
log.debug('Inserting implemented_features after core_capabilities');
return (
specContent.slice(0, insertPosition) +
'\n\n' +
indent +
newSection +
specContent.slice(insertPosition)
);
}
// As a fallback, insert before </project_specification>
const projectSpecEnd = '</project_specification>';
const fallbackIndex = specContent.indexOf(projectSpecEnd);
if (fallbackIndex !== -1) {
log.debug('Inserting implemented_features before </project_specification>');
return (
specContent.slice(0, fallbackIndex) +
indent +
newSection +
'\n' +
specContent.slice(fallbackIndex)
);
}
log.warn?.('Could not find appropriate insertion point for implemented_features');
log.debug('Could not find appropriate insertion point for implemented_features');
return specContent;
}
/**
* Add a new feature to the implemented_features section
*
* @param specContent - The full XML content
* @param newFeature - The feature to add
* @param options - Optional extraction options
* @returns Updated XML content with the new feature added
*/
export function addImplementedFeature(
specContent: string,
newFeature: ImplementedFeature,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Check for duplicates by name
const isDuplicate = existingFeatures.some(
(f) => f.name.toLowerCase() === newFeature.name.toLowerCase()
);
if (isDuplicate) {
log.debug(`Feature "${newFeature.name}" already exists, skipping`);
return specContent;
}
// Add the new feature
const updatedFeatures = [...existingFeatures, newFeature];
log.debug(`Adding feature "${newFeature.name}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Remove a feature from the implemented_features section by name
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to remove
* @param options - Optional extraction options
* @returns Updated XML content with the feature removed
*/
export function removeImplementedFeature(
specContent: string,
featureName: string,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Filter out the feature to remove
const updatedFeatures = existingFeatures.filter(
(f) => f.name.toLowerCase() !== featureName.toLowerCase()
);
if (updatedFeatures.length === existingFeatures.length) {
log.debug(`Feature "${featureName}" not found, no changes made`);
return specContent;
}
log.debug(`Removing feature "${featureName}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Update an existing feature in the implemented_features section
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to update
* @param updates - Partial updates to apply to the feature
* @param options - Optional extraction options
* @returns Updated XML content with the feature modified
*/
export function updateImplementedFeature(
specContent: string,
featureName: string,
updates: Partial<ImplementedFeature>,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Find and update the feature
let found = false;
const updatedFeatures = existingFeatures.map((f) => {
if (f.name.toLowerCase() === featureName.toLowerCase()) {
found = true;
return {
...f,
...updates,
// Preserve the original name if not explicitly updated
name: updates.name ?? f.name,
};
}
return f;
});
if (!found) {
log.debug(`Feature "${featureName}" not found, no changes made`);
return specContent;
}
log.debug(`Updating feature "${featureName}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Check if a feature exists in the implemented_features section
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to check
* @param options - Optional extraction options
* @returns True if the feature exists
*/
export function hasImplementedFeature(
specContent: string,
featureName: string,
options: ExtractXmlOptions = {}
): boolean {
const features = extractImplementedFeatures(specContent, options);
return features.some((f) => f.name.toLowerCase() === featureName.toLowerCase());
}
/**
* Convert extracted features to SpecOutput.implemented_features format
*
* @param features - Array of extracted features
* @returns Features in SpecOutput format
*/
export function toSpecOutputFeatures(
features: ImplementedFeature[]
): SpecOutput['implemented_features'] {
return features.map((f) => ({
name: f.name,
description: f.description,
...(f.file_locations && f.file_locations.length > 0
? { file_locations: f.file_locations }
: {}),
}));
}
/**
* Convert SpecOutput.implemented_features to ImplementedFeature format
*
* @param specFeatures - Features from SpecOutput
* @returns Features in ImplementedFeature format
*/
export function fromSpecOutputFeatures(
specFeatures: SpecOutput['implemented_features']
): ImplementedFeature[] {
return specFeatures.map((f) => ({
name: f.name,
description: f.description,
...(f.file_locations && f.file_locations.length > 0
? { file_locations: f.file_locations }
: {}),
}));
}
/**
* Represents a roadmap phase extracted from XML
*/
export interface RoadmapPhase {
name: string;
status: string;
description?: string;
}
/**
* Extract the technology stack from app_spec.txt XML content
*
* @param specContent - The full XML content
* @param options - Optional extraction options
* @returns Array of technology names
*/
export function extractTechnologyStack(
specContent: string,
options: ExtractXmlOptions = {}
): string[] {
const log = options.logger || logger;
const techSection = extractXmlSection(specContent, 'technology_stack', options);
if (!techSection) {
log.debug('No technology_stack section found');
return [];
}
const technologies = extractXmlElements(techSection, 'technology', options);
log.debug(`Extracted ${technologies.length} technologies`);
return technologies;
}
/**
* Update the technology_stack section in XML content
*
* @param specContent - The full XML content
* @param technologies - The new technology list
* @param options - Optional extraction options
* @returns Updated XML content
*/
export function updateTechnologyStack(
specContent: string,
technologies: string[],
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
const indent = ' ';
const i2 = indent.repeat(2);
// Generate new section content
const techXml = technologies
.map((t) => `${i2}<technology>${escapeXml(t)}</technology>`)
.join('\n');
const newSection = `<technology_stack>\n${techXml}\n${indent}</technology_stack>`;
// Check if section exists
const sectionRegex = /<technology_stack>[\s\S]*?<\/technology_stack>/;
if (sectionRegex.test(specContent)) {
log.debug('Replacing existing technology_stack section');
return specContent.replace(sectionRegex, newSection);
}
log.debug('No technology_stack section found to update');
return specContent;
}
/**
* Extract roadmap phases from app_spec.txt XML content
*
* @param specContent - The full XML content
* @param options - Optional extraction options
* @returns Array of roadmap phases
*/
export function extractRoadmapPhases(
specContent: string,
options: ExtractXmlOptions = {}
): RoadmapPhase[] {
const log = options.logger || logger;
const phases: RoadmapPhase[] = [];
const roadmapSection = extractXmlSection(specContent, 'implementation_roadmap', options);
if (!roadmapSection) {
log.debug('No implementation_roadmap section found');
return phases;
}
// Extract individual phase blocks
const phaseRegex = /<phase>([\s\S]*?)<\/phase>/g;
const phaseMatches = roadmapSection.matchAll(phaseRegex);
for (const phaseMatch of phaseMatches) {
const phaseContent = phaseMatch[1];
const nameMatch = phaseContent.match(/<name>([\s\S]*?)<\/name>/);
const name = nameMatch ? unescapeXml(nameMatch[1].trim()) : '';
const statusMatch = phaseContent.match(/<status>([\s\S]*?)<\/status>/);
const status = statusMatch ? unescapeXml(statusMatch[1].trim()) : 'pending';
const descMatch = phaseContent.match(/<description>([\s\S]*?)<\/description>/);
const description = descMatch ? unescapeXml(descMatch[1].trim()) : undefined;
if (name) {
phases.push({ name, status, description });
}
}
log.debug(`Extracted ${phases.length} roadmap phases`);
return phases;
}
/**
* Update a roadmap phase status in XML content
*
* @param specContent - The full XML content
* @param phaseName - The name of the phase to update
* @param newStatus - The new status value
* @param options - Optional extraction options
* @returns Updated XML content
*/
export function updateRoadmapPhaseStatus(
specContent: string,
phaseName: string,
newStatus: string,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Find the phase and update its status
// Match the phase block containing the specific name
const phaseRegex = new RegExp(
`(<phase>\\s*<name>\\s*${escapeXml(phaseName)}\\s*<\\/name>\\s*<status>)[\\s\\S]*?(<\\/status>)`,
'i'
);
if (phaseRegex.test(specContent)) {
log.debug(`Updating phase "${phaseName}" status to "${newStatus}"`);
return specContent.replace(phaseRegex, `$1${escapeXml(newStatus)}$2`);
}
log.debug(`Phase "${phaseName}" not found`);
return specContent;
}
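A minimal usage sketch for the helpers above, assuming they are exported from a shared xml-spec module (the import path and the feature values below are illustrative, not from the commit):

```typescript
import { readFile, writeFile } from 'fs/promises';
import {
  hasImplementedFeature,
  addImplementedFeature,
  updateRoadmapPhaseStatus,
} from './xml-spec-utils.js'; // hypothetical module path

async function recordFeature(specPath: string): Promise<void> {
  let spec = await readFile(specPath, 'utf-8');
  // Adding is idempotent: duplicates are skipped by a case-insensitive name check.
  if (!hasImplementedFeature(spec, 'Dark mode')) {
    spec = addImplementedFeature(spec, {
      name: 'Dark mode',
      description: 'Theme toggle persisted per user',
      file_locations: ['src/theme/toggle.ts'],
    });
  }
  // Mark the matching roadmap phase as completed.
  spec = updateRoadmapPhaseStatus(spec, 'UI polish', 'completed');
  await writeFile(specPath, spec, 'utf-8');
}
```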

View File

@@ -8,12 +8,28 @@ import type { Request, Response, NextFunction } from 'express';
 import { validatePath, PathNotAllowedError } from '@automaker/platform';
 
 /**
- * Creates a middleware that validates specified path parameters in req.body
+ * Helper to get parameter value from request (checks body first, then query)
+ */
+function getParamValue(req: Request, paramName: string): unknown {
+  // Check body first (for POST/PUT/PATCH requests)
+  if (req.body && req.body[paramName] !== undefined) {
+    return req.body[paramName];
+  }
+  // Fall back to query params (for GET requests)
+  if (req.query && req.query[paramName] !== undefined) {
+    return req.query[paramName];
+  }
+  return undefined;
+}
+
+/**
+ * Creates a middleware that validates specified path parameters in req.body or req.query
  * @param paramNames - Names of parameters to validate (e.g., 'projectPath', 'worktreePath')
  * @example
  * router.post('/create', validatePathParams('projectPath'), handler);
  * router.post('/delete', validatePathParams('projectPath', 'worktreePath'), handler);
  * router.post('/send', validatePathParams('workingDirectory?', 'imagePaths[]'), handler);
+ * router.get('/logs', validatePathParams('worktreePath'), handler); // Works with query params too
  *
  * Special syntax:
  * - 'paramName?' - Optional parameter (only validated if present)
@@ -26,8 +42,8 @@ export function validatePathParams(...paramNames: string[]) {
       // Handle optional parameters (paramName?)
       if (paramName.endsWith('?')) {
         const actualName = paramName.slice(0, -1);
-        const value = req.body[actualName];
-        if (value) {
+        const value = getParamValue(req, actualName);
+        if (value && typeof value === 'string') {
           validatePath(value);
         }
         continue;
@@ -36,18 +52,20 @@ export function validatePathParams(...paramNames: string[]) {
       // Handle array parameters (paramName[])
       if (paramName.endsWith('[]')) {
         const actualName = paramName.slice(0, -2);
-        const values = req.body[actualName];
+        const values = getParamValue(req, actualName);
         if (Array.isArray(values) && values.length > 0) {
           for (const value of values) {
-            validatePath(value);
+            if (typeof value === 'string') {
+              validatePath(value);
+            }
           }
         }
         continue;
       }
 
       // Handle regular parameters
-      const value = req.body[paramName];
-      if (value) {
+      const value = getParamValue(req, paramName);
+      if (value && typeof value === 'string') {
         validatePath(value);
       }
     }
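The @example lines above already hint at the change; a short wiring sketch (Express router assumed, file path illustrative) shows the new query-string fallback in practice:

```typescript
import { Router } from 'express';
import { validatePathParams } from './validate-path.js'; // hypothetical module path

const router = Router();

// Body-based route: projectPath is still read from req.body.
router.post('/create', validatePathParams('projectPath'), (_req, res) => res.sendStatus(201));

// Query-based route: worktreePath now falls back to req.query, so
// GET /logs?worktreePath=/srv/repo is validated just like a POST body value.
router.get('/logs', validatePathParams('worktreePath'), (_req, res) => res.json({ ok: true }));
```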

View File

@@ -7,7 +7,10 @@
 import { query, type Options } from '@anthropic-ai/claude-agent-sdk';
 import { BaseProvider } from './base-provider.js';
-import { classifyError, getUserFriendlyErrorMessage } from '@automaker/utils';
+import { classifyError, getUserFriendlyErrorMessage, createLogger } from '@automaker/utils';
+const logger = createLogger('ClaudeProvider');
+import { getThinkingTokenBudget, validateBareModelId } from '@automaker/types';
 import type {
   ExecuteOptions,
   ProviderMessage,
@@ -19,6 +22,8 @@ import type {
 // Only these vars are passed - nothing else from process.env leaks through.
 const ALLOWED_ENV_VARS = [
   'ANTHROPIC_API_KEY',
+  'ANTHROPIC_BASE_URL',
+  'ANTHROPIC_AUTH_TOKEN',
   'PATH',
   'HOME',
   'SHELL',
@@ -50,6 +55,10 @@ export class ClaudeProvider extends BaseProvider {
    * Execute a query using Claude Agent SDK
    */
   async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
+    // Validate that model doesn't have a provider prefix
+    // AgentService should strip prefixes before passing to providers
+    validateBareModelId(options.model, 'ClaudeProvider');
     const {
       prompt,
       model,
@@ -60,24 +69,13 @@
       abortController,
       conversationHistory,
       sdkSessionId,
+      thinkingLevel,
     } = options;
+    // Convert thinking level to token budget
+    const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
     // Build Claude SDK options
-    // MCP permission logic - determines how to handle tool permissions when MCP servers are configured.
-    // This logic mirrors buildMcpOptions() in sdk-options.ts but is applied here since
-    // the provider is the final point where SDK options are constructed.
-    const hasMcpServers = options.mcpServers && Object.keys(options.mcpServers).length > 0;
-    // Default to true for autonomous workflow. Security is enforced when adding servers
-    // via the security warning dialog that explains the risks.
-    const mcpAutoApprove = options.mcpAutoApproveTools ?? true;
-    const mcpUnrestricted = options.mcpUnrestrictedTools ?? true;
-    const defaultTools = ['Read', 'Write', 'Edit', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'];
-    // Determine permission mode based on settings
-    const shouldBypassPermissions = hasMcpServers && mcpAutoApprove;
-    // Determine if we should restrict tools (only when no MCP or unrestricted is disabled)
-    const shouldRestrictTools = !hasMcpServers || !mcpUnrestricted;
     const sdkOptions: Options = {
       model,
       systemPrompt,
@@ -85,13 +83,11 @@
       cwd,
       // Pass only explicitly allowed environment variables to SDK
       env: buildEnv(),
-      // Only restrict tools if explicitly set OR (no MCP / unrestricted disabled)
-      ...(allowedTools && shouldRestrictTools && { allowedTools }),
-      ...(!allowedTools && shouldRestrictTools && { allowedTools: defaultTools }),
-      // When MCP servers are configured and auto-approve is enabled, use bypassPermissions
-      permissionMode: shouldBypassPermissions ? 'bypassPermissions' : 'default',
-      // Required when using bypassPermissions mode
-      ...(shouldBypassPermissions && { allowDangerouslySkipPermissions: true }),
+      // Pass through allowedTools if provided by caller (decided by sdk-options.ts)
+      ...(allowedTools && { allowedTools }),
+      // AUTONOMOUS MODE: Always bypass permissions for fully autonomous operation
+      permissionMode: 'bypassPermissions',
+      allowDangerouslySkipPermissions: true,
       abortController,
       // Resume existing SDK session if we have a session ID
       ...(sdkSessionId && conversationHistory && conversationHistory.length > 0
@@ -99,10 +95,14 @@
         : {}),
       // Forward settingSources for CLAUDE.md file loading
       ...(options.settingSources && { settingSources: options.settingSources }),
+      // Forward sandbox configuration
+      ...(options.sandbox && { sandbox: options.sandbox }),
       // Forward MCP servers configuration
       ...(options.mcpServers && { mcpServers: options.mcpServers }),
+      // Extended thinking configuration
+      ...(maxThinkingTokens && { maxThinkingTokens }),
+      // Subagents configuration for specialized task delegation
+      ...(options.agents && { agents: options.agents }),
+      // Pass through outputFormat for structured JSON outputs
+      ...(options.outputFormat && { outputFormat: options.outputFormat }),
     };
     // Build prompt payload
@@ -140,7 +140,7 @@
       const errorInfo = classifyError(error);
       const userMessage = getUserFriendlyErrorMessage(error);
-      console.error('[ClaudeProvider] executeQuery() error during execution:', {
+      logger.error('executeQuery() error during execution:', {
         type: errorInfo.type,
         message: errorInfo.message,
         isRateLimit: errorInfo.isRateLimit,
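One detail worth calling out in the new sdkOptions block: the conditional spreads omit a key entirely when the value is falsy, so optional SDK settings stay out of the payload unless they were resolved. A standalone sketch of that pattern (the thinking-budget numbers and model id are placeholders; getThinkingTokenBudget's real mapping is not shown in this diff):

```typescript
// Stand-in for getThinkingTokenBudget from @automaker/types (actual values not shown here).
const thinkingBudgetStub = (level?: 'low' | 'high'): number | undefined =>
  level === 'high' ? 32_000 : level === 'low' ? 4_000 : undefined;

const maxThinkingTokens = thinkingBudgetStub(undefined);
const sdkOptions = {
  model: 'claude-example-model', // placeholder model id
  permissionMode: 'bypassPermissions' as const,
  // Key is only present when a budget was resolved.
  ...(maxThinkingTokens && { maxThinkingTokens }),
};
console.log(sdkOptions); // -> { model: 'claude-example-model', permissionMode: 'bypassPermissions' }
```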

View File

@@ -0,0 +1,625 @@
/**
* CliProvider - Abstract base class for CLI-based AI providers
*
* Provides common infrastructure for CLI tools that spawn subprocesses
* and stream JSONL output. Handles:
* - Platform-specific CLI detection (PATH, common locations)
* - Windows execution strategies (WSL, npx, direct, cmd)
* - JSONL subprocess spawning and streaming
* - Error mapping infrastructure
*
* @example
* ```typescript
* class CursorProvider extends CliProvider {
* getCliName(): string { return 'cursor-agent'; }
* getSpawnConfig(): CliSpawnConfig {
* return {
* windowsStrategy: 'wsl',
* commonPaths: {
* linux: ['~/.local/bin/cursor-agent'],
* darwin: ['~/.local/bin/cursor-agent'],
* }
* };
* }
* // ... implement abstract methods
* }
* ```
*/
import {
createWslCommand,
findCliInWsl,
isWslAvailable,
spawnJSONLProcess,
windowsToWslPath,
type SubprocessOptions,
type WslCliResult,
} from '@automaker/platform';
import { calculateReasoningTimeout } from '@automaker/types';
import { createLogger, isAbortError } from '@automaker/utils';
import { execSync } from 'child_process';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import { BaseProvider } from './base-provider.js';
import type { ExecuteOptions, ProviderConfig, ProviderMessage } from './types.js';
/**
* Spawn strategy for CLI tools on Windows
*
* Different CLI tools require different execution strategies:
* - 'wsl': Requires WSL, CLI only available on Linux/macOS (e.g., cursor-agent)
* - 'npx': Installed globally via npm/npx, use `npx <package>` to run
* - 'direct': Native Windows binary, can spawn directly
* - 'cmd': Windows batch file (.cmd/.bat), needs cmd.exe shell
*/
export type SpawnStrategy = 'wsl' | 'npx' | 'direct' | 'cmd';
/**
* Configuration for CLI tool spawning
*/
export interface CliSpawnConfig {
/** How to spawn on Windows */
windowsStrategy: SpawnStrategy;
/** NPX package name (required if windowsStrategy is 'npx') */
npxPackage?: string;
/** Preferred WSL distribution (if windowsStrategy is 'wsl') */
wslDistribution?: string;
/**
* Common installation paths per platform
* Use ~ for home directory (will be expanded)
* Keys: 'linux', 'darwin', 'win32'
*/
commonPaths: Record<string, string[]>;
/** Version check command (defaults to --version) */
versionCommand?: string;
}
/**
* CLI error information for consistent error handling
*/
export interface CliErrorInfo {
code: string;
message: string;
recoverable: boolean;
suggestion?: string;
}
/**
* Detection result from CLI path finding
*/
export interface CliDetectionResult {
/** Path to the CLI (or 'npx' for npx strategy) */
cliPath: string | null;
/** Whether using WSL mode */
useWsl: boolean;
/** WSL path if using WSL */
wslCliPath?: string;
/** WSL distribution if using WSL */
wslDistribution?: string;
/** Detected strategy used */
strategy: SpawnStrategy | 'native';
}
// Create logger for CLI operations
const cliLogger = createLogger('CliProvider');
/**
* Base timeout for CLI operations in milliseconds.
* CLI tools have longer startup and processing times compared to direct API calls,
* so we use a higher base timeout (120s) than the default provider timeout (30s).
* This is multiplied by reasoning effort multipliers when applicable.
* @see calculateReasoningTimeout from @automaker/types
*/
const CLI_BASE_TIMEOUT_MS = 120000;
/**
* Abstract base class for CLI-based providers
*
* Subclasses must implement:
* - getCliName(): CLI executable name
* - getSpawnConfig(): Platform-specific spawn configuration
* - buildCliArgs(): Convert ExecuteOptions to CLI arguments
* - normalizeEvent(): Convert CLI output to ProviderMessage
*/
export abstract class CliProvider extends BaseProvider {
// CLI detection results (cached after first detection)
protected cliPath: string | null = null;
protected useWsl: boolean = false;
protected wslCliPath: string | null = null;
protected wslDistribution: string | undefined = undefined;
protected detectedStrategy: SpawnStrategy | 'native' = 'native';
// NPX args (used when strategy is 'npx')
protected npxArgs: string[] = [];
constructor(config: ProviderConfig = {}) {
super(config);
// Detection happens lazily on first use
}
// ==========================================================================
// Abstract methods - must be implemented by subclasses
// ==========================================================================
/**
* Get the CLI executable name (e.g., 'cursor-agent', 'aider')
*/
abstract getCliName(): string;
/**
* Get spawn configuration for this CLI
*/
abstract getSpawnConfig(): CliSpawnConfig;
/**
* Build CLI arguments from execution options
* @param options Execution options
* @returns Array of CLI arguments
*/
abstract buildCliArgs(options: ExecuteOptions): string[];
/**
* Normalize a raw CLI event to ProviderMessage format
* @param event Raw event from CLI JSONL output
* @returns Normalized ProviderMessage or null to skip
*/
abstract normalizeEvent(event: unknown): ProviderMessage | null;
// ==========================================================================
// Optional overrides
// ==========================================================================
/**
* Map CLI stderr/exit code to error info
* Override to provide CLI-specific error mapping
*/
protected mapError(stderr: string, exitCode: number | null): CliErrorInfo {
const lower = stderr.toLowerCase();
// Common authentication errors
if (
lower.includes('not authenticated') ||
lower.includes('please log in') ||
lower.includes('unauthorized')
) {
return {
code: 'NOT_AUTHENTICATED',
message: `${this.getCliName()} is not authenticated`,
recoverable: true,
suggestion: `Run "${this.getCliName()} login" to authenticate`,
};
}
// Rate limiting
if (
lower.includes('rate limit') ||
lower.includes('too many requests') ||
lower.includes('429')
) {
return {
code: 'RATE_LIMITED',
message: 'API rate limit exceeded',
recoverable: true,
suggestion: 'Wait a few minutes and try again',
};
}
// Network errors
if (
lower.includes('network') ||
lower.includes('connection') ||
lower.includes('econnrefused') ||
lower.includes('timeout')
) {
return {
code: 'NETWORK_ERROR',
message: 'Network connection error',
recoverable: true,
suggestion: 'Check your internet connection and try again',
};
}
// Process killed
if (exitCode === 137 || lower.includes('killed') || lower.includes('sigterm')) {
return {
code: 'PROCESS_CRASHED',
message: 'Process was terminated',
recoverable: true,
suggestion: 'The process may have run out of memory. Try a simpler task.',
};
}
// Generic error
return {
code: 'UNKNOWN_ERROR',
message: stderr || `Process exited with code ${exitCode}`,
recoverable: false,
};
}
/**
* Get installation instructions for this CLI
* Override to provide CLI-specific instructions
*/
protected getInstallInstructions(): string {
const cliName = this.getCliName();
const config = this.getSpawnConfig();
if (process.platform === 'win32') {
switch (config.windowsStrategy) {
case 'wsl':
return `${cliName} requires WSL on Windows. Install WSL, then install ${cliName} inside WSL.`;
case 'npx':
return `Install with: npm install -g ${config.npxPackage || cliName}`;
case 'cmd':
case 'direct':
return `${cliName} is not installed. Check the documentation for installation instructions.`;
}
}
return `${cliName} is not installed. Check the documentation for installation instructions.`;
}
// ==========================================================================
// CLI Detection
// ==========================================================================
/**
* Expand ~ to home directory in path
*/
private expandPath(p: string): string {
if (p.startsWith('~')) {
return path.join(os.homedir(), p.slice(1));
}
return p;
}
/**
* Find CLI in PATH using 'which' (Unix) or 'where' (Windows)
*/
private findCliInPath(): string | null {
const cliName = this.getCliName();
try {
const command = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${command} ${cliName}`, {
encoding: 'utf8',
timeout: 5000,
stdio: ['pipe', 'pipe', 'pipe'],
windowsHide: true,
})
.trim()
.split('\n')[0];
if (result && fs.existsSync(result)) {
cliLogger.debug(`Found ${cliName} in PATH: ${result}`);
return result;
}
} catch {
// Not in PATH
}
return null;
}
/**
* Find CLI in common installation paths for current platform
*/
private findCliInCommonPaths(): string | null {
const config = this.getSpawnConfig();
const cliName = this.getCliName();
const platform = process.platform as 'linux' | 'darwin' | 'win32';
const paths = config.commonPaths[platform] || [];
for (const p of paths) {
const expandedPath = this.expandPath(p);
if (fs.existsSync(expandedPath)) {
cliLogger.debug(`Found ${cliName} at: ${expandedPath}`);
return expandedPath;
}
}
return null;
}
/**
* Detect CLI installation using appropriate strategy
*/
protected detectCli(): CliDetectionResult {
const config = this.getSpawnConfig();
const cliName = this.getCliName();
const wslLogger = (msg: string) => cliLogger.debug(msg);
// Windows - use configured strategy
if (process.platform === 'win32') {
switch (config.windowsStrategy) {
case 'wsl': {
// Check WSL for CLI
if (isWslAvailable({ logger: wslLogger })) {
const wslResult: WslCliResult | null = findCliInWsl(cliName, {
logger: wslLogger,
distribution: config.wslDistribution,
});
if (wslResult) {
cliLogger.debug(
`Using ${cliName} via WSL (${wslResult.distribution || 'default'}): ${wslResult.wslPath}`
);
return {
cliPath: 'wsl.exe',
useWsl: true,
wslCliPath: wslResult.wslPath,
wslDistribution: wslResult.distribution,
strategy: 'wsl',
};
}
}
cliLogger.debug(`${cliName} not found (WSL not available or CLI not installed in WSL)`);
return { cliPath: null, useWsl: false, strategy: 'wsl' };
}
case 'npx': {
// For npx, we don't need to find the CLI, just return npx
cliLogger.debug(`Using ${cliName} via npx (package: ${config.npxPackage})`);
return {
cliPath: 'npx',
useWsl: false,
strategy: 'npx',
};
}
case 'direct':
case 'cmd': {
// Native Windows - check PATH and common paths
const pathResult = this.findCliInPath();
if (pathResult) {
return { cliPath: pathResult, useWsl: false, strategy: config.windowsStrategy };
}
const commonResult = this.findCliInCommonPaths();
if (commonResult) {
return { cliPath: commonResult, useWsl: false, strategy: config.windowsStrategy };
}
cliLogger.debug(`${cliName} not found on Windows`);
return { cliPath: null, useWsl: false, strategy: config.windowsStrategy };
}
}
}
// Linux/macOS - native execution
const pathResult = this.findCliInPath();
if (pathResult) {
return { cliPath: pathResult, useWsl: false, strategy: 'native' };
}
const commonResult = this.findCliInCommonPaths();
if (commonResult) {
return { cliPath: commonResult, useWsl: false, strategy: 'native' };
}
cliLogger.debug(`${cliName} not found`);
return { cliPath: null, useWsl: false, strategy: 'native' };
}
/**
* Ensure CLI is detected (lazy initialization)
*/
protected ensureCliDetected(): void {
if (this.cliPath !== null || this.detectedStrategy !== 'native') {
return; // Already detected
}
const result = this.detectCli();
this.cliPath = result.cliPath;
this.useWsl = result.useWsl;
this.wslCliPath = result.wslCliPath || null;
this.wslDistribution = result.wslDistribution;
this.detectedStrategy = result.strategy;
// Set up npx args if using npx strategy
const config = this.getSpawnConfig();
if (result.strategy === 'npx' && config.npxPackage) {
this.npxArgs = [config.npxPackage];
}
}
/**
* Check if CLI is installed
*/
async isInstalled(): Promise<boolean> {
this.ensureCliDetected();
return this.cliPath !== null;
}
// ==========================================================================
// Subprocess Spawning
// ==========================================================================
/**
* Build subprocess options based on detected strategy
*/
protected buildSubprocessOptions(options: ExecuteOptions, cliArgs: string[]): SubprocessOptions {
this.ensureCliDetected();
if (!this.cliPath) {
throw new Error(`${this.getCliName()} CLI not found. ${this.getInstallInstructions()}`);
}
const cwd = options.cwd || process.cwd();
// Filter undefined values from process.env
const filteredEnv: Record<string, string> = {};
for (const [key, value] of Object.entries(process.env)) {
if (value !== undefined) {
filteredEnv[key] = value;
}
}
// Calculate dynamic timeout based on reasoning effort.
// This addresses GitHub issue #530 where reasoning models with 'xhigh' effort would timeout.
const timeout = calculateReasoningTimeout(options.reasoningEffort, CLI_BASE_TIMEOUT_MS);
// WSL strategy
if (this.useWsl && this.wslCliPath) {
const wslCwd = windowsToWslPath(cwd);
const wslCmd = createWslCommand(this.wslCliPath, cliArgs, {
distribution: this.wslDistribution,
});
// Add --cd flag to change directory inside WSL
let args: string[];
if (this.wslDistribution) {
args = ['-d', this.wslDistribution, '--cd', wslCwd, this.wslCliPath, ...cliArgs];
} else {
args = ['--cd', wslCwd, this.wslCliPath, ...cliArgs];
}
cliLogger.debug(`WSL spawn: ${wslCmd.command} ${args.slice(0, 6).join(' ')}...`);
return {
command: wslCmd.command,
args,
cwd, // Windows cwd for spawn
env: filteredEnv,
abortController: options.abortController,
timeout,
};
}
// NPX strategy
if (this.detectedStrategy === 'npx') {
const allArgs = [...this.npxArgs, ...cliArgs];
cliLogger.debug(`NPX spawn: npx ${allArgs.slice(0, 6).join(' ')}...`);
return {
command: 'npx',
args: allArgs,
cwd,
env: filteredEnv,
abortController: options.abortController,
timeout,
};
}
// Direct strategy (native Unix or Windows direct/cmd)
cliLogger.debug(`Direct spawn: ${this.cliPath} ${cliArgs.slice(0, 6).join(' ')}...`);
return {
command: this.cliPath,
args: cliArgs,
cwd,
env: filteredEnv,
abortController: options.abortController,
timeout,
};
}
/**
* Execute a query using the CLI with JSONL streaming
*
* This is a default implementation that:
* 1. Builds CLI args from options
* 2. Spawns the subprocess with appropriate strategy
* 3. Streams and normalizes events
*
* Subclasses can override for custom behavior.
*/
async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
this.ensureCliDetected();
if (!this.cliPath) {
throw new Error(`${this.getCliName()} CLI not found. ${this.getInstallInstructions()}`);
}
// Many CLI-based providers do not support a separate "system" message.
// If a systemPrompt is provided, embed it into the prompt so downstream models
// still receive critical formatting/schema instructions (e.g., JSON-only outputs).
const effectiveOptions = this.embedSystemPromptIntoPrompt(options);
const cliArgs = this.buildCliArgs(effectiveOptions);
const subprocessOptions = this.buildSubprocessOptions(effectiveOptions, cliArgs);
try {
for await (const rawEvent of spawnJSONLProcess(subprocessOptions)) {
const normalized = this.normalizeEvent(rawEvent);
if (normalized) {
yield normalized;
}
}
} catch (error) {
if (isAbortError(error)) {
cliLogger.debug('Query aborted');
return;
}
// Map CLI errors
if (error instanceof Error && 'stderr' in error) {
const errorInfo = this.mapError(
(error as { stderr?: string }).stderr || error.message,
(error as { exitCode?: number | null }).exitCode ?? null
);
const cliError = new Error(errorInfo.message) as Error & CliErrorInfo;
cliError.code = errorInfo.code;
cliError.recoverable = errorInfo.recoverable;
cliError.suggestion = errorInfo.suggestion;
throw cliError;
}
throw error;
}
}
/**
* Embed system prompt text into the user prompt for CLI providers.
*
* Most CLI providers we integrate with only accept a single prompt via stdin/args.
* When upstream code supplies `options.systemPrompt`, we prepend it to the prompt
* content and clear `systemPrompt` to avoid any accidental double-injection by
* subclasses.
*/
protected embedSystemPromptIntoPrompt(options: ExecuteOptions): ExecuteOptions {
if (!options.systemPrompt) {
return options;
}
// Only string system prompts can be reliably embedded for CLI providers.
// Presets are provider-specific (e.g., Claude SDK) and cannot be represented
// universally. If a preset is provided, we only embed its optional `append`.
const systemText =
typeof options.systemPrompt === 'string'
? options.systemPrompt
: options.systemPrompt.append
? options.systemPrompt.append
: '';
if (!systemText) {
return { ...options, systemPrompt: undefined };
}
// Preserve original prompt structure.
if (typeof options.prompt === 'string') {
return {
...options,
prompt: `${systemText}\n\n---\n\n${options.prompt}`,
systemPrompt: undefined,
};
}
if (Array.isArray(options.prompt)) {
return {
...options,
prompt: [{ type: 'text', text: systemText }, ...options.prompt],
systemPrompt: undefined,
};
}
// Should be unreachable due to ExecuteOptions typing, but keep safe.
return { ...options, systemPrompt: undefined };
}
}
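For orientation, a skeleton subclass under stated assumptions (the CLI name, flags, npm package, and event shape below are all illustrative; a real provider must match its tool's actual interface):

```typescript
import { CliProvider, type CliSpawnConfig } from './cli-provider.js';
import type { ExecuteOptions, ProviderMessage } from './types.js';

class ExampleCliProvider extends CliProvider {
  getCliName(): string {
    return 'example-agent'; // hypothetical CLI binary
  }

  getSpawnConfig(): CliSpawnConfig {
    return {
      windowsStrategy: 'npx',
      npxPackage: 'example-agent', // hypothetical npm package
      commonPaths: {
        linux: ['~/.local/bin/example-agent'],
        darwin: ['~/.local/bin/example-agent'],
        win32: [],
      },
    };
  }

  buildCliArgs(options: ExecuteOptions): string[] {
    // Assumes the CLI takes a prompt flag and streams JSONL to stdout.
    const prompt = typeof options.prompt === 'string' ? options.prompt : '';
    return ['--output-format', 'jsonl', '--model', options.model, '--prompt', prompt];
  }

  normalizeEvent(event: unknown): ProviderMessage | null {
    const record = event as { type?: string; text?: string };
    if (record?.type === 'text' && typeof record.text === 'string') {
      return {
        type: 'assistant',
        message: { role: 'assistant', content: [{ type: 'text', text: record.text }] },
      } as ProviderMessage;
    }
    return null; // skip events this sketch does not recognize
  }
}
```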

View File

@@ -0,0 +1,85 @@
/**
* Codex Config Manager - Writes MCP server configuration for Codex CLI
*/
import path from 'path';
import type { McpServerConfig } from '@automaker/types';
import * as secureFs from '../lib/secure-fs.js';
const CODEX_CONFIG_DIR = '.codex';
const CODEX_CONFIG_FILENAME = 'config.toml';
const CODEX_MCP_SECTION = 'mcp_servers';
function formatTomlString(value: string): string {
return JSON.stringify(value);
}
function formatTomlArray(values: string[]): string {
const formatted = values.map((value) => formatTomlString(value)).join(', ');
return `[${formatted}]`;
}
function formatTomlInlineTable(values: Record<string, string>): string {
const entries = Object.entries(values).map(
([key, value]) => `${key} = ${formatTomlString(value)}`
);
return `{ ${entries.join(', ')} }`;
}
function formatTomlKey(key: string): string {
return `"${key.replace(/"/g, '\\"')}"`;
}
function buildServerBlock(name: string, server: McpServerConfig): string[] {
const lines: string[] = [];
const section = `${CODEX_MCP_SECTION}.${formatTomlKey(name)}`;
lines.push(`[${section}]`);
if (server.type) {
lines.push(`type = ${formatTomlString(server.type)}`);
}
if ('command' in server && server.command) {
lines.push(`command = ${formatTomlString(server.command)}`);
}
if ('args' in server && server.args && server.args.length > 0) {
lines.push(`args = ${formatTomlArray(server.args)}`);
}
if ('env' in server && server.env && Object.keys(server.env).length > 0) {
lines.push(`env = ${formatTomlInlineTable(server.env)}`);
}
if ('url' in server && server.url) {
lines.push(`url = ${formatTomlString(server.url)}`);
}
if ('headers' in server && server.headers && Object.keys(server.headers).length > 0) {
lines.push(`headers = ${formatTomlInlineTable(server.headers)}`);
}
return lines;
}
export class CodexConfigManager {
async configureMcpServers(
cwd: string,
mcpServers: Record<string, McpServerConfig>
): Promise<void> {
const configDir = path.join(cwd, CODEX_CONFIG_DIR);
const configPath = path.join(configDir, CODEX_CONFIG_FILENAME);
await secureFs.mkdir(configDir, { recursive: true });
const blocks: string[] = [];
for (const [name, server] of Object.entries(mcpServers)) {
blocks.push(...buildServerBlock(name, server), '');
}
const content = blocks.join('\n').trim();
if (content) {
await secureFs.writeFile(configPath, content + '\n', 'utf-8');
}
}
}
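To make the emitted TOML concrete, here is what a single stdio-style server entry would produce (server name, package, and token are made up; the exact McpServerConfig union lives in @automaker/types and is not shown here):

```typescript
import { CodexConfigManager } from './codex-config-manager.js'; // hypothetical module path
import type { McpServerConfig } from '@automaker/types';

async function writeExampleConfig(): Promise<void> {
  const servers = {
    'github-mcp': {
      type: 'stdio',
      command: 'npx',
      args: ['-y', '@example/github-mcp'],
      env: { GITHUB_TOKEN: 'ghp_example' },
    },
  } as unknown as Record<string, McpServerConfig>; // sketch-only cast; full config type not shown in this diff

  await new CodexConfigManager().configureMcpServers('/path/to/worktree', servers);
  // Resulting /path/to/worktree/.codex/config.toml:
  //
  // [mcp_servers."github-mcp"]
  // type = "stdio"
  // command = "npx"
  // args = ["-y", "@example/github-mcp"]
  // env = { GITHUB_TOKEN = "ghp_example" }
}
```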

View File

@@ -0,0 +1,111 @@
/**
* Codex Model Definitions
*
* Official Codex CLI models as documented at https://developers.openai.com/codex/models/
*/
import { CODEX_MODEL_MAP } from '@automaker/types';
import type { ModelDefinition } from './types.js';
const CONTEXT_WINDOW_256K = 256000;
const CONTEXT_WINDOW_128K = 128000;
const MAX_OUTPUT_32K = 32000;
const MAX_OUTPUT_16K = 16000;
/**
* All available Codex models with their specifications
* Based on https://developers.openai.com/codex/models/
*/
export const CODEX_MODELS: ModelDefinition[] = [
// ========== Recommended Codex Models ==========
{
id: CODEX_MODEL_MAP.gpt52Codex,
name: 'GPT-5.2-Codex',
modelString: CODEX_MODEL_MAP.gpt52Codex,
provider: 'openai',
description:
'Most advanced agentic coding model for complex software engineering (default for ChatGPT users).',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
default: true,
hasReasoning: true,
},
{
id: CODEX_MODEL_MAP.gpt51CodexMax,
name: 'GPT-5.1-Codex-Max',
modelString: CODEX_MODEL_MAP.gpt51CodexMax,
provider: 'openai',
description: 'Optimized for long-horizon, agentic coding tasks in Codex.',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
hasReasoning: true,
},
{
id: CODEX_MODEL_MAP.gpt51CodexMini,
name: 'GPT-5.1-Codex-Mini',
modelString: CODEX_MODEL_MAP.gpt51CodexMini,
provider: 'openai',
description: 'Smaller, more cost-effective version for faster workflows.',
contextWindow: CONTEXT_WINDOW_128K,
maxOutputTokens: MAX_OUTPUT_16K,
supportsVision: true,
supportsTools: true,
tier: 'basic' as const,
hasReasoning: false,
},
// ========== General-Purpose GPT Models ==========
{
id: CODEX_MODEL_MAP.gpt52,
name: 'GPT-5.2',
modelString: CODEX_MODEL_MAP.gpt52,
provider: 'openai',
description: 'Best general agentic model for tasks across industries and domains.',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'standard' as const,
hasReasoning: true,
},
{
id: CODEX_MODEL_MAP.gpt51,
name: 'GPT-5.1',
modelString: CODEX_MODEL_MAP.gpt51,
provider: 'openai',
description: 'Great for coding and agentic tasks across domains.',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'standard' as const,
hasReasoning: true,
},
];
/**
* Get model definition by ID
*/
export function getCodexModelById(modelId: string): ModelDefinition | undefined {
return CODEX_MODELS.find((m) => m.id === modelId || m.modelString === modelId);
}
/**
* Get all models that support reasoning
*/
export function getReasoningModels(): ModelDefinition[] {
return CODEX_MODELS.filter((m) => m.hasReasoning);
}
/**
* Get models by tier
*/
export function getModelsByTier(tier: 'premium' | 'standard' | 'basic'): ModelDefinition[] {
return CODEX_MODELS.filter((m) => m.tier === tier);
}
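The helper functions above resolve by either id or modelString, so lookups work with whichever form CODEX_MODEL_MAP exposes (the literal id below is only a guess at that map's value):

```typescript
import { getCodexModelById, getReasoningModels, getModelsByTier } from './codex-models.js'; // hypothetical path

// Resolve a single model; the string must match an id or modelString from CODEX_MODEL_MAP.
const model = getCodexModelById('gpt-5.2-codex');
console.log(model?.name, model?.contextWindow); // e.g. "GPT-5.2-Codex", 256000 (if the id matches)

// Filter the catalog by capability and tier.
const reasoningIds = getReasoningModels().map((m) => m.id);
const premiumIds = getModelsByTier('premium').map((m) => m.id);
console.log({ reasoningIds, premiumIds });
```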

File diff suppressed because it is too large

View File

@@ -0,0 +1,173 @@
/**
* Codex SDK client - Executes Codex queries via official @openai/codex-sdk
*
* Used for programmatic control of Codex from within the application.
* Provides cleaner integration than spawning CLI processes.
*/
import { Codex } from '@openai/codex-sdk';
import { formatHistoryAsText, classifyError, getUserFriendlyErrorMessage } from '@automaker/utils';
import { supportsReasoningEffort } from '@automaker/types';
import type { ExecuteOptions, ProviderMessage } from './types.js';
const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
const SDK_HISTORY_HEADER = 'Current request:\n';
const DEFAULT_RESPONSE_TEXT = '';
const SDK_ERROR_DETAILS_LABEL = 'Details:';
type PromptBlock = {
type: string;
text?: string;
source?: {
type?: string;
media_type?: string;
data?: string;
};
};
function resolveApiKey(): string {
const apiKey = process.env[OPENAI_API_KEY_ENV];
if (!apiKey) {
throw new Error('OPENAI_API_KEY is not set.');
}
return apiKey;
}
function normalizePromptBlocks(prompt: ExecuteOptions['prompt']): PromptBlock[] {
if (Array.isArray(prompt)) {
return prompt as PromptBlock[];
}
return [{ type: 'text', text: prompt }];
}
function buildPromptText(options: ExecuteOptions, systemPrompt: string | null): string {
const historyText =
options.conversationHistory && options.conversationHistory.length > 0
? formatHistoryAsText(options.conversationHistory)
: '';
const promptBlocks = normalizePromptBlocks(options.prompt);
const promptTexts: string[] = [];
for (const block of promptBlocks) {
if (block.type === 'text' && typeof block.text === 'string' && block.text.trim()) {
promptTexts.push(block.text);
}
}
const promptContent = promptTexts.join('\n\n');
if (!promptContent.trim()) {
throw new Error('Codex SDK prompt is empty.');
}
const parts: string[] = [];
if (systemPrompt) {
parts.push(`System: ${systemPrompt}`);
}
if (historyText) {
parts.push(historyText);
}
parts.push(`${SDK_HISTORY_HEADER}${promptContent}`);
return parts.join('\n\n');
}
function buildSdkErrorMessage(rawMessage: string, userMessage: string): string {
if (!rawMessage) {
return userMessage;
}
if (!userMessage || rawMessage === userMessage) {
return rawMessage;
}
return `${userMessage}\n\n${SDK_ERROR_DETAILS_LABEL} ${rawMessage}`;
}
/**
* Execute a query using the official Codex SDK
*
* The SDK provides a cleaner interface than spawning CLI processes:
* - Handles authentication automatically
* - Provides TypeScript types
* - Supports thread management and resumption
* - Better error handling
*/
export async function* executeCodexSdkQuery(
options: ExecuteOptions,
systemPrompt: string | null
): AsyncGenerator<ProviderMessage> {
try {
const apiKey = resolveApiKey();
const codex = new Codex({ apiKey });
// Resume existing thread or start new one
let thread;
if (options.sdkSessionId) {
try {
thread = codex.resumeThread(options.sdkSessionId);
} catch {
// If resume fails, start a new thread
thread = codex.startThread();
}
} else {
thread = codex.startThread();
}
const promptText = buildPromptText(options, systemPrompt);
// Build run options with reasoning effort if supported
const runOptions: {
signal?: AbortSignal;
reasoning?: { effort: string };
} = {
signal: options.abortController?.signal,
};
// Add reasoning effort if model supports it and reasoningEffort is specified
if (
options.reasoningEffort &&
supportsReasoningEffort(options.model) &&
options.reasoningEffort !== 'none'
) {
runOptions.reasoning = { effort: options.reasoningEffort };
}
// Run the query
const result = await thread.run(promptText, runOptions);
// Extract response text (from finalResponse property)
const outputText = result.finalResponse ?? DEFAULT_RESPONSE_TEXT;
// Get thread ID (may be null if not populated yet)
const threadId = thread.id ?? undefined;
// Yield assistant message
yield {
type: 'assistant',
session_id: threadId,
message: {
role: 'assistant',
content: [{ type: 'text', text: outputText }],
},
};
// Yield result
yield {
type: 'result',
subtype: 'success',
session_id: threadId,
result: outputText,
};
} catch (error) {
const errorInfo = classifyError(error);
const userMessage = getUserFriendlyErrorMessage(error);
const combinedMessage = buildSdkErrorMessage(errorInfo.message, userMessage);
console.error('[CodexSDK] executeQuery() error during execution:', {
type: errorInfo.type,
message: errorInfo.message,
isRateLimit: errorInfo.isRateLimit,
retryAfter: errorInfo.retryAfter,
stack: error instanceof Error ? error.stack : undefined,
});
yield { type: 'error', error: combinedMessage };
}
}
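Consuming the generator looks like this; only the ExecuteOptions fields this client actually reads are set, and the cast is there because the full type is not shown in this diff:

```typescript
import { executeCodexSdkQuery } from './codex-sdk-client.js'; // hypothetical module path
import type { ExecuteOptions } from './types.js';

async function runOnce(): Promise<string> {
  const options = {
    prompt: 'Summarize the failing tests in this repo.',
    model: 'gpt-5.2-codex', // illustrative model id
    abortController: new AbortController(),
  } as unknown as ExecuteOptions; // sketch-only cast; remaining fields are left unset

  let finalText = '';
  for await (const message of executeCodexSdkQuery(options, 'Respond in plain text.')) {
    if (message.type === 'error') throw new Error(String(message.error));
    if (message.type === 'result') finalText = message.result ?? '';
  }
  return finalText;
}
```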

View File

@@ -0,0 +1,436 @@
export type CodexToolResolution = {
name: string;
input: Record<string, unknown>;
};
export type CodexTodoItem = {
content: string;
status: 'pending' | 'in_progress' | 'completed';
activeForm?: string;
};
const TOOL_NAME_BASH = 'Bash';
const TOOL_NAME_READ = 'Read';
const TOOL_NAME_EDIT = 'Edit';
const TOOL_NAME_WRITE = 'Write';
const TOOL_NAME_GREP = 'Grep';
const TOOL_NAME_GLOB = 'Glob';
const TOOL_NAME_TODO = 'TodoWrite';
const TOOL_NAME_DELETE = 'Delete';
const TOOL_NAME_LS = 'Ls';
const INPUT_KEY_COMMAND = 'command';
const INPUT_KEY_FILE_PATH = 'file_path';
const INPUT_KEY_PATTERN = 'pattern';
const SHELL_WRAPPER_PATTERNS = [
/^\/bin\/bash\s+-lc\s+["']([\s\S]+)["']$/,
/^bash\s+-lc\s+["']([\s\S]+)["']$/,
/^\/bin\/sh\s+-lc\s+["']([\s\S]+)["']$/,
/^sh\s+-lc\s+["']([\s\S]+)["']$/,
/^cmd\.exe\s+\/c\s+["']?([\s\S]+)["']?$/i,
/^powershell(?:\.exe)?\s+-Command\s+["']?([\s\S]+)["']?$/i,
/^pwsh(?:\.exe)?\s+-Command\s+["']?([\s\S]+)["']?$/i,
] as const;
const COMMAND_SEPARATOR_PATTERN = /\s*(?:&&|\|\||;)\s*/;
const SEGMENT_SKIP_PREFIXES = ['cd ', 'export ', 'set ', 'pushd '] as const;
const WRAPPER_COMMANDS = new Set(['sudo', 'env', 'command']);
const READ_COMMANDS = new Set(['cat', 'sed', 'head', 'tail', 'less', 'more', 'bat', 'stat', 'wc']);
const SEARCH_COMMANDS = new Set(['rg', 'grep', 'ag', 'ack']);
const GLOB_COMMANDS = new Set(['ls', 'find', 'fd', 'tree']);
const DELETE_COMMANDS = new Set(['rm', 'del', 'erase', 'remove', 'unlink']);
const LIST_COMMANDS = new Set(['ls', 'dir', 'll', 'la']);
const WRITE_COMMANDS = new Set(['tee', 'touch', 'mkdir']);
const APPLY_PATCH_COMMAND = 'apply_patch';
const APPLY_PATCH_PATTERN = /\bapply_patch\b/;
const REDIRECTION_TARGET_PATTERN = /(?:>>|>)\s*([^\s]+)/;
const SED_IN_PLACE_FLAGS = new Set(['-i', '--in-place']);
const PERL_IN_PLACE_FLAG = /-.*i/;
const SEARCH_PATTERN_FLAGS = new Set(['-e', '--regexp']);
const SEARCH_VALUE_FLAGS = new Set([
'-g',
'--glob',
'--iglob',
'--type',
'--type-add',
'--type-clear',
'--encoding',
]);
const SEARCH_FILE_LIST_FLAGS = new Set(['--files']);
const TODO_LINE_PATTERN = /^[-*]\s*(?:\[(?<status>[ x~])\]\s*)?(?<content>.+)$/;
const TODO_STATUS_COMPLETED = 'completed';
const TODO_STATUS_IN_PROGRESS = 'in_progress';
const TODO_STATUS_PENDING = 'pending';
const PATCH_FILE_MARKERS = [
'*** Update File: ',
'*** Add File: ',
'*** Delete File: ',
'*** Move to: ',
] as const;
function stripShellWrapper(command: string): string {
const trimmed = command.trim();
for (const pattern of SHELL_WRAPPER_PATTERNS) {
const match = trimmed.match(pattern);
if (match && match[1]) {
return unescapeCommand(match[1].trim());
}
}
return trimmed;
}
function unescapeCommand(command: string): string {
return command.replace(/\\(["'])/g, '$1');
}
function extractPrimarySegment(command: string): string {
const segments = command
.split(COMMAND_SEPARATOR_PATTERN)
.map((segment) => segment.trim())
.filter(Boolean);
for (const segment of segments) {
const shouldSkip = SEGMENT_SKIP_PREFIXES.some((prefix) => segment.startsWith(prefix));
if (!shouldSkip) {
return segment;
}
}
return command.trim();
}
function tokenizeCommand(command: string): string[] {
const tokens: string[] = [];
let current = '';
let inSingleQuote = false;
let inDoubleQuote = false;
let isEscaped = false;
for (const char of command) {
if (isEscaped) {
current += char;
isEscaped = false;
continue;
}
if (char === '\\') {
isEscaped = true;
continue;
}
if (char === "'" && !inDoubleQuote) {
inSingleQuote = !inSingleQuote;
continue;
}
if (char === '"' && !inSingleQuote) {
inDoubleQuote = !inDoubleQuote;
continue;
}
if (!inSingleQuote && !inDoubleQuote && /\s/.test(char)) {
if (current) {
tokens.push(current);
current = '';
}
continue;
}
current += char;
}
if (current) {
tokens.push(current);
}
return tokens;
}
function stripWrapperTokens(tokens: string[]): string[] {
let index = 0;
while (index < tokens.length && WRAPPER_COMMANDS.has(tokens[index].toLowerCase())) {
index += 1;
}
return tokens.slice(index);
}
function extractFilePathFromTokens(tokens: string[]): string | null {
const candidates = tokens.slice(1).filter((token) => token && !token.startsWith('-'));
if (candidates.length === 0) return null;
return candidates[candidates.length - 1];
}
function extractSearchPattern(tokens: string[]): string | null {
const remaining = tokens.slice(1);
for (let index = 0; index < remaining.length; index += 1) {
const token = remaining[index];
if (token === '--') {
return remaining[index + 1] ?? null;
}
if (SEARCH_PATTERN_FLAGS.has(token)) {
return remaining[index + 1] ?? null;
}
if (SEARCH_VALUE_FLAGS.has(token)) {
index += 1;
continue;
}
if (token.startsWith('-')) {
continue;
}
return token;
}
return null;
}
function extractTeeTarget(tokens: string[]): string | null {
const teeIndex = tokens.findIndex((token) => token === 'tee');
if (teeIndex < 0) return null;
const candidate = tokens[teeIndex + 1];
return candidate && !candidate.startsWith('-') ? candidate : null;
}
function extractRedirectionTarget(command: string): string | null {
const match = command.match(REDIRECTION_TARGET_PATTERN);
return match?.[1] ?? null;
}
function extractFilePathFromDeleteTokens(tokens: string[]): string | null {
// rm file.txt or rm /path/to/file.txt
// Skip flags and get the first non-flag argument
for (let i = 1; i < tokens.length; i++) {
const token = tokens[i];
if (token && !token.startsWith('-')) {
return token;
}
}
return null;
}
function hasSedInPlaceFlag(tokens: string[]): boolean {
return tokens.some((token) => SED_IN_PLACE_FLAGS.has(token) || token.startsWith('-i'));
}
function hasPerlInPlaceFlag(tokens: string[]): boolean {
return tokens.some((token) => PERL_IN_PLACE_FLAG.test(token));
}
function extractPatchFilePath(command: string): string | null {
for (const marker of PATCH_FILE_MARKERS) {
const index = command.indexOf(marker);
if (index < 0) continue;
const start = index + marker.length;
const end = command.indexOf('\n', start);
const rawPath = (end === -1 ? command.slice(start) : command.slice(start, end)).trim();
if (rawPath) return rawPath;
}
return null;
}
function buildInputWithFilePath(filePath: string | null): Record<string, unknown> {
return filePath ? { [INPUT_KEY_FILE_PATH]: filePath } : {};
}
function buildInputWithPattern(pattern: string | null): Record<string, unknown> {
return pattern ? { [INPUT_KEY_PATTERN]: pattern } : {};
}
export function resolveCodexToolCall(command: string): CodexToolResolution {
const normalized = stripShellWrapper(command);
const primarySegment = extractPrimarySegment(normalized);
const tokens = stripWrapperTokens(tokenizeCommand(primarySegment));
const commandToken = tokens[0]?.toLowerCase() ?? '';
const redirectionTarget = extractRedirectionTarget(primarySegment);
if (redirectionTarget) {
return {
name: TOOL_NAME_WRITE,
input: buildInputWithFilePath(redirectionTarget),
};
}
if (commandToken === APPLY_PATCH_COMMAND || APPLY_PATCH_PATTERN.test(primarySegment)) {
return {
name: TOOL_NAME_EDIT,
input: buildInputWithFilePath(extractPatchFilePath(primarySegment)),
};
}
if (commandToken === 'sed' && hasSedInPlaceFlag(tokens)) {
return {
name: TOOL_NAME_EDIT,
input: buildInputWithFilePath(extractFilePathFromTokens(tokens)),
};
}
if (commandToken === 'perl' && hasPerlInPlaceFlag(tokens)) {
return {
name: TOOL_NAME_EDIT,
input: buildInputWithFilePath(extractFilePathFromTokens(tokens)),
};
}
if (WRITE_COMMANDS.has(commandToken)) {
const filePath =
commandToken === 'tee' ? extractTeeTarget(tokens) : extractFilePathFromTokens(tokens);
return {
name: TOOL_NAME_WRITE,
input: buildInputWithFilePath(filePath),
};
}
if (SEARCH_COMMANDS.has(commandToken)) {
if (tokens.some((token) => SEARCH_FILE_LIST_FLAGS.has(token))) {
return {
name: TOOL_NAME_GLOB,
input: buildInputWithPattern(extractFilePathFromTokens(tokens)),
};
}
return {
name: TOOL_NAME_GREP,
input: buildInputWithPattern(extractSearchPattern(tokens)),
};
}
// Handle Delete commands (rm, del, erase, remove, unlink)
if (DELETE_COMMANDS.has(commandToken)) {
// Skip recursive/forced deletes (-r, -rf, -f): route them to Bash instead
if (
tokens.some((token) => token === '-r' || token === '-rf' || token === '-f')
) {
return {
name: TOOL_NAME_BASH,
input: { [INPUT_KEY_COMMAND]: normalized },
};
}
// Simple file deletion - extract the file path
const filePath = extractFilePathFromDeleteTokens(tokens);
if (filePath) {
return {
name: TOOL_NAME_DELETE,
input: { path: filePath },
};
}
// Fall back to bash if we can't determine the file path
return {
name: TOOL_NAME_BASH,
input: { [INPUT_KEY_COMMAND]: normalized },
};
}
// Handle simple Ls commands (just listing, not find/glob)
if (LIST_COMMANDS.has(commandToken)) {
const filePath = extractFilePathFromTokens(tokens);
return {
name: TOOL_NAME_LS,
input: { path: filePath || '.' },
};
}
if (GLOB_COMMANDS.has(commandToken)) {
return {
name: TOOL_NAME_GLOB,
input: buildInputWithPattern(extractFilePathFromTokens(tokens)),
};
}
if (READ_COMMANDS.has(commandToken)) {
return {
name: TOOL_NAME_READ,
input: buildInputWithFilePath(extractFilePathFromTokens(tokens)),
};
}
return {
name: TOOL_NAME_BASH,
input: { [INPUT_KEY_COMMAND]: normalized },
};
}
function parseTodoLines(lines: string[]): CodexTodoItem[] {
const todos: CodexTodoItem[] = [];
for (const line of lines) {
const match = line.match(TODO_LINE_PATTERN);
if (!match?.groups?.content) continue;
const statusToken = match.groups.status;
const status =
statusToken === 'x'
? TODO_STATUS_COMPLETED
: statusToken === '~'
? TODO_STATUS_IN_PROGRESS
: TODO_STATUS_PENDING;
todos.push({ content: match.groups.content.trim(), status });
}
return todos;
}
function extractTodoFromArray(value: unknown[]): CodexTodoItem[] {
return value
.map((entry) => {
if (typeof entry === 'string') {
return { content: entry, status: TODO_STATUS_PENDING };
}
if (entry && typeof entry === 'object') {
const record = entry as Record<string, unknown>;
const content =
typeof record.content === 'string'
? record.content
: typeof record.text === 'string'
? record.text
: typeof record.title === 'string'
? record.title
: null;
if (!content) return null;
const status =
record.status === TODO_STATUS_COMPLETED ||
record.status === TODO_STATUS_IN_PROGRESS ||
record.status === TODO_STATUS_PENDING
? (record.status as CodexTodoItem['status'])
: TODO_STATUS_PENDING;
const activeForm = typeof record.activeForm === 'string' ? record.activeForm : undefined;
return { content, status, activeForm };
}
return null;
})
.filter((item): item is CodexTodoItem => Boolean(item));
}
export function extractCodexTodoItems(item: Record<string, unknown>): CodexTodoItem[] | null {
const todosValue = item.todos;
if (Array.isArray(todosValue)) {
const todos = extractTodoFromArray(todosValue);
return todos.length > 0 ? todos : null;
}
const itemsValue = item.items;
if (Array.isArray(itemsValue)) {
const todos = extractTodoFromArray(itemsValue);
return todos.length > 0 ? todos : null;
}
const textValue =
typeof item.text === 'string'
? item.text
: typeof item.content === 'string'
? item.content
: null;
if (!textValue) return null;
const lines = textValue
.split('\n')
.map((line) => line.trim())
.filter(Boolean);
const todos = parseTodoLines(lines);
return todos.length > 0 ? todos : null;
}
export function getCodexTodoToolName(): string {
return TOOL_NAME_TODO;
}

View File

@@ -0,0 +1,197 @@
/**
* Cursor CLI Configuration Manager
*
* Manages Cursor CLI configuration stored in .automaker/cursor-config.json
*/
import * as fs from 'fs';
import * as path from 'path';
import { getAllCursorModelIds, type CursorCliConfig, type CursorModelId } from '@automaker/types';
import { createLogger } from '@automaker/utils';
import { getAutomakerDir } from '@automaker/platform';
// Create logger for this module
const logger = createLogger('CursorConfigManager');
/**
* Manages Cursor CLI configuration
* Config location: .automaker/cursor-config.json
*/
export class CursorConfigManager {
private configPath: string;
private config: CursorCliConfig;
constructor(projectPath: string) {
// Use getAutomakerDir for consistent path resolution
this.configPath = path.join(getAutomakerDir(projectPath), 'cursor-config.json');
this.config = this.loadConfig();
}
/**
* Load configuration from disk
*/
private loadConfig(): CursorCliConfig {
try {
if (fs.existsSync(this.configPath)) {
const content = fs.readFileSync(this.configPath, 'utf8');
const parsed = JSON.parse(content) as CursorCliConfig;
logger.debug(`Loaded config from ${this.configPath}`);
return parsed;
}
} catch (error) {
logger.warn('Failed to load config:', error);
}
// Return default config with all available models
return {
defaultModel: 'auto',
models: getAllCursorModelIds(),
};
}
/**
* Save configuration to disk
*/
private saveConfig(): void {
try {
const dir = path.dirname(this.configPath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
fs.writeFileSync(this.configPath, JSON.stringify(this.config, null, 2));
logger.debug('Config saved');
} catch (error) {
logger.error('Failed to save config:', error);
throw error;
}
}
/**
* Get the full configuration
*/
getConfig(): CursorCliConfig {
return { ...this.config };
}
/**
* Get the default model
*/
getDefaultModel(): CursorModelId {
return this.config.defaultModel || 'auto';
}
/**
* Set the default model
*/
setDefaultModel(model: CursorModelId): void {
this.config.defaultModel = model;
this.saveConfig();
logger.info(`Default model set to: ${model}`);
}
/**
* Get enabled models
*/
getEnabledModels(): CursorModelId[] {
return this.config.models || ['auto'];
}
/**
* Set enabled models
*/
setEnabledModels(models: CursorModelId[]): void {
this.config.models = models;
this.saveConfig();
logger.info(`Enabled models updated: ${models.join(', ')}`);
}
/**
* Add a model to enabled list
*/
addModel(model: CursorModelId): void {
if (!this.config.models) {
this.config.models = [];
}
if (!this.config.models.includes(model)) {
this.config.models.push(model);
this.saveConfig();
logger.info(`Model added: ${model}`);
}
}
/**
* Remove a model from enabled list
*/
removeModel(model: CursorModelId): void {
if (this.config.models) {
this.config.models = this.config.models.filter((m) => m !== model);
this.saveConfig();
logger.info(`Model removed: ${model}`);
}
}
/**
* Check if a model is enabled
*/
isModelEnabled(model: CursorModelId): boolean {
return this.config.models?.includes(model) ?? false;
}
/**
* Get MCP server configurations
*/
getMcpServers(): string[] {
return this.config.mcpServers || [];
}
/**
* Set MCP server configurations
*/
setMcpServers(servers: string[]): void {
this.config.mcpServers = servers;
this.saveConfig();
logger.info(`MCP servers updated: ${servers.join(', ')}`);
}
/**
* Get Cursor rules paths
*/
getRules(): string[] {
return this.config.rules || [];
}
/**
* Set Cursor rules paths
*/
setRules(rules: string[]): void {
this.config.rules = rules;
this.saveConfig();
logger.info(`Rules updated: ${rules.join(', ')}`);
}
/**
* Reset configuration to defaults
*/
reset(): void {
this.config = {
defaultModel: 'auto',
models: getAllCursorModelIds(),
};
this.saveConfig();
logger.info('Config reset to defaults');
}
/**
* Check if config file exists
*/
exists(): boolean {
return fs.existsSync(this.configPath);
}
/**
* Get the config file path
*/
getConfigPath(): string {
return this.configPath;
}
}

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,40 @@
/**
* Provider exports
*/
// Base providers
export { BaseProvider } from './base-provider.js';
export {
CliProvider,
type SpawnStrategy,
type CliSpawnConfig,
type CliErrorInfo,
} from './cli-provider.js';
export type {
ProviderConfig,
ExecuteOptions,
ProviderMessage,
InstallationStatus,
ModelDefinition,
} from './types.js';
// Claude provider
export { ClaudeProvider } from './claude-provider.js';
// Cursor provider
export { CursorProvider, CursorErrorCode, CursorError } from './cursor-provider.js';
export { CursorConfigManager } from './cursor-config-manager.js';
// OpenCode provider
export { OpencodeProvider } from './opencode-provider.js';
// Provider factory
export { ProviderFactory } from './provider-factory.js';
// Simple query service - unified interface for basic AI queries
export { simpleQuery, streamingQuery } from './simple-query-service.js';
export type {
SimpleQueryOptions,
SimpleQueryResult,
StreamingQueryOptions,
} from './simple-query-service.js';

File diff suppressed because it is too large Load Diff

View File

@@ -1,51 +1,168 @@
/** /**
* Provider Factory - Routes model IDs to the appropriate provider * Provider Factory - Routes model IDs to the appropriate provider
* *
* This factory implements model-based routing to automatically select * Uses a registry pattern for dynamic provider registration.
* the correct provider based on the model string. This makes adding * Providers register themselves on import, making it easy to add new providers.
* new providers (Cursor, OpenCode, etc.) trivial - just add one line.
*/ */
import { BaseProvider } from './base-provider.js'; import { BaseProvider } from './base-provider.js';
import { ClaudeProvider } from './claude-provider.js'; import type { InstallationStatus, ModelDefinition } from './types.js';
import type { InstallationStatus } from './types.js'; import { isCursorModel, isCodexModel, isOpencodeModel, type ModelProvider } from '@automaker/types';
import * as fs from 'fs';
import * as path from 'path';
const DISCONNECTED_MARKERS: Record<string, string> = {
claude: '.claude-disconnected',
codex: '.codex-disconnected',
cursor: '.cursor-disconnected',
opencode: '.opencode-disconnected',
};
/**
* Check if a provider CLI is disconnected from the app
*/
export function isProviderDisconnected(providerName: string): boolean {
const markerFile = DISCONNECTED_MARKERS[providerName.toLowerCase()];
if (!markerFile) return false;
const markerPath = path.join(process.cwd(), '.automaker', markerFile);
return fs.existsSync(markerPath);
}
/**
* Provider registration entry
*/
interface ProviderRegistration {
/** Factory function to create provider instance */
factory: () => BaseProvider;
/** Aliases for this provider (e.g., 'anthropic' for 'claude') */
aliases?: string[];
/** Function to check if this provider can handle a model ID */
canHandleModel?: (modelId: string) => boolean;
/** Priority for model matching (higher = checked first) */
priority?: number;
}
/**
* Provider registry - stores registered providers
*/
const providerRegistry = new Map<string, ProviderRegistration>();
/**
* Register a provider with the factory
*
* @param name Provider name (e.g., 'claude', 'cursor')
* @param registration Provider registration config
*/
export function registerProvider(name: string, registration: ProviderRegistration): void {
providerRegistry.set(name.toLowerCase(), registration);
}
export class ProviderFactory { export class ProviderFactory {
/** /**
* Get the appropriate provider for a given model ID * Determine which provider to use for a given model
* *
* @param modelId Model identifier (e.g., "claude-opus-4-5-20251101", "gpt-5.2", "cursor-fast") * @param model Model identifier
* @returns Provider instance for the model * @returns Provider name (ModelProvider type)
*/ */
static getProviderForModel(modelId: string): BaseProvider { static getProviderNameForModel(model: string): ModelProvider {
const lowerModel = modelId.toLowerCase(); const lowerModel = model.toLowerCase();
// Claude models (claude-*, opus, sonnet, haiku) // Get all registered providers sorted by priority (descending)
if (lowerModel.startsWith('claude-') || ['haiku', 'sonnet', 'opus'].includes(lowerModel)) { const registrations = Array.from(providerRegistry.entries()).sort(
return new ClaudeProvider(); ([, a], [, b]) => (b.priority ?? 0) - (a.priority ?? 0)
);
// Check each provider's canHandleModel function
for (const [name, reg] of registrations) {
if (reg.canHandleModel?.(lowerModel)) {
return name as ModelProvider;
}
} }
// Future providers: // Fallback: Check for explicit prefixes
// if (lowerModel.startsWith("cursor-")) { for (const [name] of registrations) {
// return new CursorProvider(); if (lowerModel.startsWith(`${name}-`)) {
// } return name as ModelProvider;
// if (lowerModel.startsWith("opencode-")) { }
// return new OpenCodeProvider(); }
// }
// Default to Claude for unknown models // Default to claude (first registered provider or claude)
console.warn(`[ProviderFactory] Unknown model prefix for "${modelId}", defaulting to Claude`); return 'claude';
return new ClaudeProvider(); }
/**
* Get the appropriate provider for a given model ID
*
* @param modelId Model identifier (e.g., "claude-opus-4-5-20251101", "cursor-gpt-4o", "cursor-auto")
* @param options Optional settings
* @param options.throwOnDisconnected Throw error if provider is disconnected (default: true)
* @returns Provider instance for the model
* @throws Error if provider is disconnected and throwOnDisconnected is true
*/
static getProviderForModel(
modelId: string,
options: { throwOnDisconnected?: boolean } = {}
): BaseProvider {
const { throwOnDisconnected = true } = options;
const providerName = this.getProviderForModelName(modelId);
// Check if provider is disconnected
if (throwOnDisconnected && isProviderDisconnected(providerName)) {
throw new Error(
`${providerName.charAt(0).toUpperCase() + providerName.slice(1)} CLI is disconnected from the app. ` +
`Please go to Settings > Providers and click "Sign In" to reconnect.`
);
}
const provider = this.getProviderByName(providerName);
if (!provider) {
// Fallback to claude if provider not found
const claudeReg = providerRegistry.get('claude');
if (claudeReg) {
return claudeReg.factory();
}
throw new Error(`No provider found for model: ${modelId}`);
}
return provider;
}
/**
* Get the provider name for a given model ID (without creating provider instance)
*/
static getProviderForModelName(modelId: string): string {
const lowerModel = modelId.toLowerCase();
// Get all registered providers sorted by priority (descending)
const registrations = Array.from(providerRegistry.entries()).sort(
([, a], [, b]) => (b.priority ?? 0) - (a.priority ?? 0)
);
// Check each provider's canHandleModel function
for (const [name, reg] of registrations) {
if (reg.canHandleModel?.(lowerModel)) {
return name;
}
}
// Fallback: Check for explicit prefixes
for (const [name] of registrations) {
if (lowerModel.startsWith(`${name}-`)) {
return name;
}
}
// Default to claude (first registered provider or claude)
return 'claude';
} }
/** /**
* Get all available providers * Get all available providers
*/ */
static getAllProviders(): BaseProvider[] { static getAllProviders(): BaseProvider[] {
return [ return Array.from(providerRegistry.values()).map((reg) => reg.factory());
new ClaudeProvider(),
// Future providers...
];
} }
/** /**
@@ -54,11 +171,10 @@ export class ProviderFactory {
* @returns Map of provider name to installation status * @returns Map of provider name to installation status
*/ */
static async checkAllProviders(): Promise<Record<string, InstallationStatus>> { static async checkAllProviders(): Promise<Record<string, InstallationStatus>> {
const providers = this.getAllProviders();
const statuses: Record<string, InstallationStatus> = {}; const statuses: Record<string, InstallationStatus> = {};
for (const provider of providers) { for (const [name, reg] of providerRegistry.entries()) {
const name = provider.getName(); const provider = reg.factory();
const status = await provider.detectInstallation(); const status = await provider.detectInstallation();
statuses[name] = status; statuses[name] = status;
} }
@@ -69,40 +185,119 @@ export class ProviderFactory {
/** /**
* Get provider by name (for direct access if needed) * Get provider by name (for direct access if needed)
* *
* @param name Provider name (e.g., "claude", "cursor") * @param name Provider name (e.g., "claude", "cursor") or alias (e.g., "anthropic")
* @returns Provider instance or null if not found * @returns Provider instance or null if not found
*/ */
static getProviderByName(name: string): BaseProvider | null { static getProviderByName(name: string): BaseProvider | null {
const lowerName = name.toLowerCase(); const lowerName = name.toLowerCase();
switch (lowerName) { // Direct lookup
case 'claude': const directReg = providerRegistry.get(lowerName);
case 'anthropic': if (directReg) {
return new ClaudeProvider(); return directReg.factory();
// Future providers:
// case "cursor":
// return new CursorProvider();
// case "opencode":
// return new OpenCodeProvider();
default:
return null;
} }
// Check aliases
for (const [, reg] of providerRegistry.entries()) {
if (reg.aliases?.includes(lowerName)) {
return reg.factory();
}
}
return null;
} }
/** /**
* Get all available models from all providers * Get all available models from all providers
*/ */
static getAllAvailableModels() { static getAllAvailableModels(): ModelDefinition[] {
const providers = this.getAllProviders(); const providers = this.getAllProviders();
const allModels = []; return providers.flatMap((p) => p.getAvailableModels());
}
for (const provider of providers) { /**
const models = provider.getAvailableModels(); * Get list of registered provider names
allModels.push(...models); */
static getRegisteredProviderNames(): string[] {
return Array.from(providerRegistry.keys());
}
/**
* Check if a specific model supports vision/image input
*
* @param modelId Model identifier
* @returns Whether the model supports vision (defaults to true if model not found)
*/
static modelSupportsVision(modelId: string): boolean {
const provider = this.getProviderForModel(modelId);
const models = provider.getAvailableModels();
// Find the model in the available models list
for (const model of models) {
if (
model.id === modelId ||
model.modelString === modelId ||
model.id.endsWith(`-${modelId}`) ||
model.modelString.endsWith(`-${modelId}`) ||
model.modelString === modelId.replace(/^(claude|cursor|codex)-/, '') ||
model.modelString === modelId.replace(/-(claude|cursor|codex)$/, '')
) {
return model.supportsVision ?? true;
}
} }
return allModels; // Also try exact match with model string from provider's model map
for (const model of models) {
if (model.modelString === modelId || model.id === modelId) {
return model.supportsVision ?? true;
}
}
// Default to true (Claude SDK supports vision by default)
return true;
} }
} }
// =============================================================================
// Provider Registrations
// =============================================================================
// Import providers for registration side-effects
import { ClaudeProvider } from './claude-provider.js';
import { CursorProvider } from './cursor-provider.js';
import { CodexProvider } from './codex-provider.js';
import { OpencodeProvider } from './opencode-provider.js';
// Register Claude provider
registerProvider('claude', {
factory: () => new ClaudeProvider(),
aliases: ['anthropic'],
canHandleModel: (model: string) => {
return (
model.startsWith('claude-') || ['opus', 'sonnet', 'haiku'].some((n) => model.includes(n))
);
},
priority: 0, // Default priority
});
// Register Cursor provider
registerProvider('cursor', {
factory: () => new CursorProvider(),
canHandleModel: (model: string) => isCursorModel(model),
priority: 10, // Higher priority - check Cursor models first
});
// Register Codex provider
registerProvider('codex', {
factory: () => new CodexProvider(),
aliases: ['openai'],
canHandleModel: (model: string) => isCodexModel(model),
priority: 5, // Medium priority - check after Cursor but before Claude
});
// Register OpenCode provider
registerProvider('opencode', {
factory: () => new OpencodeProvider(),
canHandleModel: (model: string) => isOpencodeModel(model),
priority: 3, // Between codex (5) and claude (0)
});

View File

@@ -0,0 +1,254 @@
/**
* Simple Query Service - Simplified interface for basic AI queries
*
* Use this for routes that need simple text responses without
* complex event handling. This service abstracts away the provider
* selection and streaming details, providing a clean interface
* for common query patterns.
*
* Benefits:
* - No direct SDK imports needed in route files
* - Consistent provider routing based on model
* - Automatic text extraction from streaming responses
* - Structured output support for JSON schema responses
* - Eliminates duplicate extractTextFromStream() functions
*/
import { ProviderFactory } from './provider-factory.js';
import type {
ProviderMessage,
ContentBlock,
ThinkingLevel,
ReasoningEffort,
} from '@automaker/types';
import { stripProviderPrefix } from '@automaker/types';
/**
* Options for simple query execution
*/
export interface SimpleQueryOptions {
/** The prompt to send to the AI (can be text or multi-part content) */
prompt: string | Array<{ type: string; text?: string; source?: object }>;
/** Model to use (with or without provider prefix) */
model?: string;
/** Working directory for the query */
cwd: string;
/** System prompt (combined with user prompt for some providers) */
systemPrompt?: string;
/** Maximum turns for agentic operations (default: 1) */
maxTurns?: number;
/** Tools to allow (default: [] for simple queries) */
allowedTools?: string[];
/** Abort controller for cancellation */
abortController?: AbortController;
/** Structured output format for JSON responses */
outputFormat?: {
type: 'json_schema';
schema: Record<string, unknown>;
};
/** Thinking level for Claude models */
thinkingLevel?: ThinkingLevel;
/** Reasoning effort for Codex/OpenAI models */
reasoningEffort?: ReasoningEffort;
/** If true, runs in read-only mode (no file writes) */
readOnly?: boolean;
/** Setting sources for CLAUDE.md loading */
settingSources?: Array<'user' | 'project' | 'local'>;
}
/**
* Result from a simple query
*/
export interface SimpleQueryResult {
/** The accumulated text response */
text: string;
/** Structured output if outputFormat was specified and provider supports it */
structured_output?: Record<string, unknown>;
}
/**
* Options for streaming query execution
*/
export interface StreamingQueryOptions extends SimpleQueryOptions {
/** Callback for each text chunk received */
onText?: (text: string) => void;
/** Callback for tool use events */
onToolUse?: (tool: string, input: unknown) => void;
/** Callback for thinking blocks (if available) */
onThinking?: (thinking: string) => void;
}
/**
* Default model to use when none specified
*/
const DEFAULT_MODEL = 'claude-sonnet-4-20250514';
/**
* Execute a simple query and return the text result
*
* Use this for simple, non-streaming queries where you just need
* the final text response. For more complex use cases with progress
* callbacks, use streamingQuery() instead.
*
* @example
* ```typescript
* const result = await simpleQuery({
* prompt: 'Generate a title for: user authentication',
* cwd: process.cwd(),
* systemPrompt: 'You are a title generator...',
* maxTurns: 1,
* allowedTools: [],
* });
* console.log(result.text); // "Add user authentication"
* ```
*/
export async function simpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult> {
const model = options.model || DEFAULT_MODEL;
const provider = ProviderFactory.getProviderForModel(model);
const bareModel = stripProviderPrefix(model);
let responseText = '';
let structuredOutput: Record<string, unknown> | undefined;
// Build provider options
const providerOptions = {
prompt: options.prompt,
model: bareModel,
originalModel: model,
cwd: options.cwd,
systemPrompt: options.systemPrompt,
maxTurns: options.maxTurns ?? 1,
allowedTools: options.allowedTools ?? [],
abortController: options.abortController,
outputFormat: options.outputFormat,
thinkingLevel: options.thinkingLevel,
reasoningEffort: options.reasoningEffort,
readOnly: options.readOnly,
settingSources: options.settingSources,
};
for await (const msg of provider.executeQuery(providerOptions)) {
// Handle error messages
if (msg.type === 'error') {
const errorMessage = msg.error || 'Provider returned an error';
throw new Error(errorMessage);
}
// Extract text from assistant messages
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
// Handle result messages
if (msg.type === 'result') {
if (msg.subtype === 'success') {
// Use result text if longer than accumulated text
if (msg.result && msg.result.length > responseText.length) {
responseText = msg.result;
}
// Capture structured output if present
if (msg.structured_output) {
structuredOutput = msg.structured_output;
}
} else if (msg.subtype === 'error_max_turns') {
// Max turns reached - return what we have
break;
} else if (msg.subtype === 'error_max_structured_output_retries') {
throw new Error('Could not produce valid structured output after retries');
}
}
}
return { text: responseText, structured_output: structuredOutput };
}
/**
* Execute a streaming query with event callbacks
*
* Use this for queries where you need real-time progress updates,
* such as when displaying streaming output to a user.
*
* @example
* ```typescript
* const result = await streamingQuery({
* prompt: 'Analyze this project and suggest improvements',
* cwd: '/path/to/project',
* maxTurns: 250,
* allowedTools: ['Read', 'Glob', 'Grep'],
* onText: (text) => emitProgress(text),
* onToolUse: (tool, input) => emitToolUse(tool, input),
* });
* ```
*/
export async function streamingQuery(options: StreamingQueryOptions): Promise<SimpleQueryResult> {
const model = options.model || DEFAULT_MODEL;
const provider = ProviderFactory.getProviderForModel(model);
const bareModel = stripProviderPrefix(model);
let responseText = '';
let structuredOutput: Record<string, unknown> | undefined;
// Build provider options
const providerOptions = {
prompt: options.prompt,
model: bareModel,
originalModel: model,
cwd: options.cwd,
systemPrompt: options.systemPrompt,
maxTurns: options.maxTurns ?? 250,
allowedTools: options.allowedTools ?? ['Read', 'Glob', 'Grep'],
abortController: options.abortController,
outputFormat: options.outputFormat,
thinkingLevel: options.thinkingLevel,
reasoningEffort: options.reasoningEffort,
readOnly: options.readOnly,
settingSources: options.settingSources,
};
for await (const msg of provider.executeQuery(providerOptions)) {
// Handle error messages
if (msg.type === 'error') {
const errorMessage = msg.error || 'Provider returned an error';
throw new Error(errorMessage);
}
// Extract content from assistant messages
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
options.onText?.(block.text);
} else if (block.type === 'tool_use' && block.name) {
options.onToolUse?.(block.name, block.input);
} else if (block.type === 'thinking' && block.thinking) {
options.onThinking?.(block.thinking);
}
}
}
// Handle result messages
if (msg.type === 'result') {
if (msg.subtype === 'success') {
// Use result text if longer than accumulated text
if (msg.result && msg.result.length > responseText.length) {
responseText = msg.result;
}
// Capture structured output if present
if (msg.structured_output) {
structuredOutput = msg.structured_output;
}
} else if (msg.subtype === 'error_max_turns') {
// Max turns reached - return what we have
break;
} else if (msg.subtype === 'error_max_structured_output_retries') {
throw new Error('Could not produce valid structured output after retries');
}
}
}
return { text: responseText, structured_output: structuredOutput };
}

View File

@@ -2,6 +2,7 @@
* Shared types for AI model providers * Shared types for AI model providers
* *
* Re-exports types from @automaker/types for consistency across the codebase. * Re-exports types from @automaker/types for consistency across the codebase.
* All provider types are defined in @automaker/types to avoid duplication.
*/ */
// Re-export all provider types from @automaker/types // Re-export all provider types from @automaker/types
@@ -13,72 +14,9 @@ export type {
McpStdioServerConfig, McpStdioServerConfig,
McpSSEServerConfig, McpSSEServerConfig,
McpHttpServerConfig, McpHttpServerConfig,
ContentBlock,
ProviderMessage,
InstallationStatus,
ValidationResult,
ModelDefinition,
} from '@automaker/types'; } from '@automaker/types';
/**
* Content block in a provider message (matches Claude SDK format)
*/
export interface ContentBlock {
type: 'text' | 'tool_use' | 'thinking' | 'tool_result';
text?: string;
thinking?: string;
name?: string;
input?: unknown;
tool_use_id?: string;
content?: string;
}
/**
* Message returned by a provider (matches Claude SDK streaming format)
*/
export interface ProviderMessage {
type: 'assistant' | 'user' | 'error' | 'result';
subtype?: 'success' | 'error';
session_id?: string;
message?: {
role: 'user' | 'assistant';
content: ContentBlock[];
};
result?: string;
error?: string;
parent_tool_use_id?: string | null;
}
/**
* Installation status for a provider
*/
export interface InstallationStatus {
installed: boolean;
path?: string;
version?: string;
method?: 'cli' | 'npm' | 'brew' | 'sdk';
hasApiKey?: boolean;
authenticated?: boolean;
error?: string;
}
/**
* Validation result
*/
export interface ValidationResult {
valid: boolean;
errors: string[];
warnings?: string[];
}
/**
* Model definition
*/
export interface ModelDefinition {
id: string;
name: string;
modelString: string;
provider: string;
description: string;
contextWindow?: number;
maxOutputTokens?: number;
supportsVision?: boolean;
supportsTools?: boolean;
tier?: 'basic' | 'standard' | 'premium';
default?: boolean;
}

View File

@@ -3,17 +3,19 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { ThinkingLevel } from '@automaker/types';
import { AgentService } from '../../../services/agent-service.js'; import { AgentService } from '../../../services/agent-service.js';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createQueueAddHandler(agentService: AgentService) { export function createQueueAddHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
try { try {
const { sessionId, message, imagePaths, model } = req.body as { const { sessionId, message, imagePaths, model, thinkingLevel } = req.body as {
sessionId: string; sessionId: string;
message: string; message: string;
imagePaths?: string[]; imagePaths?: string[];
model?: string; model?: string;
thinkingLevel?: ThinkingLevel;
}; };
if (!sessionId || !message) { if (!sessionId || !message) {
@@ -24,7 +26,12 @@ export function createQueueAddHandler(agentService: AgentService) {
return; return;
} }
const result = await agentService.addToQueue(sessionId, { message, imagePaths, model }); const result = await agentService.addToQueue(sessionId, {
message,
imagePaths,
model,
thinkingLevel,
});
res.json(result); res.json(result);
} catch (error) { } catch (error) {
logError(error, 'Add to queue failed'); logError(error, 'Add to queue failed');

View File

@@ -3,6 +3,7 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { ThinkingLevel } from '@automaker/types';
import { AgentService } from '../../../services/agent-service.js'; import { AgentService } from '../../../services/agent-service.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
@@ -11,24 +12,27 @@ const logger = createLogger('Agent');
export function createSendHandler(agentService: AgentService) { export function createSendHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
try { try {
const { sessionId, message, workingDirectory, imagePaths, model } = req.body as { const { sessionId, message, workingDirectory, imagePaths, model, thinkingLevel } =
sessionId: string; req.body as {
message: string; sessionId: string;
workingDirectory?: string; message: string;
imagePaths?: string[]; workingDirectory?: string;
model?: string; imagePaths?: string[];
}; model?: string;
thinkingLevel?: ThinkingLevel;
};
console.log('[Send Handler] Received request:', { logger.debug('Received request:', {
sessionId, sessionId,
messageLength: message?.length, messageLength: message?.length,
workingDirectory, workingDirectory,
imageCount: imagePaths?.length || 0, imageCount: imagePaths?.length || 0,
model, model,
thinkingLevel,
}); });
if (!sessionId || !message) { if (!sessionId || !message) {
console.log('[Send Handler] ERROR: Validation failed - missing sessionId or message'); logger.warn('Validation failed - missing sessionId or message');
res.status(400).json({ res.status(400).json({
success: false, success: false,
error: 'sessionId and message are required', error: 'sessionId and message are required',
@@ -36,7 +40,7 @@ export function createSendHandler(agentService: AgentService) {
return; return;
} }
console.log('[Send Handler] Validation passed, calling agentService.sendMessage()'); logger.debug('Validation passed, calling agentService.sendMessage()');
// Start the message processing (don't await - it streams via WebSocket) // Start the message processing (don't await - it streams via WebSocket)
agentService agentService
@@ -46,18 +50,19 @@ export function createSendHandler(agentService: AgentService) {
workingDirectory, workingDirectory,
imagePaths, imagePaths,
model, model,
thinkingLevel,
}) })
.catch((error) => { .catch((error) => {
console.error('[Send Handler] ERROR: Background error in sendMessage():', error); logger.error('Background error in sendMessage():', error);
logError(error, 'Send message failed (background)'); logError(error, 'Send message failed (background)');
}); });
console.log('[Send Handler] Returning immediate response to client'); logger.debug('Returning immediate response to client');
// Return immediately - responses come via WebSocket // Return immediately - responses come via WebSocket
res.json({ success: true, message: 'Message sent' }); res.json({ success: true, message: 'Message sent' });
} catch (error) { } catch (error) {
console.error('[Send Handler] ERROR: Synchronous error:', error); logger.error('Synchronous error:', error);
logError(error, 'Send message failed'); logError(error, 'Send message failed');
res.status(500).json({ success: false, error: getErrorMessage(error) }); res.status(500).json({ success: false, error: getErrorMessage(error) });
} }

View File

@@ -6,26 +6,103 @@ import { createLogger } from '@automaker/utils';
const logger = createLogger('SpecRegeneration'); const logger = createLogger('SpecRegeneration');
// Shared state for tracking generation status - private // Types for running generation
let isRunning = false; export type GenerationType = 'spec_regeneration' | 'feature_generation' | 'sync';
let currentAbortController: AbortController | null = null;
interface RunningGeneration {
isRunning: boolean;
type: GenerationType;
startedAt: string;
}
// Shared state for tracking generation status - scoped by project path
const runningProjects = new Map<string, RunningGeneration>();
const abortControllers = new Map<string, AbortController>();
/** /**
* Get the current running state * Get the running state for a specific project
*/ */
export function getSpecRegenerationStatus(): { export function getSpecRegenerationStatus(projectPath?: string): {
isRunning: boolean; isRunning: boolean;
currentAbortController: AbortController | null; currentAbortController: AbortController | null;
projectPath?: string;
type?: GenerationType;
startedAt?: string;
} { } {
return { isRunning, currentAbortController }; if (projectPath) {
const generation = runningProjects.get(projectPath);
return {
isRunning: generation?.isRunning || false,
currentAbortController: abortControllers.get(projectPath) || null,
projectPath,
type: generation?.type,
startedAt: generation?.startedAt,
};
}
// Fallback: check if any project is running (for backward compatibility)
const isAnyRunning = Array.from(runningProjects.values()).some((g) => g.isRunning);
return { isRunning: isAnyRunning, currentAbortController: null };
} }
/** /**
* Set the running state and abort controller * Get the project path that is currently running (if any)
*/ */
export function setRunningState(running: boolean, controller: AbortController | null = null): void { export function getRunningProjectPath(): string | null {
isRunning = running; for (const [path, running] of runningProjects.entries()) {
currentAbortController = controller; if (running) return path;
}
return null;
}
/**
* Set the running state and abort controller for a specific project
*/
export function setRunningState(
projectPath: string,
running: boolean,
controller: AbortController | null = null,
type: GenerationType = 'spec_regeneration'
): void {
if (running) {
runningProjects.set(projectPath, {
isRunning: true,
type,
startedAt: new Date().toISOString(),
});
if (controller) {
abortControllers.set(projectPath, controller);
}
} else {
runningProjects.delete(projectPath);
abortControllers.delete(projectPath);
}
}
/**
* Get all running spec/feature generations for the running agents view
*/
export function getAllRunningGenerations(): Array<{
projectPath: string;
type: GenerationType;
startedAt: string;
}> {
const results: Array<{
projectPath: string;
type: GenerationType;
startedAt: string;
}> = [];
for (const [projectPath, generation] of runningProjects.entries()) {
if (generation.isRunning) {
results.push({
projectPath,
type: generation.type,
startedAt: generation.startedAt,
});
}
}
return results;
} }
/** /**

View File

@@ -1,17 +1,21 @@
/** /**
* Generate features from existing app_spec.txt * Generate features from existing app_spec.txt
*
* Model is configurable via phaseModels.featureGenerationModel in settings
* (defaults to Sonnet for balanced speed and quality).
*/ */
import { query } from '@anthropic-ai/claude-agent-sdk';
import * as secureFs from '../../lib/secure-fs.js'; import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js'; import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { createFeatureGenerationOptions } from '../../lib/sdk-options.js'; import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { logAuthStatus } from './common.js'; import { resolvePhaseModel } from '@automaker/model-resolver';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { parseAndCreateFeatures } from './parse-and-create-features.js'; import { parseAndCreateFeatures } from './parse-and-create-features.js';
import { getAppSpecPath } from '@automaker/platform'; import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js'; import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js'; import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
import { FeatureLoader } from '../../services/feature-loader.js';
const logger = createLogger('SpecRegeneration'); const logger = createLogger('SpecRegeneration');
@@ -50,38 +54,48 @@ export async function generateFeaturesFromSpec(
return; return;
} }
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[FeatureGeneration]');
// Load existing features to prevent duplicates
const featureLoader = new FeatureLoader();
const existingFeatures = await featureLoader.getAll(projectPath);
logger.info(`Found ${existingFeatures.length} existing features to exclude from generation`);
// Build existing features context for the prompt
let existingFeaturesContext = '';
if (existingFeatures.length > 0) {
const featuresList = existingFeatures
.map(
(f) =>
`- "${f.title}" (ID: ${f.id}): ${f.description?.substring(0, 100) || 'No description'}`
)
.join('\n');
existingFeaturesContext = `
## EXISTING FEATURES (DO NOT REGENERATE THESE)
The following ${existingFeatures.length} features already exist in the project. You MUST NOT generate features that duplicate or overlap with these:
${featuresList}
CRITICAL INSTRUCTIONS:
- DO NOT generate any features with the same or similar titles as the existing features listed above
- DO NOT generate features that cover the same functionality as existing features
- ONLY generate NEW features that are not yet in the system
- If a feature from the roadmap already exists, skip it entirely
- Generate unique feature IDs that do not conflict with existing IDs: ${existingFeatures.map((f) => f.id).join(', ')}
`;
}
const prompt = `Based on this project specification: const prompt = `Based on this project specification:
${spec} ${spec}
${existingFeaturesContext}
${prompts.appSpec.generateFeaturesFromSpecPrompt}
Generate a prioritized list of implementable features. For each feature provide: Generate ${featureCount} NEW features that build on each other logically. Remember: ONLY generate features that DO NOT already exist.`;
1. **id**: A unique lowercase-hyphenated identifier
2. **category**: Functional category (e.g., "Core", "UI", "API", "Authentication", "Database")
3. **title**: Short descriptive title
4. **description**: What this feature does (2-3 sentences)
5. **priority**: 1 (high), 2 (medium), or 3 (low)
6. **complexity**: "simple", "moderate", or "complex"
7. **dependencies**: Array of feature IDs this depends on (can be empty)
Format as JSON:
{
"features": [
{
"id": "feature-id",
"category": "Feature Category",
"title": "Feature Title",
"description": "What it does",
"priority": 1,
"complexity": "moderate",
"dependencies": []
}
]
}
Generate ${featureCount} features that build on each other logically.
IMPORTANT: Do not ask for clarification. The specification is provided above. Generate the JSON immediately.`;
logger.info('========== PROMPT BEING SENT =========='); logger.info('========== PROMPT BEING SENT ==========');
logger.info(`Prompt length: ${prompt.length} chars`); logger.info(`Prompt length: ${prompt.length} chars`);
@@ -101,67 +115,38 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
'[FeatureGeneration]' '[FeatureGeneration]'
); );
const options = createFeatureGenerationOptions({ // Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.featureGenerationModel || DEFAULT_PHASE_MODELS.featureGenerationModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info('Using model:', model);
// Use streamingQuery with event callbacks
const result = await streamingQuery({
prompt,
model,
cwd: projectPath, cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController, abortController,
autoLoadClaudeMd, thinkingLevel,
readOnly: true, // Feature generation only reads code, doesn't write
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
onText: (text) => {
logger.debug(`Feature text block received (${text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: text,
projectPath: projectPath,
});
},
}); });
logger.debug('SDK Options:', JSON.stringify(options, null, 2)); const responseText = result.text;
logger.info('Calling Claude Agent SDK query() for features...');
logAuthStatus('Right before SDK query() for features'); logger.info(`Feature stream complete.`);
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
let responseText = '';
let messageCount = 0;
logger.debug('Starting to iterate over feature stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.debug(
`Feature stream message #${messageCount}:`,
JSON.stringify({ type: msg.type, subtype: (msg as any).subtype }, null, 2)
);
if (msg.type === 'assistant' && msg.message.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.debug(`Feature text block received (${block.text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.debug('Received success result for features');
responseText = (msg as any).result || responseText;
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from feature stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
}
}
} catch (streamError) {
logger.error('❌ Error while iterating feature stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
logger.info(`Feature stream complete. Total messages: ${messageCount}`);
logger.info(`Feature response length: ${responseText.length} chars`); logger.info(`Feature response length: ${responseText.length} chars`);
logger.info('========== FULL RESPONSE TEXT =========='); logger.info('========== FULL RESPONSE TEXT ==========');
logger.info(responseText); logger.info(responseText);

View File

@@ -1,24 +1,22 @@
/** /**
* Generate app_spec.txt from project overview * Generate app_spec.txt from project overview
*
* Model is configurable via phaseModels.specGenerationModel in settings
* (defaults to Opus for high-quality specification generation).
*/ */
import { query } from '@anthropic-ai/claude-agent-sdk';
import path from 'path';
import * as secureFs from '../../lib/secure-fs.js'; import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js'; import type { EventEmitter } from '../../lib/events.js';
import { import { specOutputSchema, specToXml, type SpecOutput } from '../../lib/app-spec-format.js';
specOutputSchema,
specToXml,
getStructuredSpecPromptInstruction,
type SpecOutput,
} from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { createSpecGenerationOptions } from '../../lib/sdk-options.js'; import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { logAuthStatus } from './common.js'; import { resolvePhaseModel } from '@automaker/model-resolver';
import { extractJson } from '../../lib/json-extractor.js';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { generateFeaturesFromSpec } from './generate-features-from-spec.js'; import { generateFeaturesFromSpec } from './generate-features-from-spec.js';
import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform'; import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js'; import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js'; import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
const logger = createLogger('SpecRegeneration'); const logger = createLogger('SpecRegeneration');
@@ -40,6 +38,9 @@ export async function generateSpec(
logger.info('analyzeProject:', analyzeProject); logger.info('analyzeProject:', analyzeProject);
logger.info('maxFeatures:', maxFeatures); logger.info('maxFeatures:', maxFeatures);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[SpecRegeneration]');
// Build the prompt based on whether we should analyze the project // Build the prompt based on whether we should analyze the project
let analysisInstructions = ''; let analysisInstructions = '';
let techStackDefaults = ''; let techStackDefaults = '';
@@ -63,9 +64,7 @@ export async function generateSpec(
Use these technologies as the foundation for the specification.`; Use these technologies as the foundation for the specification.`;
} }
const prompt = `You are helping to define a software project specification. const prompt = `${prompts.appSpec.generateSpecSystemPrompt}
IMPORTANT: Never ask for clarification or additional information. Use the information provided and make reasonable assumptions to create the best possible specification. If details are missing, infer them based on common patterns and best practices.
Project Overview: Project Overview:
${projectOverview} ${projectOverview}
@@ -74,7 +73,7 @@ ${techStackDefaults}
${analysisInstructions} ${analysisInstructions}
${getStructuredSpecPromptInstruction()}`; ${prompts.appSpec.structuredSpecInstructions}`;
logger.info('========== PROMPT BEING SENT =========='); logger.info('========== PROMPT BEING SENT ==========');
logger.info(`Prompt length: ${prompt.length} chars`); logger.info(`Prompt length: ${prompt.length} chars`);
@@ -93,105 +92,84 @@ ${getStructuredSpecPromptInstruction()}`;
'[SpecRegeneration]' '[SpecRegeneration]'
); );
const options = createSpecGenerationOptions({ // Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.specGenerationModel || DEFAULT_PHASE_MODELS.specGenerationModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info('Using model:', model);
let responseText = '';
let structuredOutput: SpecOutput | null = null;
// Determine if we should use structured output (Claude supports it, Cursor doesn't)
const useStructuredOutput = !isCursorModel(model);
// Build the final prompt - for Cursor, include JSON schema instructions
let finalPrompt = prompt;
if (!useStructuredOutput) {
finalPrompt = `${prompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. DO NOT create any files like "project_specification.json".
2. After analyzing the project, respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
3. The JSON must match this exact schema:
${JSON.stringify(specOutputSchema, null, 2)}
Your entire response should be valid JSON starting with { and ending with }. No text before or after.`;
}
// Use streamingQuery with event callbacks
const result = await streamingQuery({
prompt: finalPrompt,
model,
cwd: projectPath, cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController, abortController,
autoLoadClaudeMd, thinkingLevel,
outputFormat: { readOnly: true, // Spec generation only reads code, we write the spec ourselves
type: 'json_schema', settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
schema: specOutputSchema, outputFormat: useStructuredOutput
? {
type: 'json_schema',
schema: specOutputSchema,
}
: undefined,
onText: (text) => {
responseText += text;
logger.info(
`Text block received (${text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: text,
projectPath: projectPath,
});
},
onToolUse: (tool, input) => {
logger.info('Tool use:', tool);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool,
input,
});
}, },
}); });
logger.debug('SDK Options:', JSON.stringify(options, null, 2)); // Get structured output if available
logger.info('Calling Claude Agent SDK query()...'); if (result.structured_output) {
structuredOutput = result.structured_output as unknown as SpecOutput;
// Log auth status right before the SDK call logger.info('✅ Received structured output');
logAuthStatus('Right before SDK query()'); logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else if (!useStructuredOutput && responseText) {
let stream; // For non-Claude providers, parse JSON from response text
try { structuredOutput = extractJson<SpecOutput>(responseText, { logger });
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
} }
let responseText = ''; logger.info(`Stream iteration complete.`);
let messageCount = 0;
let structuredOutput: SpecOutput | null = null;
logger.info('Starting to iterate over stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.info(
`Stream message #${messageCount}: type=${msg.type}, subtype=${(msg as any).subtype}`
);
if (msg.type === 'assistant') {
const msgAny = msg as any;
if (msgAny.message?.content) {
for (const block of msgAny.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.info('Received success result');
// Check for structured output - this is the reliable way to get spec data
const resultMsg = msg as any;
if (resultMsg.structured_output) {
structuredOutput = resultMsg.structured_output as SpecOutput;
logger.info('✅ Received structured output');
logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else {
logger.warn('⚠️ No structured output in result, will fall back to text parsing');
}
} else if (msg.type === 'result') {
// Handle error result types
const subtype = (msg as any).subtype;
logger.info(`Result message: subtype=${subtype}`);
if (subtype === 'error_max_turns') {
logger.error('❌ Hit max turns limit!');
} else if (subtype === 'error_max_structured_output_retries') {
logger.error('❌ Failed to produce valid structured output after retries');
throw new Error('Could not produce valid spec output');
}
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
} else if (msg.type === 'user') {
// Log user messages (tool results)
logger.info(`User message (tool result): ${JSON.stringify(msg).substring(0, 500)}`);
}
}
} catch (streamError) {
logger.error('❌ Error while iterating stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
logger.info(`Stream iteration complete. Total messages: ${messageCount}`);
logger.info(`Response text length: ${responseText.length} chars`); logger.info(`Response text length: ${responseText.length} chars`);
// Determine XML content to save // Determine XML content to save
@@ -223,19 +201,33 @@ ${getStructuredSpecPromptInstruction()}`;
xmlContent = responseText.substring(xmlStart, xmlEnd + '</project_specification>'.length); xmlContent = responseText.substring(xmlStart, xmlEnd + '</project_specification>'.length);
logger.info(`Extracted XML content: ${xmlContent.length} chars (from position ${xmlStart})`); logger.info(`Extracted XML content: ${xmlContent.length} chars (from position ${xmlStart})`);
} else { } else {
// No valid XML structure found in the response text // No XML found, try JSON extraction
// This happens when structured output was expected but not received, and the agent logger.warn('⚠️ No XML tags found, attempting JSON extraction...');
// output conversational text instead of XML (e.g., "The project directory appears to be empty...") const extractedJson = extractJson<SpecOutput>(responseText, { logger });
// We should NOT save this conversational text as it's not a valid spec
logger.error('❌ Response does not contain valid <project_specification> XML structure'); if (
logger.error( extractedJson &&
'This typically happens when structured output failed and the agent produced conversational text instead of XML' typeof extractedJson.project_name === 'string' &&
); typeof extractedJson.overview === 'string' &&
throw new Error( Array.isArray(extractedJson.technology_stack) &&
'Failed to generate spec: No valid XML structure found in response. ' + Array.isArray(extractedJson.core_capabilities) &&
'The response contained conversational text but no <project_specification> tags. ' + Array.isArray(extractedJson.implemented_features)
'Please try again.' ) {
); logger.info('✅ Successfully extracted JSON from response text');
xmlContent = specToXml(extractedJson);
logger.info(`✅ Converted extracted JSON to XML: ${xmlContent.length} chars`);
} else {
// Neither XML nor valid JSON found
logger.error('❌ Response does not contain valid XML or JSON structure');
logger.error(
'This typically happens when structured output failed and the agent produced conversational text instead of structured output'
);
throw new Error(
'Failed to generate spec: No valid XML or JSON structure found in response. ' +
'The response contained conversational text but no <project_specification> tags or valid JSON. ' +
'Please try again.'
);
}
} }
} }

View File

@@ -7,6 +7,7 @@ import type { EventEmitter } from '../../lib/events.js';
import { createCreateHandler } from './routes/create.js'; import { createCreateHandler } from './routes/create.js';
import { createGenerateHandler } from './routes/generate.js'; import { createGenerateHandler } from './routes/generate.js';
import { createGenerateFeaturesHandler } from './routes/generate-features.js'; import { createGenerateFeaturesHandler } from './routes/generate-features.js';
import { createSyncHandler } from './routes/sync.js';
import { createStopHandler } from './routes/stop.js'; import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js'; import { createStatusHandler } from './routes/status.js';
import type { SettingsService } from '../../services/settings-service.js'; import type { SettingsService } from '../../services/settings-service.js';
@@ -20,6 +21,7 @@ export function createSpecRegenerationRoutes(
router.post('/create', createCreateHandler(events)); router.post('/create', createCreateHandler(events));
router.post('/generate', createGenerateHandler(events, settingsService)); router.post('/generate', createGenerateHandler(events, settingsService));
router.post('/generate-features', createGenerateFeaturesHandler(events, settingsService)); router.post('/generate-features', createGenerateFeaturesHandler(events, settingsService));
router.post('/sync', createSyncHandler(events, settingsService));
router.post('/stop', createStopHandler()); router.post('/stop', createStopHandler());
router.get('/status', createStatusHandler()); router.get('/status', createStatusHandler());

View File

@@ -5,8 +5,10 @@
import path from 'path';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
- import { createLogger } from '@automaker/utils';
+ import { createLogger, atomicWriteJson, DEFAULT_BACKUP_COUNT } from '@automaker/utils';
import { getFeaturesDir } from '@automaker/platform';
+ import { extractJsonWithArray } from '../../lib/json-extractor.js';
+ import { getNotificationService } from '../../services/notification-service.js';
const logger = createLogger('SpecRegeneration');
@@ -22,23 +24,30 @@ export async function parseAndCreateFeatures(
logger.info('========== END CONTENT ==========');
try {
- // Extract JSON from response
- logger.info('Extracting JSON from response...');
- logger.info(`Looking for pattern: /{[\\s\\S]*"features"[\\s\\S]*}/`);
- const jsonMatch = content.match(/\{[\s\S]*"features"[\s\S]*\}/);
- if (!jsonMatch) {
- logger.error('❌ No valid JSON found in response');
+ // Extract JSON from response using shared utility
+ logger.info('Extracting JSON from response using extractJsonWithArray...');
+ interface FeaturesResponse {
+ features: Array<{
+ id: string;
+ category?: string;
+ title: string;
+ description: string;
+ priority?: number;
+ complexity?: string;
+ dependencies?: string[];
+ }>;
+ }
+ const parsed = extractJsonWithArray<FeaturesResponse>(content, 'features', { logger });
+ if (!parsed || !parsed.features) {
+ logger.error('❌ No valid JSON with "features" array found in response');
logger.error('Full content received:');
logger.error(content);
throw new Error('No valid JSON found in response');
}
- logger.info(`JSON match found (${jsonMatch[0].length} chars)`);
- logger.info('========== MATCHED JSON ==========');
- logger.info(jsonMatch[0]);
- logger.info('========== END MATCHED JSON ==========');
- const parsed = JSON.parse(jsonMatch[0]);
logger.info(`Parsed ${parsed.features?.length || 0} features`);
logger.info('Parsed features:', JSON.stringify(parsed.features, null, 2));
@@ -65,10 +74,10 @@ export async function parseAndCreateFeatures(
updatedAt: new Date().toISOString(),
};
- await secureFs.writeFile(
- path.join(featureDir, 'feature.json'),
- JSON.stringify(featureData, null, 2)
- );
+ // Use atomic write with backup support for crash protection
+ await atomicWriteJson(path.join(featureDir, 'feature.json'), featureData, {
+ backupCount: DEFAULT_BACKUP_COUNT,
+ });
createdFeatures.push({ id: feature.id, title: feature.title });
}
@@ -80,6 +89,15 @@ export async function parseAndCreateFeatures(
message: `Spec regeneration complete! Created ${createdFeatures.length} features.`,
projectPath: projectPath,
});
+ // Create notification for spec generation completion
+ const notificationService = getNotificationService();
+ await notificationService.createNotification({
+ type: 'spec_regeneration_complete',
+ title: 'Spec Generation Complete',
+ message: `Created ${createdFeatures.length} features from the project specification.`,
+ projectPath: projectPath,
+ });
} catch (error) {
logger.error('❌ parseAndCreateFeatures() failed:');
logger.error('Error:', error);

View File

@@ -47,17 +47,17 @@ export function createCreateHandler(events: EventEmitter) {
return;
}
- const { isRunning } = getSpecRegenerationStatus();
+ const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
- logger.warn('Generation already running, rejecting request');
- res.json({ success: false, error: 'Spec generation already running' });
+ logger.warn('Generation already running for project:', projectPath);
+ res.json({ success: false, error: 'Spec generation already running for this project' });
return;
}
logAuthStatus('Before starting generation');
const abortController = new AbortController();
- setRunningState(true, abortController);
+ setRunningState(projectPath, true, abortController);
logger.info('Starting background generation task...');
// Start generation in background
@@ -80,7 +80,7 @@ export function createCreateHandler(events: EventEmitter) {
})
.finally(() => {
logger.info('Generation task finished (success or error)');
- setRunningState(false, null);
+ setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -40,17 +40,17 @@ export function createGenerateFeaturesHandler(
return;
}
- const { isRunning } = getSpecRegenerationStatus();
+ const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
- logger.warn('Generation already running, rejecting request');
- res.json({ success: false, error: 'Generation already running' });
+ logger.warn('Generation already running for project:', projectPath);
+ res.json({ success: false, error: 'Generation already running for this project' });
return;
}
logAuthStatus('Before starting feature generation');
const abortController = new AbortController();
- setRunningState(true, abortController);
+ setRunningState(projectPath, true, abortController, 'feature_generation');
logger.info('Starting background feature generation task...');
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures, settingsService)
@@ -63,7 +63,7 @@ export function createGenerateFeaturesHandler(
})
.finally(() => {
logger.info('Feature generation task finished (success or error)');
- setRunningState(false, null);
+ setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -48,17 +48,17 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
return;
}
- const { isRunning } = getSpecRegenerationStatus();
+ const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
- logger.warn('Generation already running, rejecting request');
- res.json({ success: false, error: 'Spec generation already running' });
+ logger.warn('Generation already running for project:', projectPath);
+ res.json({ success: false, error: 'Spec generation already running for this project' });
return;
}
logAuthStatus('Before starting generation');
const abortController = new AbortController();
- setRunningState(true, abortController);
+ setRunningState(projectPath, true, abortController);
logger.info('Starting background generation task...');
generateSpec(
@@ -81,7 +81,7 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
})
.finally(() => {
logger.info('Generation task finished (success or error)');
- setRunningState(false, null);
+ setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -6,10 +6,11 @@ import type { Request, Response } from 'express';
import { getSpecRegenerationStatus, getErrorMessage } from '../common.js';
export function createStatusHandler() {
- return async (_req: Request, res: Response): Promise<void> => {
+ return async (req: Request, res: Response): Promise<void> => {
try {
- const { isRunning } = getSpecRegenerationStatus();
- res.json({ success: true, isRunning });
+ const projectPath = req.query.projectPath as string | undefined;
+ const { isRunning } = getSpecRegenerationStatus(projectPath);
+ res.json({ success: true, isRunning, projectPath });
} catch (error) {
res.status(500).json({ success: false, error: getErrorMessage(error) });
}

View File

@@ -6,13 +6,16 @@ import type { Request, Response } from 'express';
import { getSpecRegenerationStatus, setRunningState, getErrorMessage } from '../common.js';
export function createStopHandler() {
- return async (_req: Request, res: Response): Promise<void> => {
+ return async (req: Request, res: Response): Promise<void> => {
try {
- const { currentAbortController } = getSpecRegenerationStatus();
+ const { projectPath } = req.body as { projectPath?: string };
+ const { currentAbortController } = getSpecRegenerationStatus(projectPath);
if (currentAbortController) {
currentAbortController.abort();
}
- setRunningState(false, null);
+ if (projectPath) {
+ setRunningState(projectPath, false, null);
+ }
res.json({ success: true });
} catch (error) {
res.status(500).json({ success: false, error: getErrorMessage(error) });

View File

@@ -0,0 +1,76 @@
/**
* POST /sync endpoint - Sync spec with codebase and features
*/
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
import { createLogger } from '@automaker/utils';
import {
getSpecRegenerationStatus,
setRunningState,
logAuthStatus,
logError,
getErrorMessage,
} from '../common.js';
import { syncSpec } from '../sync-spec.js';
import type { SettingsService } from '../../../services/settings-service.js';
const logger = createLogger('SpecSync');
export function createSyncHandler(events: EventEmitter, settingsService?: SettingsService) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /sync endpoint called ==========');
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
try {
const { projectPath } = req.body as {
projectPath: string;
};
logger.debug('projectPath:', projectPath);
if (!projectPath) {
logger.error('Missing projectPath parameter');
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation/sync already running for project:', projectPath);
res.json({ success: false, error: 'Operation already running for this project' });
return;
}
logAuthStatus('Before starting spec sync');
const abortController = new AbortController();
setRunningState(projectPath, true, abortController, 'sync');
logger.info('Starting background spec sync task...');
syncSpec(projectPath, events, abortController, settingsService)
.then((result) => {
logger.info('Spec sync completed successfully');
logger.info('Result:', JSON.stringify(result, null, 2));
})
.catch((error) => {
logError(error, 'Spec sync failed with error');
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_error',
error: getErrorMessage(error),
projectPath,
});
})
.finally(() => {
logger.info('Spec sync task finished (success or error)');
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (sync running in background)');
res.json({ success: true });
} catch (error) {
logError(error, 'Sync route handler failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
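
For reference, a minimal client-side sketch of how this endpoint might be called once the router is mounted (the `/api/spec-regeneration` prefix is an assumption; only the `/sync` path is defined here). The handler responds immediately while the sync runs in the background, so progress arrives via the `spec-regeneration:event` stream rather than this response.

// Hypothetical client call - adjust the mount prefix to wherever createSpecRegenerationRoutes is attached.
const response = await fetch('/api/spec-regeneration/sync', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ projectPath: '/path/to/project' }),
});
const payload = await response.json();
// Expected shapes based on the handler above:
// { success: true } while the sync runs in the background, or
// { success: false, error: 'Operation already running for this project' }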

View File

@@ -0,0 +1,307 @@
/**
* Sync spec with current codebase and feature state
*
* Updates the spec file based on:
* - Completed Automaker features
* - Code analysis for tech stack and implementations
* - Roadmap phase status updates
*/
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
import { FeatureLoader } from '../../services/feature-loader.js';
import {
extractImplementedFeatures,
extractTechnologyStack,
extractRoadmapPhases,
updateImplementedFeaturesSection,
updateTechnologyStack,
updateRoadmapPhaseStatus,
type ImplementedFeature,
type RoadmapPhase,
} from '../../lib/xml-extractor.js';
import { getNotificationService } from '../../services/notification-service.js';
const logger = createLogger('SpecSync');
/**
* Result of a sync operation
*/
export interface SyncResult {
techStackUpdates: {
added: string[];
removed: string[];
};
implementedFeaturesUpdates: {
addedFromFeatures: string[];
removed: string[];
};
roadmapUpdates: Array<{ phaseName: string; newStatus: string }>;
summary: string;
}
/**
* Sync the spec with current codebase and feature state
*/
export async function syncSpec(
projectPath: string,
events: EventEmitter,
abortController: AbortController,
settingsService?: SettingsService
): Promise<SyncResult> {
logger.info('========== syncSpec() started ==========');
logger.info('projectPath:', projectPath);
const result: SyncResult = {
techStackUpdates: { added: [], removed: [] },
implementedFeaturesUpdates: { addedFromFeatures: [], removed: [] },
roadmapUpdates: [],
summary: '',
};
// Read existing spec
const specPath = getAppSpecPath(projectPath);
let specContent: string;
try {
specContent = (await secureFs.readFile(specPath, 'utf-8')) as string;
logger.info(`Spec loaded successfully (${specContent.length} chars)`);
} catch (readError) {
logger.error('Failed to read spec file:', readError);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_error',
error: 'No project spec found. Create or regenerate spec first.',
projectPath,
});
throw new Error('No project spec found');
}
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: '[Phase: sync] Starting spec sync...\n',
projectPath,
});
// Extract current state from spec
const currentImplementedFeatures = extractImplementedFeatures(specContent);
const currentTechStack = extractTechnologyStack(specContent);
const currentRoadmapPhases = extractRoadmapPhases(specContent);
logger.info(`Current spec has ${currentImplementedFeatures.length} implemented features`);
logger.info(`Current spec has ${currentTechStack.length} technologies`);
logger.info(`Current spec has ${currentRoadmapPhases.length} roadmap phases`);
// Load completed Automaker features
const featureLoader = new FeatureLoader();
const allFeatures = await featureLoader.getAll(projectPath);
const completedFeatures = allFeatures.filter(
(f) => f.status === 'completed' || f.status === 'verified'
);
logger.info(`Found ${completedFeatures.length} completed/verified features in Automaker`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: `Found ${completedFeatures.length} completed features to sync...\n`,
projectPath,
});
// Build new implemented features list from completed Automaker features
const newImplementedFeatures: ImplementedFeature[] = [];
const existingNames = new Set(currentImplementedFeatures.map((f) => f.name.toLowerCase()));
for (const feature of completedFeatures) {
const name = feature.title || `Feature: ${feature.id}`;
if (!existingNames.has(name.toLowerCase())) {
newImplementedFeatures.push({
name,
description: feature.description || '',
});
result.implementedFeaturesUpdates.addedFromFeatures.push(name);
}
}
// Merge: keep existing + add new from completed features
const mergedFeatures = [...currentImplementedFeatures, ...newImplementedFeatures];
// Update spec with merged features
if (result.implementedFeaturesUpdates.addedFromFeatures.length > 0) {
specContent = updateImplementedFeaturesSection(specContent, mergedFeatures);
logger.info(
`Added ${result.implementedFeaturesUpdates.addedFromFeatures.length} features to spec`
);
}
// Analyze codebase for tech stack updates using AI
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: 'Analyzing codebase for technology updates...\n',
projectPath,
});
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[SpecSync]'
);
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.specGenerationModel || DEFAULT_PHASE_MODELS.specGenerationModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
// Use AI to analyze tech stack
const techAnalysisPrompt = `Analyze this project and return ONLY a JSON object with the current technology stack.
Current known technologies: ${currentTechStack.join(', ')}
Look at package.json, config files, and source code to identify:
- Frameworks (React, Vue, Express, etc.)
- Languages (TypeScript, JavaScript, Python, etc.)
- Build tools (Vite, Webpack, etc.)
- Databases (PostgreSQL, MongoDB, etc.)
- Key libraries and tools
Return ONLY this JSON format, no other text:
{
"technologies": ["Technology 1", "Technology 2", ...]
}`;
try {
const techResult = await streamingQuery({
prompt: techAnalysisPrompt,
model,
cwd: projectPath,
maxTurns: 10,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
thinkingLevel,
readOnly: true,
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
onText: (text) => {
logger.debug(`Tech analysis text: ${text.substring(0, 100)}`);
},
});
// Parse tech stack from response
const jsonMatch = techResult.text.match(/\{[\s\S]*"technologies"[\s\S]*\}/);
if (jsonMatch) {
const parsed = JSON.parse(jsonMatch[0]);
if (Array.isArray(parsed.technologies)) {
const newTechStack = parsed.technologies as string[];
// Calculate differences
const currentSet = new Set(currentTechStack.map((t) => t.toLowerCase()));
const newSet = new Set(newTechStack.map((t) => t.toLowerCase()));
for (const tech of newTechStack) {
if (!currentSet.has(tech.toLowerCase())) {
result.techStackUpdates.added.push(tech);
}
}
for (const tech of currentTechStack) {
if (!newSet.has(tech.toLowerCase())) {
result.techStackUpdates.removed.push(tech);
}
}
// Update spec with new tech stack if there are changes
if (
result.techStackUpdates.added.length > 0 ||
result.techStackUpdates.removed.length > 0
) {
specContent = updateTechnologyStack(specContent, newTechStack);
logger.info(
`Updated tech stack: +${result.techStackUpdates.added.length}, -${result.techStackUpdates.removed.length}`
);
}
}
}
} catch (error) {
logger.warn('Failed to analyze tech stack:', error);
// Continue with other sync operations
}
// Update roadmap phase statuses based on completed features
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: 'Checking roadmap phase statuses...\n',
projectPath,
});
// For each phase, check if all its features are completed
// This is a heuristic - we check if the phase name appears in any feature titles/descriptions
for (const phase of currentRoadmapPhases) {
if (phase.status === 'completed') continue; // Already completed
// Check if this phase should be marked as completed
// A phase is considered complete if we have completed features that mention it
const phaseNameLower = phase.name.toLowerCase();
const relatedCompletedFeatures = completedFeatures.filter(
(f) =>
f.title?.toLowerCase().includes(phaseNameLower) ||
f.description?.toLowerCase().includes(phaseNameLower) ||
f.category?.toLowerCase().includes(phaseNameLower)
);
// If we have related completed features and the phase is still pending/in_progress,
// update it to in_progress or completed based on feature count
if (relatedCompletedFeatures.length > 0 && phase.status !== 'completed') {
const newStatus = 'in_progress';
specContent = updateRoadmapPhaseStatus(specContent, phase.name, newStatus);
result.roadmapUpdates.push({ phaseName: phase.name, newStatus });
logger.info(`Updated phase "${phase.name}" to ${newStatus}`);
}
}
// Save updated spec
await secureFs.writeFile(specPath, specContent, 'utf-8');
logger.info('Spec saved successfully');
// Build summary
const summaryParts: string[] = [];
if (result.implementedFeaturesUpdates.addedFromFeatures.length > 0) {
summaryParts.push(
`Added ${result.implementedFeaturesUpdates.addedFromFeatures.length} implemented features`
);
}
if (result.techStackUpdates.added.length > 0) {
summaryParts.push(`Added ${result.techStackUpdates.added.length} technologies`);
}
if (result.techStackUpdates.removed.length > 0) {
summaryParts.push(`Removed ${result.techStackUpdates.removed.length} technologies`);
}
if (result.roadmapUpdates.length > 0) {
summaryParts.push(`Updated ${result.roadmapUpdates.length} roadmap phases`);
}
result.summary = summaryParts.length > 0 ? summaryParts.join(', ') : 'Spec is already up to date';
// Create notification
const notificationService = getNotificationService();
await notificationService.createNotification({
type: 'spec_regeneration_complete',
title: 'Spec Sync Complete',
message: result.summary,
projectPath,
});
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_complete',
message: `Spec sync complete! ${result.summary}`,
projectPath,
});
logger.info('========== syncSpec() completed ==========');
logger.info('Summary:', result.summary);
return result;
}
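
A minimal usage sketch of syncSpec as exported above (wiring of the events emitter and settings service is assumed to come from the surrounding server code):

// Sketch only - `events` and `settingsService` depend on the caller's setup.
const controller = new AbortController();
const syncResult = await syncSpec('/path/to/project', events, controller, settingsService);
console.log(syncResult.summary);
// e.g. "Added 2 implemented features, Updated 1 roadmap phases" or "Spec is already up to date"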

View File

@@ -229,12 +229,13 @@ export function createAuthRoutes(): Router {
await invalidateSession(sessionToken);
}
- // Clear the cookie
- res.clearCookie(cookieName, {
- httpOnly: true,
- secure: process.env.NODE_ENV === 'production',
- sameSite: 'strict',
- path: '/',
+ // Clear the cookie by setting it to empty with immediate expiration
+ // Using res.cookie() with maxAge: 0 is more reliable than clearCookie()
+ // in cross-origin development environments
+ res.cookie(cookieName, '', {
+ ...getSessionCookieOptions(),
+ maxAge: 0,
+ expires: new Date(0),
});
res.json({

View File

@@ -17,6 +17,7 @@ import { createAnalyzeProjectHandler } from './routes/analyze-project.js';
import { createFollowUpFeatureHandler } from './routes/follow-up-feature.js';
import { createCommitFeatureHandler } from './routes/commit-feature.js';
import { createApprovePlanHandler } from './routes/approve-plan.js';
+ import { createResumeInterruptedHandler } from './routes/resume-interrupted.js';
export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
const router = Router();
@@ -63,6 +64,11 @@ export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
validatePathParams('projectPath'),
createApprovePlanHandler(autoModeService)
);
+ router.post(
+ '/resume-interrupted',
+ validatePathParams('projectPath'),
+ createResumeInterruptedHandler(autoModeService)
+ );
return router;
}

View File

@@ -31,7 +31,9 @@ export function createFollowUpFeatureHandler(autoModeService: AutoModeService) {
// Start follow-up in background
// followUpFeature derives workDir from feature.branchName
autoModeService
- .followUpFeature(projectPath, featureId, prompt, imagePaths, useWorktrees ?? true)
+ // Default to false to match run-feature/resume-feature behavior.
+ // Worktrees should only be used when explicitly enabled by the user.
+ .followUpFeature(projectPath, featureId, prompt, imagePaths, useWorktrees ?? false)
.catch((error) => {
logger.error(`[AutoMode] Follow up feature ${featureId} error:`, error);
})

View File

@@ -31,7 +31,7 @@ export function createResumeFeatureHandler(autoModeService: AutoModeService) {
autoModeService
.resumeFeature(projectPath, featureId, useWorktrees ?? false)
.catch((error) => {
- logger.error(`[AutoMode] Resume feature ${featureId} error:`, error);
+ logger.error(`Resume feature ${featureId} error:`, error);
});
res.json({ success: true });

View File

@@ -0,0 +1,42 @@
/**
* Resume Interrupted Features Handler
*
* Checks for features that were interrupted (in pipeline steps or in_progress)
* when the server was restarted and resumes them.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
const logger = createLogger('ResumeInterrupted');
interface ResumeInterruptedRequest {
projectPath: string;
}
export function createResumeInterruptedHandler(autoModeService: AutoModeService) {
return async (req: Request, res: Response): Promise<void> => {
const { projectPath } = req.body as ResumeInterruptedRequest;
if (!projectPath) {
res.status(400).json({ error: 'Project path is required' });
return;
}
logger.info(`Checking for interrupted features in ${projectPath}`);
try {
await autoModeService.resumeInterruptedFeatures(projectPath);
res.json({
success: true,
message: 'Resume check completed',
});
} catch (error) {
logger.error('Error resuming interrupted features:', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
};
}
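
A hedged example of how a client might hit this route (the `/api/auto-mode` prefix is an assumption; the router above only defines `/resume-interrupted` relative to wherever createAutoModeRoutes is mounted):

// Hypothetical client call after the app reconnects to a restarted server.
await fetch('/api/auto-mode/resume-interrupted', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ projectPath: '/path/to/project' }),
});
// Responds with { success: true, message: 'Resume check completed' } on success.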

View File

@@ -31,7 +31,7 @@ export function createRunFeatureHandler(autoModeService: AutoModeService) {
autoModeService
.executeFeature(projectPath, featureId, useWorktrees ?? false, false)
.catch((error) => {
- logger.error(`[AutoMode] Feature ${featureId} error:`, error);
+ logger.error(`Feature ${featureId} error:`, error);
})
.finally(() => {
// Release the starting slot when execution completes (success or error)

View File

@@ -3,12 +3,31 @@
*/
import { createLogger } from '@automaker/utils';
import { ensureAutomakerDir, getAutomakerDir } from '@automaker/platform';
import * as secureFs from '../../lib/secure-fs.js';
import path from 'path';
import type { BacklogPlanResult } from '@automaker/types';
const logger = createLogger('BacklogPlan');
// State for tracking running generation
let isRunning = false;
let currentAbortController: AbortController | null = null;
let runningDetails: {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null = null;
const BACKLOG_PLAN_FILENAME = 'backlog-plan.json';
export interface StoredBacklogPlan {
savedAt: string;
prompt: string;
model?: string;
result: BacklogPlanResult;
}
export function getBacklogPlanStatus(): { isRunning: boolean } {
return { isRunning };
@@ -16,11 +35,67 @@ export function getBacklogPlanStatus(): { isRunning: boolean } {
export function setRunningState(running: boolean, abortController?: AbortController | null): void {
isRunning = running;
if (!running) {
runningDetails = null;
}
if (abortController !== undefined) {
currentAbortController = abortController;
}
}
export function setRunningDetails(
details: {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null
): void {
runningDetails = details;
}
export function getRunningDetails(): {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null {
return runningDetails;
}
function getBacklogPlanPath(projectPath: string): string {
return path.join(getAutomakerDir(projectPath), BACKLOG_PLAN_FILENAME);
}
export async function saveBacklogPlan(projectPath: string, plan: StoredBacklogPlan): Promise<void> {
await ensureAutomakerDir(projectPath);
const filePath = getBacklogPlanPath(projectPath);
await secureFs.writeFile(filePath, JSON.stringify(plan, null, 2), 'utf-8');
}
export async function loadBacklogPlan(projectPath: string): Promise<StoredBacklogPlan | null> {
try {
const filePath = getBacklogPlanPath(projectPath);
const raw = await secureFs.readFile(filePath, 'utf-8');
const parsed = JSON.parse(raw as string) as StoredBacklogPlan;
if (!Array.isArray(parsed?.result?.changes)) {
return null;
}
return parsed;
} catch {
return null;
}
}
export async function clearBacklogPlan(projectPath: string): Promise<void> {
try {
const filePath = getBacklogPlanPath(projectPath);
await secureFs.unlink(filePath);
} catch {
// ignore missing file
}
}
export function getAbortController(): AbortController | null {
return currentAbortController;
}
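
For illustration, a sketch of what ends up in the persisted backlog-plan.json via saveBacklogPlan (field values are made up; the file lives under the directory returned by getAutomakerDir):

// Illustrative values only.
const storedPlan: StoredBacklogPlan = {
  savedAt: new Date().toISOString(),
  prompt: 'Re-prioritize the backlog around the auth work',
  model: 'sonnet',
  result: { changes: [], summary: 'No changes proposed', dependencyUpdates: [] },
};
await saveBacklogPlan('/path/to/project', storedPlan);
const roundTripped = await loadBacklogPlan('/path/to/project'); // null if the file is missing or malformed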

View File

@@ -1,12 +1,29 @@
/**
* Generate backlog plan using Claude AI
*
* Model is configurable via phaseModels.backlogPlanningModel in settings
* (defaults to Sonnet). Can be overridden per-call via model parameter.
*/
import type { EventEmitter } from '../../lib/events.js';
import type { Feature, BacklogPlanResult, BacklogChange, DependencyUpdate } from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isCursorModel,
stripProviderPrefix,
type ThinkingLevel,
} from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { FeatureLoader } from '../../services/feature-loader.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
- import { logger, setRunningState, getErrorMessage } from './common.js';
+ import { extractJsonWithArray } from '../../lib/json-extractor.js';
import {
logger,
setRunningState,
setRunningDetails,
getErrorMessage,
saveBacklogPlan,
} from './common.js';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
@@ -39,24 +56,28 @@ function formatFeaturesForPrompt(features: Feature[]): string {
* Parse the AI response into a BacklogPlanResult
*/
function parsePlanResponse(response: string): BacklogPlanResult {
- try {
- // Try to extract JSON from the response
- const jsonMatch = response.match(/```json\n?([\s\S]*?)\n?```/);
- if (jsonMatch) {
- return JSON.parse(jsonMatch[1]);
- }
- // Try to parse the whole response as JSON
- return JSON.parse(response);
- } catch {
- // If parsing fails, return an empty result
- logger.warn('[BacklogPlan] Failed to parse AI response as JSON');
- return {
- changes: [],
- summary: 'Failed to parse AI response',
- dependencyUpdates: [],
- };
+ // Use shared JSON extraction utility for robust parsing
+ // extractJsonWithArray validates that 'changes' exists AND is an array
+ const parsed = extractJsonWithArray<BacklogPlanResult>(response, 'changes', {
+ logger,
+ });
+ if (parsed) {
+ return parsed;
}
+ // If parsing fails, log details and return an empty result
+ logger.warn('[BacklogPlan] Failed to parse AI response as JSON');
+ logger.warn('[BacklogPlan] Response text length:', response.length);
+ logger.warn('[BacklogPlan] Response preview:', response.slice(0, 500));
+ if (response.length === 0) {
+ logger.error('[BacklogPlan] Response text is EMPTY! No content was extracted from stream.');
+ }
+ return {
+ changes: [],
+ summary: 'Failed to parse AI response',
+ dependencyUpdates: [],
+ };
}
/**
@@ -96,9 +117,22 @@ export async function generateBacklogPlan(
content: 'Generating plan with AI...',
});
- // Get the model to use
- const effectiveModel = model || 'sonnet';
+ // Get the model to use from settings or provided override
+ let effectiveModel = model;
+ let thinkingLevel: ThinkingLevel | undefined;
+ if (!effectiveModel) {
+ const settings = await settingsService?.getGlobalSettings();
+ const phaseModelEntry =
+ settings?.phaseModels?.backlogPlanningModel || DEFAULT_PHASE_MODELS.backlogPlanningModel;
+ const resolved = resolvePhaseModel(phaseModelEntry);
+ effectiveModel = resolved.model;
+ thinkingLevel = resolved.thinkingLevel;
+ }
+ logger.info('[BacklogPlan] Using model:', effectiveModel);
const provider = ProviderFactory.getProviderForModel(effectiveModel);
+ // Strip provider prefix - providers expect bare model IDs
+ const bareModel = stripProviderPrefix(effectiveModel);
// Get autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
@@ -107,16 +141,38 @@ export async function generateBacklogPlan(
'[BacklogPlan]'
);
// For Cursor models, we need to combine prompts with explicit instructions
// because Cursor doesn't support systemPrompt separation like Claude SDK
let finalPrompt = userPrompt;
let finalSystemPrompt: string | undefined = systemPrompt;
if (isCursorModel(effectiveModel)) {
logger.info('[BacklogPlan] Using Cursor model - adding explicit no-file-write instructions');
finalPrompt = `${systemPrompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
2. DO NOT use Write, Edit, or any file modification tools.
3. Respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
4. Your entire response should be valid JSON starting with { and ending with }.
5. No text before or after the JSON object.
${userPrompt}`;
finalSystemPrompt = undefined; // System prompt is now embedded in the user prompt
}
// Execute the query
const stream = provider.executeQuery({
- prompt: userPrompt,
- model: effectiveModel,
+ prompt: finalPrompt,
+ model: bareModel,
cwd: projectPath,
- systemPrompt,
+ systemPrompt: finalSystemPrompt,
maxTurns: 1,
allowedTools: [], // No tools needed for this
abortController,
settingSources: autoLoadClaudeMd ? ['user', 'project'] : undefined,
+ readOnly: true, // Plan generation only generates text, doesn't write files
+ thinkingLevel, // Pass thinking level for extended thinking
});
let responseText = '';
@@ -134,12 +190,29 @@ export async function generateBacklogPlan(
}
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message (from Cursor provider)
logger.info('[BacklogPlan] Received result from Cursor, length:', msg.result.length);
logger.info('[BacklogPlan] Previous responseText length:', responseText.length);
if (msg.result.length > responseText.length) {
logger.info('[BacklogPlan] Using Cursor result (longer than accumulated text)');
responseText = msg.result;
} else {
logger.info('[BacklogPlan] Keeping accumulated text (longer than Cursor result)');
}
}
}
// Parse the response
const result = parsePlanResponse(responseText);
await saveBacklogPlan(projectPath, {
savedAt: new Date().toISOString(),
prompt,
model: effectiveModel,
result,
});
events.emit('backlog-plan:event', {
type: 'backlog_plan_complete',
result,
@@ -158,5 +231,6 @@ export async function generateBacklogPlan(
throw error;
} finally {
setRunningState(false, null);
setRunningDetails(null);
}
}

View File

@@ -9,6 +9,7 @@ import { createGenerateHandler } from './routes/generate.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import { createApplyHandler } from './routes/apply.js';
+ import { createClearHandler } from './routes/clear.js';
import type { SettingsService } from '../../services/settings-service.js';
export function createBacklogPlanRoutes(
@@ -23,8 +24,9 @@
createGenerateHandler(events, settingsService)
);
router.post('/stop', createStopHandler());
- router.get('/status', createStatusHandler());
+ router.get('/status', validatePathParams('projectPath'), createStatusHandler());
router.post('/apply', validatePathParams('projectPath'), createApplyHandler());
+ router.post('/clear', validatePathParams('projectPath'), createClearHandler());
return router;
}

View File

@@ -5,18 +5,29 @@
import type { Request, Response } from 'express';
import type { BacklogPlanResult, BacklogChange, Feature } from '@automaker/types';
import { FeatureLoader } from '../../../services/feature-loader.js';
- import { getErrorMessage, logError, logger } from '../common.js';
+ import { clearBacklogPlan, getErrorMessage, logError, logger } from '../common.js';
const featureLoader = new FeatureLoader();
export function createApplyHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
- const { projectPath, plan } = req.body as {
+ const {
+ projectPath,
+ plan,
+ branchName: rawBranchName,
+ } = req.body as {
projectPath: string;
plan: BacklogPlanResult;
+ branchName?: string;
};
+ // Validate branchName: must be undefined or a non-empty trimmed string
+ const branchName =
+ typeof rawBranchName === 'string' && rawBranchName.trim().length > 0
+ ? rawBranchName.trim()
+ : undefined;
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
@@ -82,6 +93,7 @@
dependencies: change.feature.dependencies,
priority: change.feature.priority,
status: 'backlog',
+ branchName,
});
appliedChanges.push(`added:${newFeature.id}`);
@@ -135,6 +147,17 @@
}
}
+ // Clear the plan before responding
+ try {
+ await clearBacklogPlan(projectPath);
+ } catch (error) {
+ logger.warn(
+ `[BacklogPlan] Failed to clear backlog plan after apply:`,
+ getErrorMessage(error)
+ );
+ // Don't throw - operation succeeded, just cleanup failed
+ }
res.json({
success: true,
appliedChanges,

View File

@@ -0,0 +1,25 @@
/**
* POST /clear endpoint - Clear saved backlog plan
*/
import type { Request, Response } from 'express';
import { clearBacklogPlan, getErrorMessage, logError } from '../common.js';
export function createClearHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body as { projectPath: string };
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
await clearBacklogPlan(projectPath);
res.json({ success: true });
} catch (error) {
logError(error, 'Clear backlog plan failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
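
A minimal sketch of invoking this endpoint from a client (the mount prefix is assumed; only `/clear` is defined here):

// Hypothetical call to discard a saved plan without applying it.
await fetch('/api/backlog-plan/clear', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ projectPath: '/path/to/project' }),
});
// Returns { success: true } even if no plan file existed, since clearBacklogPlan ignores a missing file.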

View File

@@ -4,7 +4,13 @@
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
- import { getBacklogPlanStatus, setRunningState, getErrorMessage, logError } from '../common.js';
+ import {
+ getBacklogPlanStatus,
+ setRunningState,
+ setRunningDetails,
+ getErrorMessage,
+ logError,
+ } from '../common.js';
import { generateBacklogPlan } from '../generate-plan.js';
import type { SettingsService } from '../../../services/settings-service.js';
@@ -37,6 +43,12 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
}
setRunningState(true);
+ setRunningDetails({
+ projectPath,
+ prompt,
+ model,
+ startedAt: new Date().toISOString(),
+ });
const abortController = new AbortController();
setRunningState(true, abortController);
@@ -51,6 +63,7 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
})
.finally(() => {
setRunningState(false, null);
+ setRunningDetails(null);
});
res.json({ success: true });

View File

@@ -3,13 +3,15 @@
*/
import type { Request, Response } from 'express';
- import { getBacklogPlanStatus, getErrorMessage, logError } from '../common.js';
+ import { getBacklogPlanStatus, loadBacklogPlan, getErrorMessage, logError } from '../common.js';
export function createStatusHandler() {
- return async (_req: Request, res: Response): Promise<void> => {
+ return async (req: Request, res: Response): Promise<void> => {
try {
const status = getBacklogPlanStatus();
- res.json({ success: true, ...status });
+ const projectPath = typeof req.query.projectPath === 'string' ? req.query.projectPath : '';
+ const savedPlan = projectPath ? await loadBacklogPlan(projectPath) : null;
+ res.json({ success: true, ...status, savedPlan });
} catch (error) {
logError(error, 'Get backlog plan status failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });

View File

@@ -3,7 +3,13 @@
*/
import type { Request, Response } from 'express';
- import { getAbortController, setRunningState, getErrorMessage, logError } from '../common.js';
+ import {
+ getAbortController,
+ setRunningState,
+ setRunningDetails,
+ getErrorMessage,
+ logError,
+ } from '../common.js';
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
@@ -12,6 +18,7 @@ export function createStopHandler() {
if (abortController) {
abortController.abort();
setRunningState(false, null);
+ setRunningDetails(null);
}
res.json({ success: true });
} catch (error) {

View File

@@ -1,5 +1,8 @@
import { Router, Request, Response } from 'express';
import { ClaudeUsageService } from '../../services/claude-usage-service.js';
+ import { createLogger } from '@automaker/utils';
+ const logger = createLogger('Claude');
export function createClaudeRoutes(service: ClaudeUsageService): Router {
const router = Router();
@@ -10,7 +13,10 @@ export function createClaudeRoutes(service: ClaudeUsageService): Router {
// Check if Claude CLI is available first
const isAvailable = await service.isAvailable();
if (!isAvailable) {
- res.status(503).json({
+ // IMPORTANT: This endpoint is behind Automaker session auth already.
+ // Use a 200 + error payload for Claude CLI issues so the UI doesn't
+ // interpret it as an invalid Automaker session (401/403 triggers logout).
+ res.status(200).json({
error: 'Claude CLI not found',
message: "Please install Claude Code CLI and run 'claude login' to authenticate",
});
@@ -23,17 +29,25 @@ export function createClaudeRoutes(service: ClaudeUsageService): Router {
const message = error instanceof Error ? error.message : 'Unknown error';
if (message.includes('Authentication required') || message.includes('token_expired')) {
- res.status(401).json({
+ // Do NOT use 401/403 here: that status code is reserved for Automaker session auth.
+ res.status(200).json({
error: 'Authentication required',
message: "Please run 'claude login' to authenticate",
});
+ } else if (message.includes('TRUST_PROMPT_PENDING')) {
+ // Trust prompt appeared but couldn't be auto-approved
+ res.status(200).json({
+ error: 'Trust prompt pending',
+ message:
+ 'Claude CLI needs folder permission. Please run "claude" in your terminal and approve access.',
+ });
} else if (message.includes('timed out')) {
- res.status(504).json({
+ res.status(200).json({
error: 'Command timed out',
message: 'The Claude CLI took too long to respond',
});
} else {
- console.error('Error fetching usage:', error);
+ logger.error('Error fetching usage:', error);
res.status(500).json({ error: message });
}
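
Because CLI problems now come back as 200 with an error payload, a caller has to inspect the body rather than the HTTP status. A hedged client-side sketch (the `/api/claude/usage` path and the UI helpers are assumptions, not part of this change):

const usageResponse = await fetch('/api/claude/usage');
const usage = await usageResponse.json();
if (usage.error) {
  // e.g. 'Claude CLI not found', 'Authentication required', 'Trust prompt pending', 'Command timed out'
  showUsageWarning(usage.message); // hypothetical UI helper
} else {
  renderUsage(usage); // hypothetical UI helper
}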
} }

View File

@@ -0,0 +1,78 @@
/**
* Common utilities for code-review routes
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
const logger = createLogger('CodeReview');
// Re-export shared utilities
export { getErrorMessageShared as getErrorMessage };
export const logError = createLogError(logger);
/**
* Review state interface
*/
interface ReviewState {
isRunning: boolean;
abortController: AbortController | null;
projectPath: string | null;
}
/**
* Shared state for code review operations
* Using an object to avoid mutable `let` exports which can cause issues in ES modules
*/
const reviewState: ReviewState = {
isRunning: false,
abortController: null,
projectPath: null,
};
/**
* Check if a review is currently running
*/
export function isRunning(): boolean {
return reviewState.isRunning;
}
/**
* Get the current abort controller (for stopping reviews)
*/
export function getAbortController(): AbortController | null {
return reviewState.abortController;
}
/**
* Get the current project path being reviewed
*/
export function getCurrentProjectPath(): string | null {
return reviewState.projectPath;
}
/**
* Set the running state for code review operations
*/
export function setRunningState(
running: boolean,
controller: AbortController | null = null,
projectPath: string | null = null
): void {
reviewState.isRunning = running;
reviewState.abortController = controller;
reviewState.projectPath = projectPath;
}
/**
* Get the current review status
*/
export function getReviewStatus(): {
isRunning: boolean;
projectPath: string | null;
} {
return {
isRunning: reviewState.isRunning,
projectPath: reviewState.projectPath,
};
}
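
A small sketch of how a route handler is expected to drive this shared state, based only on the helpers defined above:

// Sketch: guard against concurrent reviews, then always reset state when done.
if (!isRunning()) {
  const controller = new AbortController();
  setRunningState(true, controller, '/path/to/project');
  try {
    // ... run the review with `controller` so /stop can abort it ...
  } finally {
    setRunningState(false, null, null);
  }
}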

View File

@@ -0,0 +1,40 @@
/**
* Code Review routes - HTTP API for triggering and managing code reviews
*
* Provides endpoints for:
* - Triggering code reviews on projects
* - Checking review status
* - Stopping in-progress reviews
*
* Uses the CodeReviewService for actual review execution with AI providers.
*/
import { Router } from 'express';
import type { CodeReviewService } from '../../services/code-review-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createTriggerHandler } from './routes/trigger.js';
import { createStatusHandler } from './routes/status.js';
import { createStopHandler } from './routes/stop.js';
import { createProvidersHandler } from './routes/providers.js';
export function createCodeReviewRoutes(codeReviewService: CodeReviewService): Router {
const router = Router();
// POST /trigger - Start a new code review
router.post(
'/trigger',
validatePathParams('projectPath'),
createTriggerHandler(codeReviewService)
);
// GET /status - Get current review status
router.get('/status', createStatusHandler());
// POST /stop - Stop current review
router.post('/stop', createStopHandler());
// GET /providers - Get available providers and their status
router.get('/providers', createProvidersHandler(codeReviewService));
return router;
}

View File

@@ -0,0 +1,38 @@
/**
* GET /providers endpoint - Get available code review providers
*
* Returns the status of all available AI providers that can be used for code reviews.
*/
import type { Request, Response } from 'express';
import type { CodeReviewService } from '../../../services/code-review-service.js';
import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
const logger = createLogger('CodeReview');
export function createProvidersHandler(codeReviewService: CodeReviewService) {
return async (req: Request, res: Response): Promise<void> => {
logger.debug('========== /providers endpoint called ==========');
try {
// Check if refresh is requested
const forceRefresh = req.query.refresh === 'true';
const providers = await codeReviewService.getProviderStatus(forceRefresh);
const bestProvider = await codeReviewService.getBestProvider();
res.json({
success: true,
providers,
recommended: bestProvider,
});
} catch (error) {
logError(error, 'Providers handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}

View File

@@ -0,0 +1,32 @@
/**
* GET /status endpoint - Get current code review status
*
* Returns whether a code review is currently running and which project.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import { getReviewStatus, getErrorMessage, logError } from '../common.js';
const logger = createLogger('CodeReview');
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
logger.debug('========== /status endpoint called ==========');
try {
const status = getReviewStatus();
res.json({
success: true,
...status,
});
} catch (error) {
logError(error, 'Status handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}

View File

@@ -0,0 +1,54 @@
/**
* POST /stop endpoint - Stop the current code review
*
* Aborts any running code review operation.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import {
isRunning,
getAbortController,
setRunningState,
getErrorMessage,
logError,
} from '../common.js';
const logger = createLogger('CodeReview');
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
logger.info('========== /stop endpoint called ==========');
try {
if (!isRunning()) {
res.json({
success: true,
message: 'No code review is currently running',
});
return;
}
// Abort the current operation
const abortController = getAbortController();
if (abortController) {
abortController.abort();
logger.info('Code review aborted');
}
// Reset state
setRunningState(false, null, null);
res.json({
success: true,
message: 'Code review stopped',
});
} catch (error) {
logError(error, 'Stop handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}

View File

@@ -0,0 +1,188 @@
/**
* POST /trigger endpoint - Trigger a code review
*
* Starts an asynchronous code review on the specified project.
* Progress updates are streamed via WebSocket events.
*/
import type { Request, Response } from 'express';
import type { CodeReviewService } from '../../../services/code-review-service.js';
import type { CodeReviewCategory, ThinkingLevel, ModelId } from '@automaker/types';
import { createLogger } from '@automaker/utils';
import { isRunning, setRunningState, getErrorMessage, logError } from '../common.js';
const logger = createLogger('CodeReview');
/**
* Maximum number of files allowed per review request
*/
const MAX_FILES_PER_REQUEST = 100;
/**
* Maximum length for baseRef parameter
*/
const MAX_BASE_REF_LENGTH = 256;
/**
* Valid categories for code review
*/
const VALID_CATEGORIES: CodeReviewCategory[] = [
'tech_stack',
'security',
'code_quality',
'implementation',
'architecture',
'performance',
'testing',
'documentation',
];
/**
* Valid thinking levels
*/
const VALID_THINKING_LEVELS: ThinkingLevel[] = ['low', 'medium', 'high'];
interface TriggerRequestBody {
projectPath: string;
files?: string[];
baseRef?: string;
categories?: CodeReviewCategory[];
autoFix?: boolean;
model?: ModelId;
thinkingLevel?: ThinkingLevel;
}
/**
* Validate and sanitize the request body
*/
function validateRequestBody(body: TriggerRequestBody): { valid: boolean; error?: string } {
const { files, baseRef, categories, autoFix, thinkingLevel } = body;
// Validate files array
if (files !== undefined) {
if (!Array.isArray(files)) {
return { valid: false, error: 'files must be an array' };
}
if (files.length > MAX_FILES_PER_REQUEST) {
return { valid: false, error: `Maximum ${MAX_FILES_PER_REQUEST} files allowed per request` };
}
for (const file of files) {
if (typeof file !== 'string') {
return { valid: false, error: 'Each file must be a string' };
}
if (file.length > 500) {
return { valid: false, error: 'File path too long' };
}
}
}
// Validate baseRef
if (baseRef !== undefined) {
if (typeof baseRef !== 'string') {
return { valid: false, error: 'baseRef must be a string' };
}
if (baseRef.length > MAX_BASE_REF_LENGTH) {
return { valid: false, error: 'baseRef is too long' };
}
}
// Validate categories
if (categories !== undefined) {
if (!Array.isArray(categories)) {
return { valid: false, error: 'categories must be an array' };
}
for (const category of categories) {
if (!VALID_CATEGORIES.includes(category)) {
return { valid: false, error: `Invalid category: ${category}` };
}
}
}
// Validate autoFix
if (autoFix !== undefined && typeof autoFix !== 'boolean') {
return { valid: false, error: 'autoFix must be a boolean' };
}
// Validate thinkingLevel
if (thinkingLevel !== undefined) {
if (!VALID_THINKING_LEVELS.includes(thinkingLevel)) {
return { valid: false, error: `Invalid thinkingLevel: ${thinkingLevel}` };
}
}
return { valid: true };
}
export function createTriggerHandler(codeReviewService: CodeReviewService) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /trigger endpoint called ==========');
try {
const body = req.body as TriggerRequestBody;
const { projectPath, files, baseRef, categories, autoFix, model, thinkingLevel } = body;
// Validate required parameters
if (!projectPath) {
res.status(400).json({
success: false,
error: 'projectPath is required',
});
return;
}
// SECURITY: Validate all input parameters
const validation = validateRequestBody(body);
if (!validation.valid) {
res.status(400).json({
success: false,
error: validation.error,
});
return;
}
// Check if a review is already running
if (isRunning()) {
res.status(409).json({
success: false,
error: 'A code review is already in progress',
});
return;
}
// Set up abort controller for cancellation
const abortController = new AbortController();
setRunningState(true, abortController, projectPath);
// Start the review in the background
codeReviewService
.executeReview({
projectPath,
files,
baseRef,
categories,
autoFix,
model,
thinkingLevel,
abortController,
})
.catch((error) => {
logError(error, 'Code review failed');
})
.finally(() => {
setRunningState(false, null, null);
});
// Return immediate response
res.json({
success: true,
message: 'Code review started',
});
} catch (error) {
logError(error, 'Trigger handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}
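
A hedged usage sketch for the trigger endpoint above: the request body mirrors TriggerRequestBody, and the handler responds immediately while the review runs in the background. The '/api/code-review' mount prefix and port are assumptions for illustration, not taken from this diff.

// Example client call (sketch): trigger a review with a TriggerRequestBody-shaped payload.
async function triggerReview(): Promise<void> {
  const response = await fetch('http://localhost:3000/api/code-review/trigger', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      projectPath: '/path/to/project',
      baseRef: 'main',
      categories: ['security', 'code_quality'],
      autoFix: false,
      thinkingLevel: 'medium',
    }),
  });
  const result = await response.json();
  // 409 means a review is already in progress; 400 means validation failed.
  console.log(response.status, result);
}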

View File

@@ -0,0 +1,90 @@
import { Router, Request, Response } from 'express';
import { CodexUsageService } from '../../services/codex-usage-service.js';
import { CodexModelCacheService } from '../../services/codex-model-cache-service.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Codex');
export function createCodexRoutes(
usageService: CodexUsageService,
modelCacheService: CodexModelCacheService
): Router {
const router = Router();
// Get current usage (attempts to fetch from Codex CLI)
router.get('/usage', async (_req: Request, res: Response) => {
try {
// Check if Codex CLI is available first
const isAvailable = await usageService.isAvailable();
if (!isAvailable) {
// IMPORTANT: This endpoint is behind Automaker session auth already.
// Use a 200 + error payload for Codex CLI issues so the UI doesn't
// interpret it as an invalid Automaker session (401/403 triggers logout).
res.status(200).json({
error: 'Codex CLI not found',
message: "Please install Codex CLI and run 'codex login' to authenticate",
});
return;
}
const usage = await usageService.fetchUsageData();
res.json(usage);
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
if (message.includes('not authenticated') || message.includes('login')) {
// Do NOT use 401/403 here: that status code is reserved for Automaker session auth.
res.status(200).json({
error: 'Authentication required',
message: "Please run 'codex login' to authenticate",
});
} else if (message.includes('not available') || message.includes('does not provide')) {
// This is the expected case - Codex doesn't provide usage stats
res.status(200).json({
error: 'Usage statistics not available',
message: message,
});
} else if (message.includes('timed out')) {
res.status(200).json({
error: 'Command timed out',
message: 'The Codex CLI took too long to respond',
});
} else {
logger.error('Error fetching usage:', error);
res.status(500).json({ error: message });
}
}
});
// Get available Codex models (cached)
router.get('/models', async (req: Request, res: Response) => {
try {
const forceRefresh = req.query.refresh === 'true';
const { models, cachedAt } = await modelCacheService.getModelsWithMetadata(forceRefresh);
if (models.length === 0) {
res.status(503).json({
success: false,
error: 'Codex CLI not available or not authenticated',
message: "Please install Codex CLI and run 'codex login' to authenticate",
});
return;
}
res.json({
success: true,
models,
cachedAt,
});
} catch (error) {
logger.error('Error fetching models:', error);
const message = error instanceof Error ? error.message : 'Unknown error';
res.status(500).json({
success: false,
error: message,
});
}
});
return router;
}
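
Because the /usage route above deliberately returns 200 with an error payload for Codex CLI problems, a client has to branch on the payload rather than on the status code. A minimal client-side sketch, assuming an '/api/codex' mount prefix (not shown in this diff):

// Client-side sketch of the soft-error convention used by GET /usage above.
interface CodexUsagePayload {
  error?: string;
  message?: string;
  [key: string]: unknown;
}

async function fetchCodexUsage(): Promise<CodexUsagePayload | null> {
  const res = await fetch('/api/codex/usage');
  if (!res.ok) {
    // Genuine server error (500) from the route above.
    console.error(`Codex usage request failed: ${res.status}`);
    return null;
  }
  const data = (await res.json()) as CodexUsagePayload;
  if (data.error) {
    // Soft failure: Codex CLI missing or unauthenticated. Show a hint, do not log the user out.
    console.warn(`${data.error}: ${data.message ?? ''}`);
    return null;
  }
  return data;
}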

View File

@@ -1,8 +1,9 @@
/**
* POST /context/describe-file endpoint - Generate description for a text file
*
- * Uses Claude Haiku to analyze a text file and generate a concise description
- * suitable for context file metadata.
+ * Uses AI to analyze a text file and generate a concise description
+ * suitable for context file metadata. Model is configurable via
+ * phaseModels.fileDescriptionModel in settings (defaults to Haiku).
*
* SECURITY: This endpoint validates file paths against ALLOWED_ROOT_DIRECTORY
* and reads file content directly (not via Claude's Read tool) to prevent
@@ -10,15 +11,18 @@
*/
import type { Request, Response } from 'express';
- import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
- import { CLAUDE_MODEL_MAP } from '@automaker/types';
+ import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { PathNotAllowedError } from '@automaker/platform';
- import { createCustomOptions } from '../../../lib/sdk-options.js';
+ import { resolvePhaseModel } from '@automaker/model-resolver';
+ import { simpleQuery } from '../../../providers/simple-query-service.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
- import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
+ import {
+ getAutoLoadClaudeMdSetting,
+ getPromptCustomization,
+ } from '../../../lib/settings-helpers.js';
const logger = createLogger('DescribeFile');
@@ -46,31 +50,6 @@ interface DescribeFileErrorResponse {
error: string;
}
- /**
- * Extract text content from Claude SDK response messages
- */
- async function extractTextFromStream(
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- stream: AsyncIterable<any>
- ): Promise<string> {
- let responseText = '';
- for await (const msg of stream) {
- if (msg.type === 'assistant' && msg.message?.content) {
- const blocks = msg.message.content as Array<{ type: string; text?: string }>;
- for (const block of blocks) {
- if (block.type === 'text' && block.text) {
- responseText += block.text;
- }
- }
- } else if (msg.type === 'result' && msg.subtype === 'success') {
- responseText = msg.result || responseText;
- }
- }
- return responseText;
- }
/**
* Create the describe-file request handler
*
@@ -94,7 +73,7 @@ export function createDescribeFileHandler(
return;
}
- logger.info(`[DescribeFile] Starting description generation for: ${filePath}`);
+ logger.info(`Starting description generation for: ${filePath}`);
// Resolve the path for logging and cwd derivation
const resolvedPath = secureFs.resolvePath(filePath);
@@ -109,7 +88,7 @@
} catch (readError) {
// Path not allowed - return 403 Forbidden
if (readError instanceof PathNotAllowedError) {
- logger.warn(`[DescribeFile] Path not allowed: ${filePath}`);
+ logger.warn(`Path not allowed: ${filePath}`);
const response: DescribeFileErrorResponse = {
success: false,
error: 'File path is not within the allowed directory',
@@ -125,7 +104,7 @@
'code' in readError &&
readError.code === 'ENOENT'
) {
- logger.warn(`[DescribeFile] File not found: ${resolvedPath}`);
+ logger.warn(`File not found: ${resolvedPath}`);
const response: DescribeFileErrorResponse = {
success: false,
error: `File not found: ${filePath}`,
@@ -135,7 +114,7 @@
}
const errorMessage = readError instanceof Error ? readError.message : 'Unknown error';
- logger.error(`[DescribeFile] Failed to read file: ${errorMessage}`);
+ logger.error(`Failed to read file: ${errorMessage}`);
const response: DescribeFileErrorResponse = {
success: false,
error: `Failed to read file: ${errorMessage}`,
@@ -154,18 +133,17 @@
// Get the filename for context
const fileName = path.basename(resolvedPath);
+ // Get customized prompts from settings
+ const prompts = await getPromptCustomization(settingsService, '[DescribeFile]');
// Build prompt with file content passed as structured data
// The file content is included directly, not via tool invocation
- const instructionText = `Analyze the following file and provide a 1-2 sentence description suitable for use as context in an AI coding assistant. Focus on what the file contains, its purpose, and why an AI agent might want to use this context in the future (e.g., "API documentation for the authentication endpoints", "Configuration file for database connections", "Coding style guidelines for the project").
- Respond with ONLY the description text, no additional formatting, preamble, or explanation.
- File: ${fileName}${truncated ? ' (truncated)' : ''}`;
- const promptContent = [
- { type: 'text' as const, text: instructionText },
- { type: 'text' as const, text: `\n\n--- FILE CONTENT ---\n${contentToAnalyze}` },
- ];
+ const prompt = `${prompts.contextDescription.describeFilePrompt}
+ File: ${fileName}${truncated ? ' (truncated)' : ''}
+ --- FILE CONTENT ---
+ ${contentToAnalyze}`;
// Use the file's directory as the working directory
const cwd = path.dirname(resolvedPath);
@@ -177,30 +155,29 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
'[DescribeFile]'
);
- // Use centralized SDK options with proper cwd validation
- // No tools needed since we're passing file content directly
- const sdkOptions = createCustomOptions({
+ // Get model from phase settings
+ const settings = await settingsService?.getGlobalSettings();
+ logger.info(`Raw phaseModels from settings:`, JSON.stringify(settings?.phaseModels, null, 2));
+ const phaseModelEntry =
+ settings?.phaseModels?.fileDescriptionModel || DEFAULT_PHASE_MODELS.fileDescriptionModel;
+ logger.info(`fileDescriptionModel entry:`, JSON.stringify(phaseModelEntry));
+ const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
+ logger.info(`Resolved model: ${model}, thinkingLevel: ${thinkingLevel}`);
+ // Use simpleQuery - provider abstraction handles routing to correct provider
+ const result = await simpleQuery({
+ prompt,
+ model,
cwd,
- model: CLAUDE_MODEL_MAP.haiku,
maxTurns: 1,
allowedTools: [],
- autoLoadClaudeMd,
- sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
- settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
+ thinkingLevel,
+ readOnly: true, // File description only reads, doesn't write
});
- const promptGenerator = (async function* () {
- yield {
- type: 'user' as const,
- session_id: '',
- message: { role: 'user' as const, content: promptContent },
- parent_tool_use_id: null,
- };
- })();
- const stream = query({ prompt: promptGenerator, options: sdkOptions });
- // Extract the description from the response
- const description = await extractTextFromStream(stream);
+ const description = result.text;
if (!description || description.trim().length === 0) {
logger.warn('Received empty response from Claude');
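
Both describe-file and describe-image now funnel single-shot calls through simpleQuery. The shape below is only an inference from the call sites visible in this diff (options in, an object with a text field out); the actual simple-query-service definition is not part of this changeset.

// Hedged type sketch of the simpleQuery contract as it appears from the call sites above.
import type { ThinkingLevel } from '@automaker/types';

type PromptBlock =
  | { type: 'text'; text: string }
  | { type: 'image'; source: { type: 'base64'; media_type: string; data: string } };

interface SimpleQueryOptions {
  prompt: string | PromptBlock[];
  model: string;
  cwd: string;
  maxTurns?: number;
  allowedTools?: string[];
  thinkingLevel?: ThinkingLevel;
  readOnly?: boolean;
}

interface SimpleQueryResult {
  text: string;
}

// Declaration only - the real implementation lives in providers/simple-query-service.
declare function simpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult>;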

View File

@@ -1,8 +1,9 @@
/**
* POST /context/describe-image endpoint - Generate description for an image
*
- * Uses Claude Haiku to analyze an image and generate a concise description
- * suitable for context file metadata.
+ * Uses AI to analyze an image and generate a concise description
+ * suitable for context file metadata. Model is configurable via
+ * phaseModels.imageDescriptionModel in settings (defaults to Haiku).
*
* IMPORTANT:
* The agent runner (chat/auto-mode) sends images as multi-part content blocks (base64 image blocks),
@@ -11,14 +12,17 @@
*/
import type { Request, Response } from 'express';
- import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger, readImageAsBase64 } from '@automaker/utils';
- import { CLAUDE_MODEL_MAP } from '@automaker/types';
+ import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
- import { createCustomOptions } from '../../../lib/sdk-options.js';
+ import { resolvePhaseModel } from '@automaker/model-resolver';
+ import { simpleQuery } from '../../../providers/simple-query-service.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
- import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
+ import {
+ getAutoLoadClaudeMdSetting,
+ getPromptCustomization,
+ } from '../../../lib/settings-helpers.js';
const logger = createLogger('DescribeImage');
@@ -175,57 +179,10 @@ function mapDescribeImageError(rawMessage: string | undefined): {
return baseResponse;
}
- /**
- * Extract text content from Claude SDK response messages and log high-signal stream events.
- */
- async function extractTextFromStream(
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- stream: AsyncIterable<any>,
- requestId: string
- ): Promise<string> {
- let responseText = '';
- let messageCount = 0;
- logger.info(`[${requestId}] [Stream] Begin reading SDK stream...`);
- for await (const msg of stream) {
- messageCount++;
- const msgType = msg?.type;
- const msgSubtype = msg?.subtype;
- // Keep this concise but informative. Full error object is logged in catch blocks.
- logger.info(
- `[${requestId}] [Stream] #${messageCount} type=${String(msgType)} subtype=${String(msgSubtype ?? '')}`
- );
- if (msgType === 'assistant' && msg.message?.content) {
- const blocks = msg.message.content as Array<{ type: string; text?: string }>;
- logger.info(`[${requestId}] [Stream] assistant blocks=${blocks.length}`);
- for (const block of blocks) {
- if (block.type === 'text' && block.text) {
- responseText += block.text;
- }
- }
- }
- if (msgType === 'result' && msgSubtype === 'success') {
- if (typeof msg.result === 'string' && msg.result.length > 0) {
- responseText = msg.result;
- }
- }
- }
- logger.info(
- `[${requestId}] [Stream] End of stream. messages=${messageCount} textLength=${responseText.length}`
- );
- return responseText;
- }
/**
* Create the describe-image request handler
*
- * Uses Claude SDK query with multi-part content blocks to include the image (base64),
+ * Uses the provider abstraction with multi-part content blocks to include the image (base64),
* matching the agent runner behavior.
*
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
@@ -306,27 +263,6 @@ export function createDescribeImageHandler(
`[${requestId}] image meta filename=${imageData.filename} mime=${imageData.mimeType} base64Len=${base64Length} estBytes=${estimatedBytes}`
);
- // Build multi-part prompt with image block (no Read tool required)
- const instructionText =
- `Describe this image in 1-2 sentences suitable for use as context in an AI coding assistant. ` +
- `Focus on what the image shows and its purpose (e.g., "UI mockup showing login form with email/password fields", ` +
- `"Architecture diagram of microservices", "Screenshot of error message in terminal").\n\n` +
- `Respond with ONLY the description text, no additional formatting, preamble, or explanation.`;
- const promptContent = [
- { type: 'text' as const, text: instructionText },
- {
- type: 'image' as const,
- source: {
- type: 'base64' as const,
- media_type: imageData.mimeType,
- data: imageData.base64,
- },
- },
- ];
- logger.info(`[${requestId}] Built multi-part prompt blocks=${promptContent.length}`);
const cwd = path.dirname(actualPath);
logger.info(`[${requestId}] Using cwd=${cwd}`);
@@ -337,43 +273,66 @@
'[DescribeImage]'
);
- // Use the same centralized option builder used across the server (validates cwd)
- const sdkOptions = createCustomOptions({
+ // Get model from phase settings
+ const settings = await settingsService?.getGlobalSettings();
+ const phaseModelEntry =
+ settings?.phaseModels?.imageDescriptionModel || DEFAULT_PHASE_MODELS.imageDescriptionModel;
+ const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
+ logger.info(`[${requestId}] Using model: ${model}`);
+ // Get customized prompts from settings
+ const prompts = await getPromptCustomization(settingsService, '[DescribeImage]');
+ // Build the instruction text from centralized prompts
+ const instructionText = prompts.contextDescription.describeImagePrompt;
+ // Build prompt based on provider capability
+ // Some providers (like Cursor) may not support image content blocks
+ let prompt: string | Array<{ type: string; text?: string; source?: object }>;
+ if (isCursorModel(model)) {
+ // Cursor may not support base64 image blocks directly
+ // Use text prompt with image path reference
+ logger.info(`[${requestId}] Using text prompt for Cursor model`);
+ prompt = `${instructionText}\n\nImage file: ${actualPath}\nMIME type: ${imageData.mimeType}`;
+ } else {
+ // Claude and other vision-capable models support multi-part prompts with images
+ logger.info(`[${requestId}] Using multi-part prompt with image block`);
+ prompt = [
+ { type: 'text', text: instructionText },
+ {
+ type: 'image',
+ source: {
+ type: 'base64',
+ media_type: imageData.mimeType,
+ data: imageData.base64,
+ },
+ },
+ ];
+ }
+ logger.info(`[${requestId}] Calling simpleQuery...`);
+ const queryStart = Date.now();
+ // Use simpleQuery - provider abstraction handles routing
+ const result = await simpleQuery({
+ prompt,
+ model,
cwd,
- model: CLAUDE_MODEL_MAP.haiku,
maxTurns: 1,
- allowedTools: [],
- autoLoadClaudeMd,
- sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
- settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
+ allowedTools: isCursorModel(model) ? ['Read'] : [], // Allow Read for Cursor to read image if needed
+ thinkingLevel,
+ readOnly: true, // Image description only reads, doesn't write
});
- logger.info(
- `[${requestId}] SDK options model=${sdkOptions.model} maxTurns=${sdkOptions.maxTurns} allowedTools=${JSON.stringify(
- sdkOptions.allowedTools
- )} sandbox=${JSON.stringify(sdkOptions.sandbox)}`
- );
- const promptGenerator = (async function* () {
- yield {
- type: 'user' as const,
- session_id: '',
- message: { role: 'user' as const, content: promptContent },
- parent_tool_use_id: null,
- };
- })();
- logger.info(`[${requestId}] Calling query()...`);
- const queryStart = Date.now();
- const stream = query({ prompt: promptGenerator, options: sdkOptions });
- logger.info(`[${requestId}] query() returned stream in ${Date.now() - queryStart}ms`);
- // Extract the description from the response
- const extractStart = Date.now();
- const description = await extractTextFromStream(stream, requestId);
- logger.info(`[${requestId}] extractMs=${Date.now() - extractStart}`);
+ logger.info(`[${requestId}] simpleQuery completed in ${Date.now() - queryStart}ms`);
+ const description = result.text;
if (!description || description.trim().length === 0) {
- logger.warn(`[${requestId}] Received empty response from Claude`);
+ logger.warn(`[${requestId}] Received empty response from AI`);
const response: DescribeImageErrorResponse = {
success: false,
error: 'Failed to generate description - empty response',

View File

@@ -1,15 +1,16 @@
/**
* POST /enhance-prompt endpoint - Enhance user input text
*
- * Uses Claude AI to enhance text based on the specified enhancement mode.
- * Supports modes: improve, technical, simplify, acceptance
+ * Uses the provider abstraction to enhance text based on the specified
+ * enhancement mode. Works with any configured provider (Claude, Cursor, etc.).
+ * Supports modes: improve, technical, simplify, acceptance, ux-reviewer
*/
import type { Request, Response } from 'express';
- import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { resolveModelString } from '@automaker/model-resolver';
- import { CLAUDE_MODEL_MAP } from '@automaker/types';
+ import { CLAUDE_MODEL_MAP, type ThinkingLevel } from '@automaker/types';
+ import { simpleQuery } from '../../../providers/simple-query-service.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getPromptCustomization } from '../../../lib/settings-helpers.js';
import {
@@ -30,6 +31,8 @@ interface EnhanceRequestBody {
enhancementMode: string;
/** Optional model override */
model?: string;
+ /** Optional thinking level for Claude models */
+ thinkingLevel?: ThinkingLevel;
}
/**
@@ -48,39 +51,6 @@ interface EnhanceErrorResponse {
error: string;
}
- /**
- * Extract text content from Claude SDK response messages
- *
- * @param stream - The async iterable from the query function
- * @returns The extracted text content
- */
- async function extractTextFromStream(
- stream: AsyncIterable<{
- type: string;
- subtype?: string;
- result?: string;
- message?: {
- content?: Array<{ type: string; text?: string }>;
- };
- }>
- ): Promise<string> {
- let responseText = '';
- for await (const msg of stream) {
- if (msg.type === 'assistant' && msg.message?.content) {
- for (const block of msg.message.content) {
- if (block.type === 'text' && block.text) {
- responseText += block.text;
- }
- }
- } else if (msg.type === 'result' && msg.subtype === 'success') {
- responseText = msg.result || responseText;
- }
- }
- return responseText;
- }
/**
* Create the enhance request handler
*
@@ -92,7 +62,8 @@ export function createEnhanceHandler(
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
try {
- const { originalText, enhancementMode, model } = req.body as EnhanceRequestBody;
+ const { originalText, enhancementMode, model, thinkingLevel } =
+ req.body as EnhanceRequestBody;
// Validate required fields
if (!originalText || typeof originalText !== 'string') {
@@ -141,13 +112,13 @@
technical: prompts.enhancement.technicalSystemPrompt,
simplify: prompts.enhancement.simplifySystemPrompt,
acceptance: prompts.enhancement.acceptanceSystemPrompt,
+ 'ux-reviewer': prompts.enhancement.uxReviewerSystemPrompt,
};
const systemPrompt = systemPromptMap[validMode];
logger.debug(`Using ${validMode} system prompt (length: ${systemPrompt.length} chars)`);
// Build the user prompt with few-shot examples
- // This helps the model understand this is text transformation, not a coding task
const userPrompt = buildUserPrompt(validMode, trimmedText, true);
// Resolve the model - use the passed model, default to sonnet for quality
@@ -155,24 +126,23 @@
logger.debug(`Using model: ${resolvedModel}`);
- // Call Claude SDK with minimal configuration for text transformation
- // Key: no tools, just text completion
- const stream = query({
- prompt: userPrompt,
- options: {
- model: resolvedModel,
- systemPrompt,
- maxTurns: 1,
- allowedTools: [],
- permissionMode: 'acceptEdits',
- },
- });
- // Extract the enhanced text from the response
- const enhancedText = await extractTextFromStream(stream);
+ // Use simpleQuery - provider abstraction handles routing to correct provider
+ // The system prompt is combined with user prompt since some providers
+ // don't have a separate system prompt concept
+ const result = await simpleQuery({
+ prompt: `${systemPrompt}\n\n${userPrompt}`,
+ model: resolvedModel,
+ cwd: process.cwd(), // Enhancement doesn't need a specific working directory
+ maxTurns: 1,
+ allowedTools: [],
+ thinkingLevel,
+ readOnly: true, // Prompt enhancement only generates text, doesn't write files
+ });
+ const enhancedText = result.text;
if (!enhancedText || enhancedText.trim().length === 0) {
- logger.warn('Received empty response from Claude');
+ logger.warn('Received empty response from AI');
const response: EnhanceErrorResponse = {
success: false,
error: 'Failed to generate enhanced text - empty response',
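
A hedged example of exercising the updated endpoint with the new optional thinkingLevel field. The route path and the success-response field name are assumptions for illustration; only the request fields come from EnhanceRequestBody above.

// Client-side sketch: enhance a short requirement using the 'technical' mode.
async function enhanceText(): Promise<string | null> {
  const res = await fetch('/api/enhance-prompt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      originalText: 'make login faster',
      enhancementMode: 'technical',
      thinkingLevel: 'low', // optional field added in this diff
    }),
  });
  const data = await res.json();
  // Field name `enhancedText` is assumed from the handler's variable naming,
  // not from a response interface shown in this diff.
  return data.success ? (data.enhancedText ?? null) : null;
}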

View File

@@ -0,0 +1,19 @@
/**
* Common utilities for event history routes
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
/** Logger instance for event history operations */
export const logger = createLogger('EventHistory');
/**
* Extract user-friendly error message from error objects
*/
export { getErrorMessageShared as getErrorMessage };
/**
* Log error with automatic logger binding
*/
export const logError = createLogError(logger);

View File

@@ -0,0 +1,68 @@
/**
* Event History routes - HTTP API for event history management
*
* Provides endpoints for:
* - Listing events with filtering
* - Getting individual event details
* - Deleting events
* - Clearing all events
* - Replaying events to test hooks
*
* Mounted at /api/event-history in the main server.
*/
import { Router } from 'express';
import type { EventHistoryService } from '../../services/event-history-service.js';
import type { SettingsService } from '../../services/settings-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createListHandler } from './routes/list.js';
import { createGetHandler } from './routes/get.js';
import { createDeleteHandler } from './routes/delete.js';
import { createClearHandler } from './routes/clear.js';
import { createReplayHandler } from './routes/replay.js';
/**
* Create event history router with all endpoints
*
* Endpoints:
* - POST /list - List events with optional filtering
* - POST /get - Get a single event by ID
* - POST /delete - Delete an event by ID
* - POST /clear - Clear all events for a project
* - POST /replay - Replay an event to trigger hooks
*
* @param eventHistoryService - Instance of EventHistoryService
* @param settingsService - Instance of SettingsService (for replay)
* @returns Express Router configured with all event history endpoints
*/
export function createEventHistoryRoutes(
eventHistoryService: EventHistoryService,
settingsService: SettingsService
): Router {
const router = Router();
// List events with filtering
router.post('/list', validatePathParams('projectPath'), createListHandler(eventHistoryService));
// Get single event
router.post('/get', validatePathParams('projectPath'), createGetHandler(eventHistoryService));
// Delete event
router.post(
'/delete',
validatePathParams('projectPath'),
createDeleteHandler(eventHistoryService)
);
// Clear all events
router.post('/clear', validatePathParams('projectPath'), createClearHandler(eventHistoryService));
// Replay event
router.post(
'/replay',
validatePathParams('projectPath'),
createReplayHandler(eventHistoryService, settingsService)
);
return router;
}
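
A small wiring sketch for the router above, using the /api/event-history mount path named in the header comment; the import paths and service construction are assumptions.

// Sketch: mount the event history API during server bootstrap.
import express from 'express';
import type { EventHistoryService } from '../../services/event-history-service.js';
import type { SettingsService } from '../../services/settings-service.js';
import { createEventHistoryRoutes } from './index.js'; // import path assumed

export function mountEventHistory(
  app: express.Express,
  eventHistoryService: EventHistoryService,
  settingsService: SettingsService
): void {
  app.use(express.json());
  app.use('/api/event-history', createEventHistoryRoutes(eventHistoryService, settingsService));
}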

Some files were not shown because too many files have changed in this diff.