Compare commits

...

125 Commits

Author SHA1 Message Date
Shirone
5b620011ad feat: add CodeRabbit integration for AI-powered code reviews
This commit introduces the CodeRabbit service and its associated routes, enabling users to trigger, manage, and check the status of code reviews through a new API. Key features include:

- New routes for triggering code reviews, checking status, and stopping reviews.
- Integration with the CodeRabbit CLI for authentication and status checks.
- UI components for displaying code review results and settings management.
- Unit tests for the new code review functionality to ensure reliability.

This enhancement aims to streamline the code review process and leverage AI capabilities for improved code quality.
2026-01-24 21:10:33 +01:00
Shirone
327aef89a2 Merge pull request #562 from AutoMaker-Org/feature/v0.12.0rc-1768688900786-5ea1
refactor: standardize PR state representation across the application
2026-01-18 10:45:59 +00:00
Shirone
44e665f1bf fix: address pr comments 2026-01-18 00:22:27 +01:00
Shirone
5b1e0105f4 refactor: standardize PR state representation across the application
Updated the PR state handling to use a consistent uppercase format ('OPEN', 'MERGED', 'CLOSED') throughout the codebase. This includes changes to the worktree metadata interface, PR creation logic, and related tests to ensure uniformity and prevent potential mismatches in state representation.

Additionally, modified the GitHub PR fetching logic to retrieve all PR states, allowing for better detection of state changes.

This refactor enhances clarity and consistency in how PR states are managed and displayed.
2026-01-17 23:58:19 +01:00
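The standardization above can be sketched as a single uppercase union plus a normalizer applied at every boundary (stored worktree metadata, PR creation, fetched API responses). This is an illustrative reduction, not the repository's actual code; the `PrState` and `normalizePrState` names are assumptions.

```typescript
// Hypothetical sketch of a single canonical PR state representation.
type PrState = "OPEN" | "MERGED" | "CLOSED";

// Normalize any incoming casing ("open", " Merged ") to the canonical form,
// so comparisons elsewhere never mismatch on case.
function normalizePrState(raw: string): PrState {
  const upper = raw.trim().toUpperCase();
  if (upper === "OPEN" || upper === "MERGED" || upper === "CLOSED") {
    return upper;
  }
  throw new Error(`Unknown PR state: ${raw}`);
}
```

Routing every read and write through one normalizer is what prevents the "same state, different spelling" mismatches the commit describes.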
webdevcody
832d10e133 refactor: replace Loader2 with Spinner component across the application
This update standardizes the loading indicators by replacing all instances of Loader2 with the new Spinner component. The Spinner component provides a consistent look and feel for loading states throughout the UI, enhancing the user experience.

Changes include:
- Updated loading indicators in various components such as popovers, modals, and views.
- Ensured that the Spinner component is used with appropriate sizes for different contexts.

No functional changes were made; this is purely a visual and structural improvement.
2026-01-17 17:58:16 -05:00
DhanushSantosh
044c3d50d1 fix: mark dmg-license as optional dependency for cross-platform builds
dmg-license is a macOS-only package used for building DMG installers.
Moving it from devDependencies to optionalDependencies allows npm ci
to succeed on Linux and Windows without failing on platform checks.

macOS developers will still get the package when available.
Linux/Windows developers can now run npm ci without errors.

Fixes: npm ci failing on Linux with "EBADPLATFORM" error

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-18 01:28:14 +05:30
Shirone
a1de0a78a0 Merge pull request #545 from stefandevo/fix/sandbox-warning-persistence
fix: sandbox warning persistence and add env var option
2026-01-17 18:57:19 +00:00
Shirone
fef9639e01 Merge pull request #539 from stefandevo/fix/light-mode-agent-output
fix: respect theme in agent output modal and log viewer
2026-01-17 18:35:32 +00:00
Stefan de Vogelaere
aef479218d fix: use DEFAULT_FONT_VALUE for initial terminal font
The initial terminalState.fontFamily was set to a raw font string
that didn't match any option in TERMINAL_FONT_OPTIONS, causing the
dropdown to appear empty. Changed to use DEFAULT_FONT_VALUE sentinel.
2026-01-17 19:32:42 +01:00
Stefan de Vogelaere
ded5ecf4e9 refactor: reduce code duplication in font settings and sync logic
Address CodeRabbit review feedback:
- Create getEffectiveFont helper to deduplicate getEffectiveFontSans/Mono
- Extract getSettingsFieldValue and hasSettingsFieldChanged helpers
- Create reusable FontSelector component for font selection UI
- Refactor project-theme-section and appearance-section to use FontSelector
2026-01-17 19:30:00 +01:00
Stefan de Vogelaere
a01f299597 fix: resolve type errors after merging upstream v0.12.0rc
- Fix ThemeMode type casting in __root.tsx
- Use specRegeneration.create() instead of non-existent generateAppSpec
- Add missing keyboard shortcut entries for projectSettings and notifications
- Fix lucide-react type casts with intermediate unknown cast
- Remove unused pipelineConfig prop from ListRow component
- Align SettingsProject interface with Project type
- Fix defaultDeleteBranchWithWorktree property name
2026-01-17 19:20:49 +01:00
Stefan de Vogelaere
21c9e88a86 Merge remote-tracking branch 'upstream/v0.12.0rc' into fix/light-mode-agent-output 2026-01-17 19:10:49 +01:00
Shirone
af17f6e36f Merge pull request #535 from stefandevo/v0.12.0rc
feat: add font customization and 8 new themes
2026-01-17 18:06:04 +00:00
Stefan de Vogelaere
e69a2ad722 docs: add AUTOMAKER_SKIP_SANDBOX_WARNING env var documentation
Document the new environment variable in README.md and .env.example
2026-01-17 18:33:08 +01:00
DhanushSantosh
0480f6ccd6 fix: handle dynamic model IDs with slashes in the model name
isOpencodeModel was rejecting valid dynamic model IDs like
'openrouter/qwen/qwen3-14b:free' because it was splitting on all slashes
and expecting exactly 2 parts. This caused valid OpenCode models to be
treated as unknown, falling back to Claude.

Now correctly splits on the FIRST slash only, allowing model names
like 'qwen/qwen3-14b:free' to be recognized as valid.

Fixes: User selects openrouter/qwen/qwen3-14b:free → server falls back to Claude

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 21:13:47 +05:30
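The fix described above boils down to splitting on the first slash only. A minimal sketch, with an illustrative `splitModelId` helper standing in for the real `isOpencodeModel` internals:

```typescript
// Split a dynamic model ID on the FIRST slash only, so provider-prefixed
// IDs like "openrouter/qwen/qwen3-14b:free" keep their full model name.
function splitModelId(id: string): { provider: string; model: string } | null {
  const idx = id.indexOf("/");
  // No provider prefix, or nothing after the slash: not a dynamic model ID.
  if (idx <= 0 || idx === id.length - 1) return null;
  return { provider: id.slice(0, idx), model: id.slice(idx + 1) };
}
```

Splitting on all slashes and requiring exactly two parts (the old behavior) would reject any model name that itself contains a slash.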
DhanushSantosh
24042d20c2 fix: filter dynamic OpenCode models by enabled status in model selector
The phase model selector was showing ALL discovered dynamic models regardless
of whether they were enabled in settings. Now it filters dynamic models by
enabledDynamicModelIds, matching the behavior of Cursor models and making
the enable/disable setting meaningful.

Users can now:
- Disable models in settings they don't want to use
- See only enabled dynamic models in the model selector dropdown
- Have the "Select all" checkbox properly control which models appear

This ensures consistency: enabling/disabling models in settings affects
which models are available for feature execution.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 21:00:22 +05:30
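The filtering step the commit describes is small; a sketch under the assumption that enabled IDs arrive as a plain array (the `visibleDynamicModels` name is illustrative, `enabledDynamicModelIds` follows the commit message):

```typescript
// Only dynamic models whose IDs are enabled in settings reach the selector.
function visibleDynamicModels(
  discovered: string[],
  enabledDynamicModelIds: string[],
): string[] {
  const enabled = new Set(enabledDynamicModelIds);
  return discovered.filter((id) => enabled.has(id));
}
```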
DhanushSantosh
9c3b3a4104 fix: make dynamic models select-all checkbox respect search filters
The "Select all" checkbox for dynamic models was using the unfiltered models list,
causing the checkbox state to not reflect what users see when searching. Now it
correctly operates on the filtered models list so:

- Checkbox state matches the visible filtered models
- "Select all" only toggles models the user can see
- Indeterminate state shows if some filtered models are selected

This ensures the checkbox has a meaningful purpose when filtering/searching models.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 20:32:10 +05:30
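Deriving the tri-state checkbox from the *filtered* list, as the commit describes, can be sketched as a pure function (names are illustrative):

```typescript
type CheckboxState = "checked" | "unchecked" | "indeterminate";

// Compute select-all state from the models the user can currently see,
// not from the full unfiltered list.
function selectAllState(
  filteredModels: string[],
  enabledIds: Set<string>,
): CheckboxState {
  const selected = filteredModels.filter((id) => enabledIds.has(id)).length;
  if (filteredModels.length === 0 || selected === 0) return "unchecked";
  if (selected === filteredModels.length) return "checked";
  return "indeterminate";
}
```

Toggling would likewise operate only on `filteredModels`, so "Select all" never flips models hidden by the search box.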
Stefan de Vogelaere
17e2cdfc85 fix: sandbox warning persistence and add env var option
Fix race condition where sandbox warning appeared on every refresh
even after checking "Do not show again". The issue was that the
sandbox check effect ran before settings were hydrated from the
server, so skipSandboxWarning was always false (the default).

Changes:
- Add settingsLoaded to sandbox check dependencies to ensure the
  user's preference is loaded before checking
- Add AUTOMAKER_SKIP_SANDBOX_WARNING env var option to skip the
  warning entirely (useful for dev/CI environments)
2026-01-17 15:33:51 +01:00
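The race fix above reduces to "don't decide until settings have hydrated". A hypothetical pure form of that gate (the `settingsLoaded` and `skipSandboxWarning` names follow the commit; the accepted env-var values are an assumption):

```typescript
// Decide whether to show the sandbox warning. Returns false while settings
// are still hydrating, so the stored preference is never ignored.
function shouldShowSandboxWarning(opts: {
  settingsLoaded: boolean;
  skipSandboxWarning: boolean;
  envSkip: string | undefined; // AUTOMAKER_SKIP_SANDBOX_WARNING
}): boolean {
  if (!opts.settingsLoaded) return false; // wait for hydration (the race fix)
  if (opts.envSkip === "1" || opts.envSkip === "true") return false;
  return !opts.skipSandboxWarning;
}
```

In the real effect, `settingsLoaded` joining the dependency array is what re-runs the check once hydration completes.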
DhanushSantosh
466c34afd4 ci: improve release workflow artifact uploads
- Use explicit file patterns to exclude builder config/debug files (builder-*.yml, *.yaml)
- Include blockmap files for efficient delta updates in auto-update scenarios
- Ensure only production-ready artifacts are uploaded to GitHub releases

This prevents accidental inclusion of builder configuration files in the release assets.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-17 19:18:15 +05:30
Shirone
b9567f5904 Merge pull request #542 from stefandevo/fix/api-key-info-on-dev-restart
docs: add hint about AUTOMAKER_API_KEY env var to API key banner
2026-01-17 13:09:36 +00:00
Shirone
c2cf8ae892 Merge pull request #540 from stefandevo/fix/gh-not-in-git-folder
fix: stop repeated GitHub PR fetch warnings for non-GitHub repos
2026-01-17 13:08:28 +00:00
Stefan de Vogelaere
3aa3c10ea4 docs: add hint about AUTOMAKER_API_KEY env var to API key banner
When the dev server restarts, developers need to re-enter the API key
in the browser. While the key is persisted to ./data/.api-key, this
file may be missing in clean dev scenarios.

This adds a helpful tip to the API key banner informing developers
they can set AUTOMAKER_API_KEY environment variable for a persistent
API key during development, avoiding the need to re-enter it after
server restarts.
2026-01-17 13:53:34 +01:00
Stefan de Vogelaere
5cd4183a7b fix: use fresh timestamp when setting cache entry
Use Date.now() after checkGitHubRemote() completes instead of the
pre-captured timestamp to ensure accurate 5-minute TTL.
2026-01-17 12:36:33 +01:00
Stefan de Vogelaere
2d9e38ad99 fix: stop repeated GitHub PR fetch warnings for non-GitHub repos
When opening a git repository without a GitHub remote, the server logs
were spammed with warnings every 5 seconds during worktree polling:

  WARN [Worktree] Failed to fetch GitHub PRs: Command failed: gh pr list
  ... no git remotes found

This happened because fetchGitHubPRs() ran `gh pr list` without first
checking if the project has a GitHub remote configured.

Changes:
- Add per-project cache for GitHub remote status with 5-minute TTL
- Check cache before attempting to fetch PRs, skip silently if no remote
- Add forceRefreshGitHub parameter to clear cache on manual refresh
- Pass forceRefreshGitHub when user clicks the refresh worktrees button

This allows users to add a GitHub remote and immediately detect it by
clicking the refresh button, while preventing log spam during normal
polling for projects without GitHub remotes.
2026-01-17 12:32:42 +01:00
Shirone
93d73f6d26 Merge pull request #529 from AutoMaker-Org/feature/v0.12.0rc-1768603410796-o2fn
fix: UUID generation fail in docker env
2026-01-17 11:19:01 +00:00
Stefan de Vogelaere
5209395a74 fix: respect theme in agent output modal and log viewer
The Agent Output modal and LogViewer component had hardcoded dark zinc
colors that didn't adapt to light mode themes. Replaced all hardcoded
colors with semantic Tailwind classes (bg-popover, text-foreground,
text-muted-foreground, bg-muted, border-border) that automatically
respect the active theme.
2026-01-17 11:44:33 +01:00
DhanushSantosh
ef6b9ac2d2 fix: add --force flag to npm ci for platform-specific dependencies
npm ci without --force rejects platform-specific packages like dmg-license
which is macOS-only. The --force flag tells npm to proceed even when
platform constraints are violated.

This allows Linux containers to skip dmg-license and continue with the
install, matching the behavior we want for Docker development.
2026-01-17 15:53:31 +05:30
DhanushSantosh
92afbeb6bd fix: run npm install as root to avoid permission issues
The named Docker volume for node_modules is created with root ownership,
causing EACCES errors when npm tries to write as the automaker user.

Solution:
- Run npm ci as root (installation phase)
- Use --legacy-peer-deps to properly handle optional dependencies
- Fix permissions after install
- Run server process as automaker user for security

This eliminates permission denied errors during npm install in dev containers.
2026-01-17 15:36:50 +05:30
DhanushSantosh
bbdc11ce47 fix: improve docker-compose npm install permissions and use npm ci
Fixes permission denied errors when installing dependencies in Docker containers:

Changes:
- Remove stale node_modules directories before installing (fresh start)
- Use 'npm ci --force' instead of 'npm install --force' for deterministic installs
- Add chmod to ensure writable permissions on node_modules
- Properly fix directory ownership and permissions before install

This prevents EACCES errors when multiple processes try to write to node_modules
and handles lingering permission issues from previous failed container runs.
2026-01-17 15:30:21 +05:30
DhanushSantosh
545bf2045d fix: add --force flag to npm install in docker-compose files
Allow npm to install platform-specific devDependencies (like dmg-license
which is macOS-only) by skipping platform checks in Linux Docker containers.
This matches the behavior already used in CI workflows.

Fixes Docker container startup failure:
- docker-compose.dev.yml (full stack development)
- docker-compose.dev-server.yml (server-only with local Electron)

The --force flag allows npm to proceed with installation even when some
optional/platform-specific dependencies can't be installed on the current
platform.
2026-01-17 15:00:48 +05:30
DhanushSantosh
a0471098fa fix: use specific data-testid selectors in project switcher assertions
Replace generic getByRole('button', { name: /.../ }) selectors with specific
getByTestId('project-switcher-project-') to avoid strict mode
violations where the selector resolves to multiple elements (project switcher
button and sidebar button).

Fixes failing E2E tests:
- feature-manual-review-flow.spec.ts
- new-project-creation.spec.ts
- open-existing-project.spec.ts
2026-01-17 14:53:06 +05:30
Stefan de Vogelaere
3320b40d15 feat: align terminal font settings with appearance fonts
- Terminal font dropdown now uses mono fonts from UI font options
- Unified font list between appearance section and terminal settings
- Terminal font persisted to GlobalSettings for import/export support
- Aligned global terminal settings popover with per-terminal popover:
  - Same settings in same order (Font Size, Run on New Terminal, Font Family, Scrollback, Line Height, Screen Reader)
  - Consistent styling (Radix Select instead of native select)
- Added terminal padding (12px vertical, 16px horizontal) for readability
2026-01-17 10:18:11 +01:00
DhanushSantosh
bac5e1c220 Merge upstream/v0.12.0rc into feature/fedora-rpm-support
Resolved conflict in backlog-plan/common.ts:
- Kept local (stricter) validation: Array.isArray(parsed?.result?.changes)
- This ensures type safety for the changes array
2026-01-17 14:44:37 +05:30
DhanushSantosh
33fa138d21 feat: add docker group support with sg docker command
Improve Docker access handling by detecting and using 'sg docker' command
when the user is in the docker group but hasn't logged out yet. This allows
running docker commands without requiring a full session restart after
`usermod -aG docker $USER`.

Changes:
- Detect docker group access and fall back to sg docker -c when needed
- Export DOCKER_CMD variable for use throughout the script
- Update all docker compose and docker ps commands to use DOCKER_CMD
- Improve error messages to guide users on fixing docker access issues
2026-01-17 14:40:08 +05:30
DhanushSantosh
bc09a22e1f fix: extract app version from apps/ui/package.json instead of monorepo root
The start-automaker.sh script now correctly sources the app version (0.12.0)
from apps/ui/package.json instead of the monorepo version (1.0.0) from the
root package.json. This ensures the launcher displays the correct Automaker
application version.
2026-01-17 14:19:12 +05:30
Stefan de Vogelaere
b771b51842 fix: address code review feedback
- Fix git+ssh URL to git+https for @electron/node-gyp (build compatibility)
- Remove duplicate @fontsource packages from root package.json
- Refactor font state initialization to reduce code duplication
2026-01-17 09:15:35 +01:00
Stefan de Vogelaere
1a7bf27ead feat: add new themes, Zed fonts, and sort theme/font lists
New themes added:
- Dark: Ayu Dark, Ayu Mirage, Ember, Matcha
- Light: Ayu Light, One Light, Bluloco, Feather

Other changes:
- Bundle Zed Sans and Zed Mono fonts from zed-industries/zed-fonts
- Sort font options alphabetically (default first)
- Sort theme options alphabetically (Dark/Light first)
- Improve Ayu Dark text contrast for better readability
- Fix Matcha theme to have green undertone instead of blue
2026-01-17 09:15:35 +01:00
Stefan de Vogelaere
f3b00d0f78 feat: add global font settings with per-project override
- Add fontFamilySans and fontFamilyMono to GlobalSettings type
- Add global font state and actions to app store
- Update getEffectiveFontSans/Mono to fall back to global settings
- Add font selectors to global Settings → Appearance
- Add "Use Global Font" checkboxes in Project Settings → Theme
- Add fonts to settings sync and migration
- Include fonts in import/export JSON
2026-01-17 09:15:34 +01:00
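The fallback chain described above (project override → global setting → bundled default) is a one-liner once reduced to a pure function; a sketch with illustrative parameter names:

```typescript
// Resolve the effective font: per-project override wins, then the global
// setting, then the hardcoded default.
function getEffectiveFont(
  projectFont: string | undefined,
  globalFont: string | undefined,
  defaultFont: string,
): string {
  return projectFont ?? globalFont ?? defaultFont;
}
```

The same helper serves both sans and mono, which is what the later deduplication commit (`getEffectiveFont`) exploits.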
Stefan de Vogelaere
c747baaee2 fix: use sentinel value for default font selection
Radix UI Select doesn't allow empty string values, so use 'default'
as a sentinel value instead.
2026-01-17 09:15:34 +01:00
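Since Radix UI Select rejects `""` as an item value, the sentinel pattern above needs two small conversions at the component boundary. A sketch; `DEFAULT_FONT_VALUE` follows the commit, the helper names are illustrative:

```typescript
// Reserved string shown in the dropdown when no explicit font is stored.
const DEFAULT_FONT_VALUE = "default";

// Stored setting (undefined = inherit default) -> Select value.
const toSelectValue = (stored: string | undefined): string =>
  stored ?? DEFAULT_FONT_VALUE;

// Select value -> stored setting; the sentinel maps back to undefined.
const fromSelectValue = (selected: string): string | undefined =>
  selected === DEFAULT_FONT_VALUE ? undefined : selected;
```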
Stefan de Vogelaere
1322722db2 feat: add per-project font override settings
Add font selectors that allow per-project font customization for both
sans and mono fonts, independent of theme selection. Uses system fonts.

- Add fontFamilySans and fontFamilyMono to ProjectSettings and Project types
- Create ui-font-options.ts config with system font options
- Add store actions: setProjectFontSans, setProjectFontMono, getEffectiveFontSans, getEffectiveFontMono
- Apply font CSS variables in root component
- Add font selector UI in project-theme-section (Project Settings → Theme)
2026-01-17 09:15:34 +01:00
webdevcody
aa35eb3d3a feat: implement spec synchronization feature for improved project management
- Added a new `/sync` endpoint to synchronize the project specification with the current codebase and feature state.
- Introduced `syncSpec` function to handle the synchronization logic, updating technology stack, implemented features, and roadmap phases.
- Enhanced the running state management to track synchronization tasks alongside existing generation tasks.
- Updated UI components to support synchronization actions, including loading indicators and status updates.
- Improved logging and error handling for better visibility during sync operations.

These changes enhance project management capabilities by ensuring that the specification remains up-to-date with the latest code and feature developments.
2026-01-17 01:45:45 -05:00
webdevcody
616e2ef75f feat: add HOSTNAME and VITE_HOSTNAME support for improved server URL configuration
- Introduced `HOSTNAME` environment variable for user-facing URLs, defaulting to localhost.
- Updated server and client code to utilize `HOSTNAME` for constructing URLs instead of hardcoded localhost.
- Enhanced documentation in CLAUDE.md to reflect new configuration options.
- Added `VITE_HOSTNAME` for frontend API URLs, ensuring consistent hostname usage across the application.

These changes improve flexibility in server configuration and enhance the user experience by providing accurate URLs.
2026-01-16 22:40:36 -05:00
webdevcody
d98cae124f feat: enhance sidebar functionality for mobile and compact views
- Introduced a floating toggle button for mobile to show/hide the sidebar when collapsed.
- Updated sidebar behavior to completely hide on mobile when the new mobileSidebarHidden state is true.
- Added logic to conditionally render sidebar components based on screen size using the new useIsCompact hook.
- Enhanced SidebarHeader to include close and expand buttons for mobile views.
- Refactored CollapseToggleButton to hide in compact mode.
- Implemented HeaderActionsPanel for mobile actions in various views, improving accessibility and usability on smaller screens.

These changes improve the user experience on mobile devices by providing better navigation options and visibility controls.
2026-01-16 22:27:19 -05:00
Web Dev Cody
26aaef002d Merge pull request #537 from AutoMaker-Org/claude/issue-536-20260117-0132
feat: add configurable host binding for server and Vite dev server
2026-01-16 21:22:34 -05:00
claude[bot]
09bb59d090 feat: add configurable host binding for server and Vite dev server
- Add HOST environment variable (default: 0.0.0.0) to allow binding to specific network interfaces
- Update server to listen on configurable host instead of hardcoded localhost
- Update Vite dev server to respect HOST environment variable
- Enhanced server startup banner to display listening address
- Updated .env.example and CLAUDE.md documentation

Fixes #536

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-17 01:34:06 +00:00
Shirone
2f38ffe2d5 Merge pull request #532 from AutoMaker-Org/feature/v0.12.0rc-1768605251997-8ufb
fix: feature.json corruption on crash
2026-01-17 00:00:18 +00:00
Shirone
12fa9d858d Merge pull request #533 from AutoMaker-Org/feature/v0.12.0rc-1768605477061-fhv5
fix: Codex freezes
2026-01-16 23:59:16 +00:00
Shirone
c4e1a58e0d refactor: update timeout constants in CLI and Codex providers
- Removed redundant definition of CLI base timeout in `cli-provider.ts` and added a detailed comment explaining its purpose.
- Updated `codex-provider.ts` to use the imported `DEFAULT_TIMEOUT_MS` directly instead of an alias.
- Enhanced unit tests to ensure fallback behavior for invalid reasoning effort values in timeout calculations.
2026-01-17 00:52:57 +01:00
Shirone
8661f33c6d feat: implement atomic file writing and recovery utilities
- Introduced atomic write functionality for JSON files to ensure data integrity during writes.
- Added recovery mechanisms to read JSON files with fallback options for corrupted or missing files.
- Enhanced existing services to utilize atomic write and recovery features for improved reliability.
- Updated tests to cover new atomic writing and recovery scenarios, ensuring robust error handling and data consistency.
2026-01-17 00:50:51 +01:00
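The atomic-write-plus-recovery pattern above is the classic temp-file-and-rename approach. A minimal synchronous sketch (the real implementation is presumably async; function names are assumptions):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Write JSON to a temp file in the same directory, then rename over the
// target. rename() atomically replaces the file on POSIX, so a crash
// mid-write leaves the previous contents intact instead of a truncated file.
function writeJsonAtomic(filePath: string, data: unknown): void {
  const tmp = path.join(
    path.dirname(filePath),
    `.${path.basename(filePath)}.${process.pid}.tmp`,
  );
  fs.writeFileSync(tmp, JSON.stringify(data, null, 2), "utf8");
  fs.renameSync(tmp, filePath);
}

// Recovery read: fall back when the file is missing or corrupted.
function readJsonWithFallback<T>(filePath: string, fallback: T): T {
  try {
    return JSON.parse(fs.readFileSync(filePath, "utf8")) as T;
  } catch {
    return fallback;
  }
}
```

The temp file must live on the same filesystem as the target, otherwise the rename degrades to a non-atomic copy.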
Shirone
5c24ca2220 feat: implement dynamic timeout calculation for reasoning efforts in CLI and Codex providers
- Added `calculateReasoningTimeout` function to dynamically adjust timeouts based on reasoning effort levels.
- Updated CLI and Codex providers to utilize the new timeout calculation, addressing potential timeouts for high reasoning efforts.
- Enhanced unit tests to validate timeout behavior for various reasoning efforts, ensuring correct timeout values are applied.
2026-01-17 00:50:06 +01:00
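The dynamic timeout above can be sketched as a lookup of effort-level multipliers over a base timeout. The `calculateReasoningTimeout` name and the fallback-on-invalid-effort behavior follow the commits; the specific multiplier values are assumptions for illustration:

```typescript
const DEFAULT_TIMEOUT_MS = 10 * 60 * 1000; // CLI base timeout (value assumed)

// Higher reasoning effort -> proportionally longer timeout.
const EFFORT_MULTIPLIER: Record<string, number> = {
  low: 1,
  medium: 2,
  high: 4,
};

function calculateReasoningTimeout(effort: string | undefined): number {
  // Unknown or missing efforts fall back to the base timeout,
  // matching the fallback behavior the unit tests cover.
  const mult = effort !== undefined ? EFFORT_MULTIPLIER[effort] : undefined;
  return DEFAULT_TIMEOUT_MS * (mult ?? 1);
}
```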
webdevcody
14559354dd refactor: update sidebar navigation sections for clarity
- Added Notifications and Project Settings as standalone sections in the sidebar without labels for visual separation.
- Removed the previous 'Other' label to enhance the organization of navigation items.
2026-01-16 18:49:35 -05:00
webdevcody
3bf9dbd43a Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-16 18:39:31 -05:00
webdevcody
bd3999416b feat: implement notifications and event history features
- Added Notification Service to manage project-level notifications, including creation, listing, marking as read, and dismissing notifications.
- Introduced Event History Service to store and manage historical events, allowing for listing, retrieval, deletion, and replaying of events.
- Integrated notifications into the server and UI, providing real-time updates for feature statuses and operations.
- Enhanced sidebar and project switcher components to display unread notifications count.
- Created dedicated views for managing notifications and event history, improving user experience and accessibility.

These changes enhance the application's ability to inform users about important events and statuses, improving overall usability and responsiveness.
2026-01-16 18:37:11 -05:00
Shirone
cc9f7d48c8 fix: enhance authentication error handling in Claude usage service tests
- Updated test to send a specific authentication error pattern to the data callback.
- Triggered the exit handler to validate the handling of authentication errors.
- Improved error message expectations for better clarity during test failures.
2026-01-16 23:58:48 +01:00
Shirone
6bb0461be7 Merge pull request #527 from AutoMaker-Org/feature/v0.12.0rc-1768598412391-lnp7
feat: implement XML extraction utilities and enhance feature handling
2026-01-16 22:52:55 +00:00
Shirone
16ef026b38 refactor: Centralize UUID generation with fallback support 2026-01-16 23:49:36 +01:00
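A centralized UUID helper with a fallback (relevant to the Docker UUID failure fixed in PR #529 above) might look like this. A sketch only: preferring `crypto.randomUUID` and hand-rolling RFC 4122 v4 when it is unavailable is one plausible shape, not necessarily the repository's.

```typescript
import * as crypto from "node:crypto";

// Prefer the built-in randomUUID; fall back to a manual v4 built from
// random bytes for runtimes where it is missing or unusable.
function generateUUID(): string {
  if (typeof crypto.randomUUID === "function") {
    return crypto.randomUUID();
  }
  const b = crypto.randomBytes(16);
  b[6] = (b[6] & 0x0f) | 0x40; // version nibble -> 4
  b[8] = (b[8] & 0x3f) | 0x80; // variant bits -> 10xx
  const hex = b.toString("hex");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}
```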
Shirone
50ed405c4a fix: address pr comments 2026-01-16 23:41:23 +01:00
Web Dev Cody
5407e1a9ff Merge pull request #525 from stefandevo/feature/project-settings
feat: Separate Project Settings from Global Settings
2026-01-16 17:31:19 -05:00
Stefan de Vogelaere
5436b18f70 refactor: move Project Settings below Tools section in sidebar
- Remove Project Settings from Project section
- Add Project Settings as standalone section below Tools/GitHub
- Use empty label for visual separation without header
- Add horizontal separator line above sections without labels
- Rename to "Project Settings" for clarity
- Keep "Global Settings" at bottom of sidebar
2026-01-16 23:27:53 +01:00
Stefan de Vogelaere
8b7700364d refactor: move project settings to Project section, rename global settings
- Move "Settings" from Tools section to Project section in sidebar
- Rename bottom settings link from "Settings" to "Global Settings"
- Update keyboard shortcut description accordingly
2026-01-16 23:17:50 +01:00
Shirone
3bdf3cbb5c fix: improve branch name generation logic in BoardView and useBoardActions
- Updated the logic for auto-generating branch names to consistently use the primary branch (main/master) and avoid nested feature paths.
- Removed references to currentWorktreeBranch in favor of getPrimaryWorktreeBranch for better clarity and maintainability.
- Enhanced comments to clarify the purpose of branch name generation.
2026-01-16 23:14:22 +01:00
webdevcody
45d9c9a5d8 fix: adjust menu dimensions and formatting in start-automaker.sh
- Increased MENU_BOX_WIDTH and MENU_INNER_WIDTH for better layout.
- Updated printf statements in show_menu() for consistent spacing and alignment of menu options.
- Enhanced exit option formatting for improved readability.
2026-01-16 17:10:20 -05:00
Stefan de Vogelaere
6a23e6ce78 fix: address PR review feedback
- Fix race conditions when rapidly switching projects
  - Added cancellation logic to prevent stale responses from updating state
  - Both project settings and init script loading now properly cancelled on unmount

- Improve error handling in custom icon upload
  - Added toast notifications for validation errors (file type, file size)
  - Added toast notifications for upload success/failure
  - Handle network errors gracefully with user feedback
  - Handle file reader errors
2026-01-16 23:03:21 +01:00
Stefan de Vogelaere
4e53215104 chore: reset package-lock.json to match base branch 2026-01-16 23:03:21 +01:00
Stefan de Vogelaere
2899b6d416 feat: separate project settings from global settings
This PR introduces a new dedicated Project Settings screen accessible from
the sidebar, clearly separating project-specific settings from global
application settings.

- Added new route `/project-settings` with dedicated view
- Sidebar navigation item "Settings" in Tools section (Shift+S shortcut)
- Sidebar-based navigation matching global Settings pattern
- Sections: Identity, Worktrees, Theme, Danger Zone

**Moved to Project Settings:**
- Project name and icon customization
- Project-specific theme override
- Worktree isolation enable/disable (per-project override)
- Init script indicator visibility and auto-dismiss
- Delete branch by default preference
- Initialization script editor
- Delete project (Danger Zone)

**Remains in Global Settings:**
- Global theme (default for all projects)
- Global worktree isolation (default for new projects)
- Feature Defaults, Model Defaults
- API Keys, AI Providers, MCP Servers
- Terminal, Keyboard Shortcuts, Audio
- Account, Security, Developer settings

Both Theme and Worktree Isolation now follow a consistent override pattern:
1. Global Settings defines the default value
2. New projects inherit the global value
3. Project Settings can override for that specific project
4. Changing global setting doesn't affect projects with overrides

- Fixed: Changing global theme was incorrectly overwriting project themes
- Fixed: Project worktree setting not persisting across sessions
- Project settings now properly load from server on component mount

- Shell syntax editor: improved background contrast (bg-background)
- Shell syntax editor: removed distracting active line highlight
- Project Settings header matches Context/Memory views pattern

- `apps/ui/src/routes/project-settings.tsx`
- `apps/ui/src/components/views/project-settings-view/` (9 files)

- Global settings simplified (removed project-specific options)
- Sidebar navigation updated with project settings link
- App store: added project-specific useWorktrees state/actions
- Types: added projectSettings keyboard shortcut
- HTTP client: added missing project settings response fields
2026-01-16 23:03:21 +01:00
Kacper
b263cc615e feat: implement XML extraction utilities and enhance feature handling
- Introduced a new xml-extractor module with functions for XML parsing, including escaping/unescaping XML characters, extracting sections and elements, and managing implemented features.
- Added functionality to add, remove, update, and check for implemented features in the app_spec.txt file.
- Enhanced the create and update feature handlers to check for duplicate titles and trigger synchronization with app_spec.txt on status changes.
- Updated tests to cover new XML extraction utilities and feature handling logic, ensuring robust functionality and reliability.
2026-01-16 22:55:10 +01:00
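The escaping/unescaping half of such an xml-extractor module covers the five predefined XML entities. An illustrative sketch (function names assumed):

```typescript
const XML_ESCAPES: Record<string, string> = {
  "&": "&amp;",
  "<": "&lt;",
  ">": "&gt;",
  '"': "&quot;",
  "'": "&apos;",
};

function escapeXml(text: string): string {
  return text.replace(/[&<>"']/g, (ch) => XML_ESCAPES[ch]);
}

function unescapeXml(text: string): string {
  return text
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&quot;/g, '"')
    .replace(/&apos;/g, "'")
    .replace(/&amp;/g, "&"); // ampersand last, so "&amp;lt;" round-trips to "&lt;"
}
```

Ordering matters in both directions: escaping must handle `&` via a single pass (the regex does), and unescaping must resolve `&amp;` last, or doubly-escaped text corrupts.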
webdevcody
97b0028919 chore: update package versions to 0.12.0 and 0.12.0rc
- Updated the version in package.json for the main project to 0.12.0rc.
- Updated the version in apps/server/package.json and apps/ui/package.json to 0.12.0.
- Adjusted the version extraction logic in start-automaker.sh to reference the correct package.json path.
2026-01-16 16:48:43 -05:00
webdevcody
fd1727a443 Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-16 16:11:56 -05:00
webdevcody
597cb9bfae refactor: remove dev.mjs and integrate start-automaker.sh for development mode
- Deleted the dev.mjs script, consolidating development mode functionality into start-automaker.sh.
- Updated package.json to use start-automaker.sh for the "dev" script and added a "start" script for production mode.
- Enhanced start-automaker.sh with production build capabilities and improved argument parsing for better user experience.
- Removed launcher-utils.mjs as its functionality has been integrated into start-automaker.sh.
2026-01-16 16:11:53 -05:00
Kacper
c2430e5bd3 feat: enhance PTY handling for Windows in ClaudeUsageService and TerminalService
- Added detection for Electron environment to improve compatibility with Windows PTY processes.
- Implemented winpty fallback for ConPTY failures, ensuring robust terminal session creation in Electron and other contexts.
- Updated error handling to provide clearer messages for authentication and terminal access issues.
- Refined usage data detection logic to avoid false positives, improving the accuracy of usage reporting.

These changes aim to enhance the reliability and user experience of terminal interactions on Windows, particularly in Electron applications.
2026-01-16 21:53:53 +01:00
Shirone
68df8efd10 Merge pull request #522 from AutoMaker-Org/feature/v0.12.0rc-1768590871767-bl1c
feat: add filters to github issues view
2026-01-16 20:08:05 +00:00
Kacper
c0d64bc994 fix: address pr comments 2026-01-16 21:05:58 +01:00
Kacper
6237f1a0fe feat: add filtering capabilities to GitHub issues view
- Implemented a comprehensive filtering system for GitHub issues, allowing users to filter by state, labels, assignees, and validation status.
- Introduced a new IssuesFilterControls component for managing filter options.
- Updated the GitHubIssuesView to utilize the new filtering logic, enhancing the user experience by providing clearer visibility into matching issues.
- Added hooks for filtering logic and state management, ensuring efficient updates and rendering of filtered issues.

These changes aim to improve the usability of the issues view by enabling users to easily navigate and manage their issues based on specific criteria.
2026-01-16 20:56:23 +01:00
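Filtering by state, labels, and assignees reduces to a pure predicate over the issue list, which keeps the filtering hook easy to test and render efficiently. A sketch with illustrative field names — not the actual component types from the repo:

```typescript
// Assumed minimal issue shape for illustration.
interface IssueLike {
  state: "open" | "closed";
  labels: string[];
  assignees: string[];
}

interface IssueFilters {
  state?: "open" | "closed";
  labels?: string[];     // issue must carry every selected label
  assignees?: string[];  // issue must match at least one selected assignee
}

function filterIssues<T extends IssueLike>(issues: T[], filters: IssueFilters): T[] {
  return issues.filter((issue) => {
    if (filters.state && issue.state !== filters.state) return false;
    if (
      filters.labels &&
      filters.labels.length > 0 &&
      !filters.labels.every((label) => issue.labels.includes(label))
    ) {
      return false;
    }
    if (
      filters.assignees &&
      filters.assignees.length > 0 &&
      !filters.assignees.some((a) => issue.assignees.includes(a))
    ) {
      return false;
    }
    return true;
  });
}
```

Because the predicate is pure, the UI can derive the filtered list with a memo hook and show a "matching issues" count without extra state.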
Web Dev Cody
30c50d9b78 Merge pull request #513 from JZilla808/feature/tui-launcher
feat: add TUI launcher script for easy app startup
2026-01-16 14:51:36 -05:00
Web Dev Cody
03516ac09e Merge pull request #519 from WikiRik/WikiRik/audit-fix
chore: run npm audit fix
2026-01-16 14:43:27 -05:00
Shirone
5e5a136f1f Merge pull request #521 from AutoMaker-Org/feature/v0.12.0rc-1768591325146-pye6
fix: Signals not supported on Windows.
2026-01-16 19:39:00 +00:00
Kacper
98c50d44a4 test: mock Unix platform for SIGTERM behavior in ClaudeUsageService tests
Added a mock for the Unix platform in the SIGTERM test case to ensure proper behavior during testing on non-Windows systems. This change enhances the reliability of the tests by simulating the expected environment for process termination.
2026-01-16 20:38:29 +01:00
Kacper
0e9369816f fix: unify PTY process termination handling across platforms
Refactored the process termination logic in both ClaudeUsageService and TerminalService to use a centralized method for killing PTY processes. This ensures consistent handling of process termination across Windows and Unix-like systems, improving reliability and maintainability of the code.
2026-01-16 20:34:12 +01:00
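The centralized kill helper described here comes down to a small platform switch: Windows PTYs (ConPTY/winpty) do not accept POSIX signals, so a bare `kill()` closes the pseudo-terminal there, while Unix-likes get `SIGTERM`. A sketch — the interface and function names are illustrative, not the actual service code:

```typescript
// Minimal structural type covering what we need from a node-pty process (assumed shape).
interface KillablePty {
  kill(signal?: string): void;
}

// Hypothetical centralized helper shared by both services; platform is injectable
// for testing and would normally default to process.platform.
function killPtyProcess(pty: KillablePty, platform: string): void {
  if (platform === "win32") {
    pty.kill(); // Windows PTYs ignore POSIX signals; a bare kill() tears down the terminal
  } else {
    pty.kill("SIGTERM");
  }
}
```

Both ClaudeUsageService and TerminalService would call this single helper instead of duplicating the platform check.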
Kacper
be63a59e9c fix: improve process termination handling for Windows
Updated the process termination logic in ClaudeUsageService to handle Windows environments correctly. The code now checks the operating system and calls the appropriate kill method, ensuring consistent behavior across platforms.
2026-01-16 20:27:53 +01:00
Kacper
dbb84aba23 fix: ensure proper type handling for JSON parsing in loadBacklogPlan function
Updated the JSON parsing in the loadBacklogPlan function to explicitly cast the raw input as a string, improving type safety and preventing potential runtime errors when handling backlog plan data.
2026-01-16 20:09:01 +01:00
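The cast mentioned above addresses a common typing pitfall: some `readFile` wrappers are typed `string | Buffer` even when an encoding is passed, so `JSON.parse` fails strict type checking. A sketch of the normalization — the function name and the `BufferLike` stand-in are illustrative:

```typescript
// Stand-in for the slice of Node's Buffer interface used here.
interface BufferLike {
  toString(encoding?: string): string;
}

// Normalize string | Buffer to a string before JSON.parse, satisfying strict mode
// regardless of which readFile overload the compiler resolves.
function loadPlanJson(raw: string | BufferLike): unknown {
  const text = typeof raw === "string" ? raw : raw.toString("utf-8");
  return JSON.parse(text);
}
```

A runtime `typeof` check like this is safer than a bare `as string` cast, since it stays correct even if the wrapper really does return a `Buffer`.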
Shirone
9819d2e91c Merge pull request #514 from Seonfx/fix/510-spec-generation-json-fallback
fix: add JSON fallback for spec generation with custom API endpoints
2026-01-16 19:01:47 +00:00
Kacper
4c24ba5a8b feat: enhance TUI launcher with Docker/Electron process detection
- Add 4 launch options matching dev.mjs (Web, Electron, Docker Dev, Electron+Docker)
- Add arrow key navigation in menu with visual selection indicator
- Add cross-platform port conflict detection and resolution (Windows/Unix)
- Add Docker container detection with Stop/Restart/Attach/Cancel options
- Add Electron process detection when switching between modes
- Add centered, styled output for Docker build progress
- Add HUSKY=0 to docker-compose files to prevent permission errors
- Fix Windows/Git Bash compatibility (platform detection, netstat/taskkill)
- Fix bash arithmetic issue with set -e causing script to hang

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-16 19:58:32 +01:00
Rik Smale
e67cab1e07 chore: fix lockfile 2026-01-16 19:23:18 +01:00
Rik Smale
132b8f7529 chore: run npm audit fix 2026-01-16 19:18:16 +01:00
Seonfx
d651e9d8d6 fix: address PR review feedback for JSON fallback
- Simplify escapeXml() using 'str == null' check (type narrowing)
- Add validation for extracted JSON before passing to specToXml()
- Prevents runtime errors when JSON doesn't match SpecOutput schema

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-16 13:43:56 -04:00
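The `str == null` check relies on loose equality matching both `null` and `undefined`, which TypeScript narrows in a single step. A sketch of an `escapeXml` written this way — the exact entity list in the repo's helper may differ:

```typescript
// Escape a value for safe embedding in XML text or attributes.
// `== null` intentionally catches both null and undefined (type narrowing).
function escapeXml(str: string | null | undefined): string {
  if (str == null) return "";
  return str
    .replace(/&/g, "&amp;") // must run first so later entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}
```

Replacing `&` first is the one ordering constraint: doing it later would corrupt the entities produced by the other replacements.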
webdevcody
92f14508aa chore: update environment variable documentation for Anthropic API key
- Changed comments in docker-compose files to clarify that the ANTHROPIC_API_KEY is optional.
- Updated README to reflect changes in authentication setup, emphasizing integration with Claude Code CLI and removing outdated API key instructions.
- Improved clarity on authentication methods and streamlined the setup process for users.
2026-01-16 11:23:45 -05:00
DhanushSantosh
842b059fac fix: remove invalid local keyword in main script body
The 'local' keyword can only be used inside functions. Line 423 had
'local timeout_count=0' in the main script body which caused a bash error.
Removed the unused variable declaration.

Fixes: bash error 'local: can only be used in a function'

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 20:44:17 +05:30
DhanushSantosh
49f9ecc168 feat: enhance TUI launcher with production-ready features and documentation
Major improvements to start-automaker.sh launcher script:

**Architecture & Code Quality:**
- Organized into logical sections with clear separators (8 sections)
- Extracted all magic numbers into named constants at top
- Added comprehensive comments throughout

**Functionality:**
- Dynamic version extraction from package.json (no manual updates)
- Pre-flight checks: validates Node.js, npm, tput installed
- Platform detection: warns on Windows/unsupported systems
- Terminal size validation: checks min 70x20, displays warning if too small
- Input timeout: 30-second auto-timeout for hands-free operation
- History tracking: remembers last selected mode in ~/.automaker_launcher_history

**User Experience:**
- Added --help flag with comprehensive usage documentation
- Added --version flag showing version, Node.js, Bash info
- Added --check-deps flag to verify project dependencies
- Added --no-colors flag for terminals without color support
- Added --no-history flag to disable history tracking
- Enhanced cleanup function: restores cursor + echo, better signal handling
- Better error messages with actionable remediation steps
- Improved exit experience: "Goodbye! See you soon." message

**Robustness:**
- Real initialization checks (validates node_modules, build artifacts)
- Spinner uses frame counting instead of infinite loop (max 1.6s)
- Proper signal trap handling (EXIT, INT, TERM)
- Error recovery: respects --no-colors in pre-flight checks

**File Management:**
- Renamed from "start automaker.sh" to "start-automaker.sh" for consistency
- Made script more portable with SCRIPT_DIR detection

**Documentation:**
- Added section to README.md: "Interactive TUI Launcher"
- Documented all launch modes and options with examples
- Added feature list, history file location, usage tips
- Updated table of contents with TUI launcher section

Fixes: #511 (CI test failures resolved)
Improvements: Better UX for new users, production-ready error handling

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 20:27:53 +05:30
DhanushSantosh
e02fd889c2 fix: add --force flag to npm install in format-check workflow
Ensures dmg-license can be installed on Linux CI runners even though it's
a darwin-only package. The --force flag allows npm to skip platform mismatches.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:51:23 +05:30
DhanushSantosh
52a821d3bb fix: add --force flag to npm install in CI to allow platform-specific devDependencies
dmg-license is a darwin-only package required for macOS DMG building. The CI runs on
Linux, so npm install fails when trying to install a platform-specific devDependency.

Using --force allows npm to skip platform mismatches instead of erroring out, allowing
the build to proceed on non-darwin platforms where the darwin-only dependency will simply
be skipped.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:43:09 +05:30
DhanushSantosh
becd79f1e3 fix: add missing dmg-license dependency to fix release builds
The release workflow was failing for all platforms because macOS DMG
builder requires dmg-license. This single dependency was preventing
AppImage, DEB, RPM, DMG, and EXE artifacts from being built and
uploaded to any release since v0.7.3.

Includes lockfile updates and conversion of git+ssh:// URLs to https://
to prevent SSH key requirement issues in CI/CD and across environments.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:38:10 +05:30
DhanushSantosh
883ad2a04b fix(backlog-plan): clear running details in generate-plan finally block
Ensure running details are cleared when generation completes or fails, preventing state leaks.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
bf93cdf0c4 fix(backlog-plan): clear running details when stopping generation
Add setRunningDetails(null) to stop handler to prevent state leaks when aborting operation.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
c0ea1c736a fix(backlog-plan): clear running details and handle plan cleanup safely
- Add setRunningDetails(null) in finally block of generate handler to prevent state leaks
- Move clearBacklogPlan before response in apply handler and wrap in try-catch to prevent errors after headers sent

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
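The pattern in these three fixes is the same: clear the "running" state in a `finally` block so it cannot leak whether generation succeeds, throws, or is aborted. A synchronous sketch — the names are illustrative, and the setter stands in for the route's state helper:

```typescript
type RunningDetails = { task: string } | null;

// Run a unit of work while "running details" are set, guaranteeing cleanup.
function runWithCleanup<T>(
  work: () => T,
  setRunningDetails: (details: RunningDetails) => void
): T {
  setRunningDetails({ task: "generate-plan" });
  try {
    return work();
  } finally {
    // Executes on both normal return and thrown error, so stale
    // "running" state never leaks to later requests.
    setRunningDetails(null);
  }
}
```

The same shape applies to the async handlers in the commits above: `await` the work inside `try` and keep the `setRunningDetails(null)` in `finally`.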
DhanushSantosh
8b448b9481 fix: address CodeRabbit security and validation issues in Fedora docs and backlog plan
Documentation improvements:
- Fix GitHub URL placeholder issues in install-fedora.md - GitHub /latest/download/ endpoint
  doesn't support version substitution, use explicit download URL pattern instead
- Improve security in network troubleshooting section:
  - Change ping target from claude.ai (marketing site) to api.anthropic.com (actual API)
  - Remove unsafe 'echo \$ANTHROPIC_API_KEY' command that exposes secrets in shell history
  - Use safe API key check with conditional output instead

Code improvements:
- apps/server/src/routes/backlog-plan/common.ts: Add Array.isArray() validation
  for stored plan shape before returning it. Ensures changes is actually an array,
  not just truthy, preventing downstream runtime errors.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
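The `Array.isArray()` validation described for the stored plan can be expressed as a type guard; the shape below is assumed from the commit message, not the actual schema in `backlog-plan/common.ts`:

```typescript
// Guard against truthy-but-wrong stored data: `changes` must actually be an array,
// not merely present, before the plan is returned to callers.
function isValidStoredPlan(value: unknown): value is { changes: unknown[] } {
  return (
    typeof value === "object" &&
    value !== null &&
    Array.isArray((value as { changes?: unknown }).changes)
  );
}
```

With the guard in place, downstream code can iterate `plan.changes` without re-checking its type at every call site.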
DhanushSantosh
12f2b9f2b3 fix: remove invalid license property from RPM configuration
The 'license' property is not supported by electron-builder's RPM schema.
Valid RPM properties are: afterInstall, afterRemove, appArmorProfile,
artifactName, category, compression, depends, description, desktop,
executableArgs, fpm, icon, maintainer, mimeTypes, packageCategory,
packageName, publish, synopsis, vendor.

This fix allows electron-builder to proceed to the RPM build stage.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
DhanushSantosh
017ff3ca0a fix: resolve TypeScript error in backlog plan loading
Fix type mismatch in loadBacklogPlan where secureFs.readFile with 'utf-8'
encoding returns union type string | Buffer, causing JSON.parse to fail type checking.
Cast raw to string to satisfy TypeScript strict mode.

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 19:37:21 +05:30
Seonfx
bcec178bbe fix: add JSON fallback for spec generation with custom API endpoints
Fixes spec generation failure when using custom API endpoints (e.g., GLM proxy)
that don't support structured output. The AI returns JSON instead of XML, but
the fallback parser only looked for XML tags.

Changes:
- escapeXml: Handle undefined/null values gracefully (converts to empty string)
- generate-spec: Add JSON extraction fallback when XML tags aren't found
  - Reuses existing extractJson() utility (already used for Cursor models)
  - Converts extracted JSON to XML using specToXml()

Closes #510

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-16 08:37:53 -04:00
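The fallback flow — try the expected XML tags first, then fall back to extracting a JSON object from the raw response — can be sketched as follows. The function names and the `<spec>` tag are illustrative; the repo reuses its own `extractJson()` and `specToXml()` utilities:

```typescript
// Hypothetical JSON extractor: grab the outermost {...} span and try to parse it.
function extractJsonBlock(text: string): unknown | null {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(text.slice(start, end + 1));
  } catch {
    return null;
  }
}

// Prefer structured XML; fall back to JSON for endpoints (e.g. proxies)
// that ignore structured-output instructions.
function parseSpecResponse(
  raw: string
): { kind: "xml" | "json"; value: string | unknown } | null {
  const xml = raw.match(/<spec>[\s\S]*<\/spec>/);
  if (xml) return { kind: "xml", value: xml[0] };
  const json = extractJsonBlock(raw);
  return json === null ? null : { kind: "json", value: json };
}
```

In the real route, the extracted JSON would additionally be validated against the spec schema before conversion, as the review-feedback commit above notes.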
Jay Zhou
e3347c7b9c feat: add TUI launcher script for easy app startup
Add a beautiful terminal user interface (TUI) script that provides an
interactive menu for launching Automaker in different modes:

- [1] Web Browser mode (localhost:3007)
- [2] Desktop App (Electron)
- [3] Desktop + Debug (Electron with DevTools)
- [Q] Exit

Features:
- ASCII art logo with gradient colors
- Centered, responsive layout that adapts to terminal size
- Animated spinner during launch sequence
- Cross-shell compatibility (bash/zsh)
- Clean exit handling with cursor restoration

This provides a more user-friendly alternative to remembering
npm commands, especially for new users getting started with
the project.
2026-01-16 03:34:47 -08:00
DhanushSantosh
6529446281 feat: add Fedora/RHEL RPM package support with comprehensive documentation
Add native RPM package building for Fedora-based distributions:
- Extend electron-builder configuration to include RPM target
- Add rpm-build installation to GitHub Actions CI/CD workflow
- Update artifact upload patterns to include .rpm files
- Declare proper RPM dependencies (gtk3, libnotify, nss, etc.)
- Use xz compression for optimal package size

Documentation:
- Update README.md with Fedora/RHEL installation instructions
- Create comprehensive docs/install-fedora.md guide covering:
  - Installation methods (dnf/yum, direct URL)
  - System requirements and capabilities
  - Configuration and troubleshooting
  - SELinux handling and firewall rules
  - Performance tips and security considerations
  - Building from source
- Support for Fedora 39+, RHEL 9+, Rocky Linux, AlmaLinux

End-to-end support enables Fedora users to install Automaker via:
  sudo dnf install ./Automaker-<version>-x86_64.rpm

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-16 12:31:30 +05:30
webdevcody
379551c40e feat: add JSON import/export functionality in settings view
- Introduced a new ImportExportDialog component for managing settings import and export via JSON.
- Integrated JsonSyntaxEditor for editing JSON settings with syntax highlighting.
- Updated SettingsView to include the import/export dialog and associated state management.
- Enhanced SettingsHeader with an import/export button for easy access.

These changes aim to improve user experience by allowing seamless transfer of settings between installations.
2026-01-16 00:34:59 -05:00
webdevcody
7465017600 feat: implement server logging and event hook features
- Introduced server log level configuration and HTTP request logging settings, allowing users to control the verbosity of server logs and enable or disable request logging at runtime.
- Added an Event Hook Service to execute custom actions based on system events, supporting shell commands and HTTP webhooks.
- Enhanced the UI with new sections for managing server logging preferences and event hooks, including a dialog for creating and editing hooks.
- Updated global settings to include server log level and request logging options, ensuring persistence across sessions.

These changes aim to improve debugging capabilities and provide users with customizable event-driven actions within the application.
2026-01-16 00:21:49 -05:00
webdevcody
874c5a36de feat: enable auto loading of ClaudeMd by default
- Updated the default setting for autoLoadClaudeMd from false to true in the global settings. This change aims to enhance user experience by automatically loading ClaudeMd, streamlining the workflow for users.
2026-01-15 23:54:08 -05:00
webdevcody
03436103d1 feat: implement backlog plan management and UI enhancements
- Added functionality to save, clear, and load backlog plans within the application.
- Introduced a new API endpoint for clearing saved backlog plans.
- Enhanced the backlog plan dialog to allow users to review and apply changes to their features.
- Integrated dependency management features in the UI, allowing users to select parent and child dependencies for features.
- Improved the graph view with options to manage plans and visualize dependencies effectively.
- Updated the sidebar and settings to include provider visibility toggles for better user control over model selection.

These changes aim to enhance the user experience by providing robust backlog management capabilities and improving the overall UI for feature planning.
2026-01-15 22:21:46 -05:00
Shirone
cb544e0011 Merge pull request #505 from AutoMaker-Org/feature/v0.12.0rc-1768509532254-tt6z
fix: "Remove Project" button not working on right click of the project
2026-01-15 22:05:24 +00:00
Shirone
df23c9e6ab Merge pull request #507 from AutoMaker-Org/feat/improve-ideation-view
feat: improve ideation view
2026-01-15 22:03:50 +00:00
Shirone
52cc82fb3f feat: enhance ideation dashboard and prompt components
- Added a helper function to map priority levels to badge variants in the IdeationDashboard.
- Improved UI elements in SuggestionCard for better spacing and visual hierarchy.
- Updated PromptCategoryGrid and PromptList components with enhanced hover effects and layout adjustments for a more responsive design.
- Refined button styles and interactions for better user experience across components.

These changes aim to improve the overall usability and aesthetics of the ideation view.
2026-01-15 23:01:12 +01:00
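A priority-to-badge-variant helper like the one mentioned is typically a small exhaustive mapping. The variant names below follow common shadcn/ui conventions and are an assumption, not the project's actual Badge API:

```typescript
// Assumed badge variants; the real union comes from the UI's Badge component.
type BadgeVariant = "default" | "secondary" | "destructive" | "outline";

// Map a priority label to a visual weight: high priority reads as urgent,
// unknown values fall back to a neutral outline.
function priorityToBadgeVariant(priority: string): BadgeVariant {
  switch (priority.toLowerCase()) {
    case "high":
      return "destructive";
    case "medium":
      return "default";
    case "low":
      return "secondary";
    default:
      return "outline";
  }
}
```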
Shirone
d9571bfb8d Merge pull request #506 from AutoMaker-Org/feature/v0.12.0rc-1768509904121-pjft
feat: add discard all functionality to ideation view
2026-01-15 21:39:54 +00:00
Shirone
07d800b589 feat: add discard all functionality to ideation view
- Introduced a new button in the IdeationHeader for discarding all ideas when in dashboard mode.
- Implemented state management for discard readiness and count in IdeationView.
- Added confirmation dialog for discarding ideas in IdeationDashboard.
- Enhanced bulk action readiness checks to include discard operations.

This update improves user experience by allowing bulk discarding of ideas with confirmation, ensuring actions are intentional.
2026-01-15 22:37:26 +01:00
Shirone
ec042de69c fix: streamline context menu behavior for project removal dialog
- Ensure the context menu closes consistently after the confirmation dialog, regardless of user action.
- Reset confirmation state upon dialog closure to prevent unintended interactions.
2026-01-15 22:20:30 +01:00
Shirone
585ae32c32 fix: Prevent race condition in project removal dialog cleanup 2026-01-15 22:15:16 +01:00
Shirone
a89ba04109 fix: project removal not being executed
- Prevent context menu from closing when a confirmation dialog is open.
- Add success toast notification upon project removal.
- Refactor event handlers to account for dialog state, improving user experience.
2026-01-15 22:06:35 +01:00
Shirone
05a3b95d75 Merge pull request #501 from AutoMaker-Org/feature/v0.11.0rc-1768426435282-1ogl
feat: centralize prompts and add customization UI for App Spec, Context, Suggestions, Tasks
2026-01-15 20:20:56 +00:00
Shirone
0e269ca15d fix: update outdated server unit tests
- auto-mode-service-planning.test.ts: Add taskExecutionPrompts argument
  to buildFeaturePrompt calls, update test for implementation instructions
- claude-usage-service.test.ts: Skip deprecated Mac tests (service now
  uses PTY for all platforms), rename Windows tests to PTY tests, update
  to use process.cwd() instead of home directory
- claude-provider.test.ts: Add missing model parameter to environment
  variable passthrough tests

All tests now pass (1093 passed, 23 skipped).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:16:46 +01:00
Shirone
fd03cb4afa refactor: split prompt customization into multiple files
Split prompt-customization-section.tsx into focused modules:
- types.ts (51 lines) - Type definitions
- tab-configs.ts (448 lines) - Configuration data for all tabs
- components.tsx (159 lines) - Reusable Banner, PromptField, PromptFieldList
- prompt-customization-section.tsx (176 lines) - Main component

Benefits:
- Main component reduced from ~810 to 176 lines
- Clear separation of concerns
- Easier to find and modify specific parts
- Configuration data isolated for easy updates

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:07:38 +01:00
Shirone
d6c5c93fe5 refactor: use data-driven configuration for prompt customization UI
- Replace repetitive JSX with TAB_CONFIGS array defining all tabs and fields
- Create reusable Banner component for info/warning banners
- Create PromptFieldList component for rendering fields from config
- Support nested sections (like Auto Mode's Template Prompts section)
- Reduce file from ~950 lines to ~810 lines (-15% code)

Benefits:
- Adding new prompt tabs/fields is now declarative (just add to config)
- Consistent structure enforced by TypeScript interfaces
- Much easier to maintain and extend

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:03:51 +01:00
Shirone
1abf219230 refactor: create reusable PromptTabContent component and add {{count}} placeholder
- Create PromptTabContent reusable component in prompt-customization-section.tsx
- Update all tabs (Agent, Commit Message, Title Generation, Ideation, App Spec,
  Context Description, Suggestions, Task Execution) to use the new component
- Add {{count}} placeholder to DEFAULT_SUGGESTIONS_SYSTEM_PROMPT for dynamic
  suggestion count

Addresses PR review comments from Gemini.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 21:00:32 +01:00
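A `{{count}}` placeholder implies a template-substitution step when the prompt is resolved at runtime. A minimal sketch — the real resolver may support more placeholder syntax than this:

```typescript
// Replace {{name}} placeholders with provided values; unknown placeholders
// are left intact so a typo is visible rather than silently blanked.
function applyPlaceholders(
  template: string,
  vars: Record<string, string | number>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) =>
    key in vars ? String(vars[key]) : match
  );
}
```

Usage: resolving `"Generate {{count}} suggestions"` with `{ count: 5 }` yields a prompt with the dynamic suggestion count baked in.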
Shirone
3a2ba6dbfe feat: connect Task Execution prompts to auto-mode-service
Update auto-mode-service.ts to use centralized Task Execution prompts
from settings, making all 9 task execution prompts customizable via UI:

- buildFeaturePrompt: uses implementationInstructions and
  playwrightVerificationInstructions from settings
- buildTaskPrompt: uses taskPromptTemplate with variable substitution
- buildPipelineStepPrompt: updated to pass prompts through
- executeFeatureWithContext: uses resumeFeatureTemplate
- resolvePlanApproval recovery: uses continuationAfterApprovalTemplate
- Multi-agent continuation: uses continuationAfterApprovalTemplate
- recordLearningsFromFeature: uses learningExtractionSystemPrompt
  and learningExtractionUserPromptTemplate

All 12 prompt categories are now fully customizable from the UI.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:54:26 +01:00
Shirone
8fa8ba0a16 fix: address PR comments and complete prompt centralization
- Fix inline type imports in defaults.ts (move to top-level imports)
- Update ideation-service.ts to use centralized prompts from settings
- Update generate-title.ts to use centralized prompts
- Update validate-issue.ts to use centralized prompts
- Clean up validation-schema.ts (prompts already centralized)
- Minor server index cleanup

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:31:19 +01:00
Shirone
285f526e0c feat: centralize prompts and add customization UI for App Spec, Context, Suggestions, Tasks
- Add 4 new prompt type interfaces (AppSpecPrompts, ContextDescriptionPrompts,
  SuggestionsPrompts, TaskExecutionPrompts) with resolved types
- Add default prompts for all new categories to @automaker/prompts/defaults.ts
- Add merge functions for new prompt categories in merge.ts
- Update settings-helpers.ts getPromptCustomization() to return all 12 categories
- Update server routes (generate-spec, generate-features-from-spec, describe-file,
  describe-image, generate-suggestions) to use centralized prompts
- Add 4 new tabs in prompt customization UI (App Spec, Context, Suggestions, Tasks)
- Fix Ideation tab layout using grid-cols-4 for even distribution

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 20:13:14 +01:00
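A merge function for a prompt category typically overlays non-empty user overrides onto the defaults, so a blank field in settings falls back rather than erasing the default. A sketch of the idea — the real functions in `merge.ts` are per-category and typed against the new interfaces:

```typescript
// Merge user prompt overrides over defaults; only non-empty strings win,
// so clearing a field in the UI restores the default prompt.
function mergePrompts(
  defaults: Record<string, string>,
  overrides: Partial<Record<string, string>>
): Record<string, string> {
  const merged = { ...defaults };
  for (const [key, value] of Object.entries(overrides)) {
    if (typeof value === "string" && value.trim() !== "") {
      merged[key] = value;
    }
  }
  return merged;
}
```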
webdevcody
bd68b497ac Merge branch 'v0.12.0rc' of github.com:AutoMaker-Org/automaker into v0.12.0rc 2026-01-15 13:14:19 -05:00
webdevcody
06b047cfcb feat: implement bulk feature verification and enhance selection mode
- Added functionality for bulk verifying features in the BoardView, allowing users to mark multiple features as verified at once.
- Introduced a selection target mechanism to differentiate between 'backlog' and 'waiting_approval' features during selection mode.
- Updated the KanbanCard and SelectionActionBar components to support the new selection target logic, improving user experience for bulk actions.
- Enhanced the UI to provide appropriate actions based on the current selection target, including verification options for waiting approval features.
2026-01-15 13:14:15 -05:00
DhanushSantosh
c585cee12f feat: add dynamic usage status icon and tab-aware updates to usage button
- Add provider icon (Anthropic/OpenAI) that displays based on active tab
- Icon color reflects usage status (green/orange/red)
- Progress bar and stale indicator update dynamically when switching tabs
- Shows Claude metrics when Claude tab is active, Codex metrics when Codex tab is active

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2026-01-15 21:05:35 +05:30
DhanushSantosh
241fd0b252 Merge remote-tracking branch 'upstream/v0.12.0rc' into patchcraft 2026-01-15 19:38:37 +05:30
DhanushSantosh
164acc1b4e chore: update package-lock.json 2026-01-15 19:38:18 +05:30
358 changed files with 28528 additions and 6027 deletions


@@ -41,7 +41,8 @@ runs:
 # Use npm install instead of npm ci to correctly resolve platform-specific
 # optional dependencies (e.g., @tailwindcss/oxide, lightningcss binaries)
 # Skip scripts to avoid electron-builder install-app-deps which uses too much memory
-run: npm install --ignore-scripts
+# Use --force to allow platform-specific dev dependencies like dmg-license on non-darwin platforms
+run: npm install --ignore-scripts --force
 - name: Install Linux native bindings
   shell: bash


@@ -25,7 +25,7 @@ jobs:
 cache-dependency-path: package-lock.json
 - name: Install dependencies
-  run: npm install --ignore-scripts
+  run: npm install --ignore-scripts --force
 - name: Check formatting
   run: npm run format:check


@@ -35,6 +35,11 @@ jobs:
 with:
   check-lockfile: 'true'
+- name: Install RPM build tools (Linux)
+  if: matrix.os == 'ubuntu-latest'
+  shell: bash
+  run: sudo apt-get update && sudo apt-get install -y rpm
 - name: Build Electron app (macOS)
   if: matrix.os == 'macos-latest'
   shell: bash
@@ -73,7 +78,7 @@ jobs:
 uses: actions/upload-artifact@v4
 with:
   name: linux-builds
-  path: apps/ui/release/*.{AppImage,deb}
+  path: apps/ui/release/*.{AppImage,deb,rpm}
   retention-days: 30
 upload:
@@ -104,8 +109,8 @@ jobs:
 uses: softprops/action-gh-release@v2
 with:
   files: |
-    artifacts/macos-builds/*
-    artifacts/windows-builds/*
-    artifacts/linux-builds/*
+    artifacts/macos-builds/*.{dmg,zip,blockmap}
+    artifacts/windows-builds/*.{exe,blockmap}
+    artifacts/linux-builds/*.{AppImage,deb,rpm,blockmap}
 env:
   GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -166,7 +166,10 @@ Use `resolveModelString()` from `@automaker/model-resolver` to convert model ali
 ## Environment Variables
 - `ANTHROPIC_API_KEY` - Anthropic API key (or use Claude Code CLI auth)
+- `HOST` - Host to bind server to (default: 0.0.0.0)
+- `HOSTNAME` - Hostname for user-facing URLs (default: localhost)
 - `PORT` - Server port (default: 3008)
 - `DATA_DIR` - Data storage directory (default: ./data)
 - `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
 - `AUTOMAKER_MOCK_AGENT=true` - Enable mock agent mode for CI testing
+- `VITE_HOSTNAME` - Hostname for frontend API URLs (default: localhost)

README.md (159 changed lines)

@@ -28,6 +28,7 @@
- [Quick Start](#quick-start) - [Quick Start](#quick-start)
- [How to Run](#how-to-run) - [How to Run](#how-to-run)
- [Development Mode](#development-mode) - [Development Mode](#development-mode)
- [Interactive TUI Launcher](#interactive-tui-launcher-recommended-for-new-users)
- [Building for Production](#building-for-production) - [Building for Production](#building-for-production)
- [Testing](#testing) - [Testing](#testing)
- [Linting](#linting) - [Linting](#linting)
@@ -101,11 +102,9 @@ In the Discord, you can:
### Prerequisites ### Prerequisites
- **Node.js 18+** (tested with Node.js 22) - **Node.js 22+** (required: >=22.0.0 <23.0.0)
- **npm** (comes with Node.js) - **npm** (comes with Node.js)
- **Authentication** (choose one): - **[Claude Code CLI](https://code.claude.com/docs/en/overview)** - Install and authenticate with your Anthropic subscription. Automaker integrates with your authenticated Claude Code CLI to access Claude models.
- **[Claude Code CLI](https://code.claude.com/docs/en/overview)** (recommended) - Install and authenticate, credentials used automatically
- **Anthropic API Key** - Direct API key for Claude Agent SDK ([get one here](https://console.anthropic.com/))
### Quick Start ### Quick Start
@@ -117,30 +116,14 @@ cd automaker
# 2. Install dependencies # 2. Install dependencies
npm install npm install
# 3. Build shared packages (can be skipped - npm run dev does it automatically) # 3. Start Automaker
npm run build:packages
# 4. Start Automaker
npm run dev npm run dev
# Choose between: # Choose between:
# 1. Web Application (browser at localhost:3007) # 1. Web Application (browser at localhost:3007)
# 2. Desktop Application (Electron - recommended) # 2. Desktop Application (Electron - recommended)
``` ```
**Authentication Setup:** On first run, Automaker will automatically show a setup wizard where you can configure authentication. You can choose to: **Authentication:** Automaker integrates with your authenticated Claude Code CLI. Make sure you have [installed and authenticated](https://code.claude.com/docs/en/quickstart) the Claude Code CLI before running Automaker. Your CLI credentials will be detected automatically.
- Use **Claude Code CLI** (recommended) - Automaker will detect your CLI credentials automatically
- Enter an **API key** directly in the wizard
If you prefer to set up authentication before running (e.g., for headless deployments or CI/CD), you can set it manually:
```bash
# Option A: Environment variable
export ANTHROPIC_API_KEY="sk-ant-..."
# Option B: Create .env file in project root
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
```
**For Development:** `npm run dev` starts the development server with Vite live reload and hot module replacement for fast refresh and instant updates as you make changes. **For Development:** `npm run dev` starts the development server with Vite live reload and hot module replacement for fast refresh and instant updates as you make changes.
@@ -179,6 +162,40 @@ npm run dev:electron:wsl:gpu
npm run dev:web npm run dev:web
``` ```
### Interactive TUI Launcher (Recommended for New Users)
For a user-friendly interactive menu, use the built-in TUI launcher script:
```bash
# Show interactive menu with all launch options
./start-automaker.sh
# Or launch directly without menu
./start-automaker.sh web # Web browser
./start-automaker.sh electron # Desktop app
./start-automaker.sh electron-debug # Desktop + DevTools
# Additional options
./start-automaker.sh --help # Show all available options
./start-automaker.sh --version # Show version information
./start-automaker.sh --check-deps # Verify project dependencies
./start-automaker.sh --no-colors # Disable colored output
./start-automaker.sh --no-history # Don't remember last choice
```
**Features:**
- 🎨 Beautiful terminal UI with gradient colors and ASCII art
- Interactive menu (press 1-3 to select, Q to exit)
- 💾 Remembers your last choice
- Pre-flight checks (validates Node.js, npm, dependencies)
- 📏 Responsive layout (adapts to terminal size)
- 30-second timeout for hands-free selection
- 🌐 Cross-shell compatible (bash/zsh)
**History File:**
Your last selected mode is saved in `~/.automaker_launcher_history` for quick re-runs.
### Building for Production
#### Web Application
@@ -197,11 +214,30 @@ npm run build:electron
# Platform-specific builds
npm run build:electron:mac # macOS (DMG + ZIP, x64 + arm64)
npm run build:electron:win # Windows (NSIS installer, x64)
-npm run build:electron:linux # Linux (AppImage + DEB, x64)
+npm run build:electron:linux # Linux (AppImage + DEB + RPM, x64)
# Output directory: apps/ui/release/
```
**Linux Distribution Packages:**
- **AppImage**: Universal format, works on any Linux distribution
- **DEB**: Ubuntu, Debian, Linux Mint, Pop!\_OS
- **RPM**: Fedora, RHEL, Rocky Linux, AlmaLinux, openSUSE
**Installing on Fedora/RHEL:**
```bash
# Download the RPM package
wget https://github.com/AutoMaker-Org/automaker/releases/latest/download/Automaker-<version>-x86_64.rpm
# Install with dnf (Fedora)
sudo dnf install ./Automaker-<version>-x86_64.rpm
# Or with yum (RHEL/CentOS)
sudo yum localinstall ./Automaker-<version>-x86_64.rpm
```
#### Docker Deployment
Docker provides the most secure way to run Automaker by isolating it from your host filesystem.
@@ -220,16 +256,9 @@ docker-compose logs -f
docker-compose down
```
-##### Configuration
+##### Authentication
-Create a `.env` file in the project root if using API key authentication:
+Automaker integrates with your authenticated Claude Code CLI. To use CLI authentication in Docker, mount your Claude CLI config directory (see [Claude CLI Authentication](#claude-cli-authentication) below).
```bash
# Optional: Anthropic API key (not needed if using Claude CLI authentication)
ANTHROPIC_API_KEY=sk-ant-...
```
**Note:** Most users authenticate via Claude CLI instead of API keys. See [Claude CLI Authentication](#claude-cli-authentication-optional) below.
##### Working with Projects (Host Directory Access)
@@ -243,9 +272,9 @@ services:
- /path/to/your/project:/projects/your-project
```
-##### Claude CLI Authentication (Optional)
+##### Claude CLI Authentication
-To use Claude Code CLI authentication instead of an API key, mount your Claude CLI config directory:
+Mount your Claude CLI config directory to use your authenticated CLI credentials:
```yaml
services:
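  # Illustrative sketch only (not part of this diff): one way the mount could
  # look, assuming the Claude CLI keeps its config under ~/.claude on the host
  # and the container runs as root. Adjust paths to your actual setup.
  automaker:
    volumes:
      - ~/.claude:/root/.claude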
@@ -343,10 +372,6 @@ npm run lint
### Environment Configuration
#### Authentication (if not using Claude Code CLI)
- `ANTHROPIC_API_KEY` - Your Anthropic API key for Claude Agent SDK (not needed if using Claude Code CLI)
#### Optional - Server
- `PORT` - Server port (default: 3008)
@@ -357,49 +382,22 @@ npm run lint
- `AUTOMAKER_API_KEY` - Optional API authentication for the server
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
-- `CORS_ORIGIN` - CORS policy (default: \*)
+- `CORS_ORIGIN` - CORS allowed origins (comma-separated list; defaults to localhost only)
#### Optional - Development
- `VITE_SKIP_ELECTRON` - Skip Electron in dev mode
- `OPEN_DEVTOOLS` - Auto-open DevTools in Electron
- `AUTOMAKER_SKIP_SANDBOX_WARNING` - Skip sandbox warning dialog (useful for dev/CI)
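For illustration (not part of this diff), these optional settings could be combined in a `.env` file — the values below are arbitrary examples, not defaults:

```bash
# Optional server settings (example values)
PORT=3008
AUTOMAKER_API_KEY=dev-local-key
ALLOWED_ROOT_DIRECTORY=/home/me/projects
CORS_ORIGIN=http://localhost:3007,http://localhost:3008  # comma-separated origins

# Optional development settings
VITE_SKIP_ELECTRON=true
AUTOMAKER_SKIP_SANDBOX_WARNING=true
```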
### Authentication Setup
-#### Option 1: Claude Code CLI (Recommended)
+Automaker integrates with your authenticated Claude Code CLI and uses your Anthropic subscription.
Install and authenticate the Claude Code CLI following the [official quickstart guide](https://code.claude.com/docs/en/quickstart).
Once authenticated, Automaker will automatically detect and use your CLI credentials. No additional configuration needed!
#### Option 2: Direct API Key
If you prefer not to use the CLI, you can provide an Anthropic API key directly using one of these methods:
##### 2a. Shell Configuration
Add to your `~/.bashrc` or `~/.zshrc`:
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```
Then restart your terminal or run `source ~/.bashrc` (or `source ~/.zshrc`).
##### 2b. .env File
Create a `.env` file in the project root (gitignored):
```bash
ANTHROPIC_API_KEY=sk-ant-...
PORT=3008
DATA_DIR=./data
```
##### 2c. In-App Storage
The application can store your API key securely in the settings UI. The key is persisted in the `DATA_DIR` directory.
## Features
### Core Workflow
@@ -508,20 +506,24 @@ Automaker provides several specialized views accessible via the sidebar or keybo
| **Agent** | `A` | Interactive chat sessions with AI agents for exploratory work and questions |
| **Spec** | `D` | Project specification editor with AI-powered generation and feature suggestions |
| **Context** | `C` | Manage context files (markdown, images) that AI agents automatically reference |
| **Profiles** | `M` | Create and manage AI agent profiles with custom prompts and configurations |
| **Settings** | `S` | Configure themes, shortcuts, defaults, authentication, and more |
| **Terminal** | `T` | Integrated terminal with tabs, splits, and persistent sessions |
-| **GitHub Issues** | - | Import and validate GitHub issues, convert to tasks |
+| **Graph** | `H` | Visualize feature dependencies with interactive graph visualization |
| **Ideation** | `I` | Brainstorm and generate ideas with AI assistance |
| **Memory** | `Y` | View and manage agent memory and conversation history |
| **GitHub Issues** | `G` | Import and validate GitHub issues, convert to tasks |
| **GitHub PRs** | `R` | View and manage GitHub pull requests |
| **Running Agents** | - | View all active agents across projects with status and progress |
### Keyboard Navigation
All shortcuts are customizable in Settings. Default shortcuts:
-- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `M` (Profiles), `T` (Terminal)
+- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `T` (Terminal), `H` (Graph), `I` (Ideation), `Y` (Memory), `G` (GitHub Issues), `R` (GitHub PRs)
- **UI:** `` ` `` (Toggle sidebar)
-- **Actions:** `N` (New item in current view), `G` (Start next features), `O` (Open project), `P` (Project picker)
+- **Actions:** `N` (New item in current view), `O` (Open project), `P` (Project picker)
- **Projects:** `Q`/`E` (Cycle previous/next project)
- **Terminal:** `Alt+D` (Split right), `Alt+S` (Split down), `Alt+W` (Close), `Alt+T` (New tab)
## Architecture
@@ -586,10 +588,16 @@ Stored in `{projectPath}/.automaker/`:
│ ├── agent-output.md # AI agent output log
│ └── images/ # Attached images
├── context/ # Context files for AI agents
├── worktrees/ # Git worktree metadata
├── validations/ # GitHub issue validation results
├── ideation/ # Brainstorming and analysis data
│ └── analysis.json # Project structure analysis
├── board/ # Board-related data
├── images/ # Project-level images
├── settings.json # Project-specific settings
-├── spec.md # Project specification
+├── app_spec.txt # Project specification (XML format)
-├── analysis.json # Project structure analysis
+├── active-branches.json # Active git branches tracking
-└── feature-suggestions.json # AI-generated suggestions
+└── execution-state.json # Auto-mode execution state
```
#### Global Data
@@ -627,7 +635,6 @@ data/
- [Contributing Guide](./CONTRIBUTING.md) - How to contribute to Automaker
- [Project Documentation](./docs/) - Architecture guides, patterns, and developer docs
- [Docker Isolation Guide](./docs/docker-isolation.md) - Security-focused Docker deployment
- [Shared Packages Guide](./docs/llm-shared-packages.md) - Using monorepo packages
### Community
---
@@ -44,6 +44,11 @@ CORS_ORIGIN=http://localhost:3007
# OPTIONAL - Server
# ============================================
# Host to bind the server to (default: 0.0.0.0)
# Use 0.0.0.0 to listen on all interfaces (recommended for Docker/remote access)
# Use 127.0.0.1 or localhost to restrict to local connections only
HOST=0.0.0.0
# Port to run the server on
PORT=3008
@@ -63,6 +68,14 @@ TERMINAL_PASSWORD=
ENABLE_REQUEST_LOGGING=false
# ============================================
# OPTIONAL - UI Behavior
# ============================================
# Skip the sandbox warning dialog on startup (default: false)
# Set to "true" to disable the warning entirely (useful for dev/CI environments)
AUTOMAKER_SKIP_SANDBOX_WARNING=false
# ============================================
# OPTIONAL - Debugging
# ============================================
---
@@ -1,6 +1,6 @@
{
  "name": "@automaker/server",
-  "version": "0.11.0",
+  "version": "0.12.0",
  "description": "Backend server for Automaker - provides API for both web and Electron modes",
  "author": "AutoMaker Team",
  "license": "SEE LICENSE IN LICENSE",
---
@@ -17,9 +17,19 @@ import dotenv from 'dotenv';
import { createEventEmitter, type EventEmitter } from './lib/events.js';
import { initAllowedPaths } from '@automaker/platform';
-import { createLogger } from '@automaker/utils';
+import { createLogger, setLogLevel, LogLevel } from '@automaker/utils';
const logger = createLogger('Server');
/**
* Map server log level string to LogLevel enum
*/
const LOG_LEVEL_MAP: Record<string, LogLevel> = {
error: LogLevel.ERROR,
warn: LogLevel.WARN,
info: LogLevel.INFO,
debug: LogLevel.DEBUG,
};
import { authMiddleware, validateWsConnectionToken, checkRawAuthentication } from './lib/auth.js';
import { requireJsonContentType } from './middleware/require-json-content-type.js';
import { createAuthRoutes } from './routes/auth/index.js';
@@ -68,13 +78,39 @@ import { pipelineService } from './services/pipeline-service.js';
import { createIdeationRoutes } from './routes/ideation/index.js';
import { IdeationService } from './services/ideation-service.js';
import { getDevServerService } from './services/dev-server-service.js';
import { eventHookService } from './services/event-hook-service.js';
import { createNotificationsRoutes } from './routes/notifications/index.js';
import { getNotificationService } from './services/notification-service.js';
import { createEventHistoryRoutes } from './routes/event-history/index.js';
import { getEventHistoryService } from './services/event-history-service.js';
import { createCodeReviewRoutes } from './routes/code-review/index.js';
import { CodeReviewService } from './services/code-review-service.js';
// Load environment variables
dotenv.config();
const PORT = parseInt(process.env.PORT || '3008', 10);
const HOST = process.env.HOST || '0.0.0.0';
const HOSTNAME = process.env.HOSTNAME || 'localhost';
const DATA_DIR = process.env.DATA_DIR || './data';
-const ENABLE_REQUEST_LOGGING = process.env.ENABLE_REQUEST_LOGGING !== 'false'; // Default to true
+const ENABLE_REQUEST_LOGGING_DEFAULT = process.env.ENABLE_REQUEST_LOGGING !== 'false'; // Default to true
// Runtime-configurable request logging flag (can be changed via settings)
let requestLoggingEnabled = ENABLE_REQUEST_LOGGING_DEFAULT;
/**
* Enable or disable HTTP request logging at runtime
*/
export function setRequestLoggingEnabled(enabled: boolean): void {
requestLoggingEnabled = enabled;
}
/**
* Get current request logging state
*/
export function isRequestLoggingEnabled(): boolean {
return requestLoggingEnabled;
}
// Check for required environment variables
const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
@@ -103,22 +139,21 @@ initAllowedPaths();
const app = express();
// Middleware
-// Custom colored logger showing only endpoint and status code (configurable via ENABLE_REQUEST_LOGGING env var)
-if (ENABLE_REQUEST_LOGGING) {
-  morgan.token('status-colored', (_req, res) => {
-    const status = res.statusCode;
-    if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors
-    if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors
-    if (status >= 300) return `\x1b[36m${status}\x1b[0m`; // Cyan for redirects
-    return `\x1b[32m${status}\x1b[0m`; // Green for success
-  });
-  app.use(
-    morgan(':method :url :status-colored', {
-      skip: (req) => req.url === '/api/health', // Skip health check logs
-    })
-  );
-}
+// Custom colored logger showing only endpoint and status code (dynamically configurable)
+morgan.token('status-colored', (_req, res) => {
+  const status = res.statusCode;
+  if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors
+  if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors
+  if (status >= 300) return `\x1b[36m${status}\x1b[0m`; // Cyan for redirects
+  return `\x1b[32m${status}\x1b[0m`; // Green for success
+});
+app.use(
+  morgan(':method :url :status-colored', {
+    // Skip when request logging is disabled or for health check endpoints
+    skip: (req) => !requestLoggingEnabled || req.url === '/api/health',
+  })
+);
// CORS configuration
// When using credentials (cookies), origin cannot be '*'
// We dynamically allow the requesting origin for local development
@@ -176,13 +211,39 @@ const codexModelCacheService = new CodexModelCacheService(DATA_DIR, codexAppServ
const codexUsageService = new CodexUsageService(codexAppServerService);
const mcpTestService = new MCPTestService(settingsService);
const ideationService = new IdeationService(events, settingsService, featureLoader);
const codeReviewService = new CodeReviewService(events, settingsService);
// Initialize DevServerService with event emitter for real-time log streaming
const devServerService = getDevServerService();
devServerService.setEventEmitter(events);
// Initialize Notification Service with event emitter for real-time updates
const notificationService = getNotificationService();
notificationService.setEventEmitter(events);
// Initialize Event History Service
const eventHistoryService = getEventHistoryService();
// Initialize Event Hook Service for custom event triggers (with history storage)
eventHookService.initialize(events, settingsService, eventHistoryService);
// Initialize services
(async () => {
// Apply logging settings from saved settings
try {
const settings = await settingsService.getGlobalSettings();
if (settings.serverLogLevel && LOG_LEVEL_MAP[settings.serverLogLevel] !== undefined) {
setLogLevel(LOG_LEVEL_MAP[settings.serverLogLevel]);
logger.info(`Server log level set to: ${settings.serverLogLevel}`);
}
// Apply request logging setting (default true if not set)
const enableRequestLog = settings.enableRequestLogging ?? true;
setRequestLoggingEnabled(enableRequestLog);
logger.info(`HTTP request logging: ${enableRequestLog ? 'enabled' : 'disabled'}`);
} catch (err) {
logger.warn('Failed to load logging settings, using defaults');
}
await agentService.initialize();
logger.info('Agent service initialized');
@@ -219,7 +280,7 @@ app.get('/api/health/detailed', createDetailedHandler());
app.use('/api/fs', createFsRoutes(events));
app.use('/api/agent', createAgentRoutes(agentService, events));
app.use('/api/sessions', createSessionsRoutes(agentService));
-app.use('/api/features', createFeaturesRoutes(featureLoader));
+app.use('/api/features', createFeaturesRoutes(featureLoader, settingsService, events));
app.use('/api/auto-mode', createAutoModeRoutes(autoModeService));
app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService));
app.use('/api/worktree', createWorktreeRoutes(events, settingsService));
@@ -240,6 +301,9 @@ app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService));
app.use('/api/mcp', createMCPRoutes(mcpTestService));
app.use('/api/pipeline', createPipelineRoutes(pipelineService));
app.use('/api/ideation', createIdeationRoutes(events, ideationService, featureLoader));
app.use('/api/notifications', createNotificationsRoutes(notificationService));
app.use('/api/event-history', createEventHistoryRoutes(eventHistoryService, settingsService));
app.use('/api/code-review', createCodeReviewRoutes(codeReviewService));
// Create HTTP server
const server = createServer(app);
@@ -551,8 +615,8 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
});
// Start server with error handling for port conflicts
-const startServer = (port: number) => {
-  server.listen(port, () => {
+const startServer = (port: number, host: string) => {
+  server.listen(port, host, () => {
const terminalStatus = isTerminalEnabled()
  ? isTerminalPasswordRequired()
    ? 'enabled (password protected)'
@@ -563,10 +627,11 @@ const startServer = (port: number) => {
╔═══════════════════════════════════════════════════════╗
║ Automaker Backend Server ║
╠═══════════════════════════════════════════════════════╣
-║ HTTP API:  http://localhost:${portStr}
-║ WebSocket: ws://localhost:${portStr}/api/events
-║ Terminal:  ws://localhost:${portStr}/api/terminal/ws
-║ Health:    http://localhost:${portStr}/api/health
+║ Listening: ${host}:${port}${' '.repeat(Math.max(0, 34 - host.length - port.toString().length))}║
+║ HTTP API:  http://${HOSTNAME}:${portStr}
+║ WebSocket: ws://${HOSTNAME}:${portStr}/api/events
+║ Terminal:  ws://${HOSTNAME}:${portStr}/api/terminal/ws
+║ Health:    http://${HOSTNAME}:${portStr}/api/health
║ Terminal: ${terminalStatus.padEnd(37)} ║
╚═══════════════════════════════════════════════════════╝
`);
@@ -600,7 +665,7 @@ const startServer = (port: number) => {
});
};
-startServer(PORT);
+startServer(PORT, HOST);
// Global error handlers to prevent crashes from uncaught errors
process.on('unhandledRejection', (reason: unknown, _promise: Promise<unknown>) => {
---
@@ -11,8 +11,12 @@ export { specOutputSchema } from '@automaker/types';
/**
 * Escape special XML characters
* Handles undefined/null values by converting them to empty strings
 */
-function escapeXml(str: string): string {
+export function escapeXml(str: string | undefined | null): string {
if (str == null) {
return '';
}
return str
  .replace(/&/g, '&amp;')
  .replace(/</g, '&lt;')
---
@@ -142,6 +142,8 @@ if (process.env.AUTOMAKER_HIDE_API_KEY !== 'true') {
${API_KEY}
║ ║
║ In Electron mode, authentication is handled automatically. ║
║ ║
║ 💡 Tip: Set AUTOMAKER_API_KEY env var to use a fixed key for dev ║
╚═══════════════════════════════════════════════════════════════════════╝
`);
} else {
---
@@ -40,6 +40,7 @@ export interface UnifiedCliDetection {
claude?: CliDetectionResult;
codex?: CliDetectionResult;
cursor?: CliDetectionResult;
coderabbit?: CliDetectionResult;
}
/**
@@ -76,6 +77,16 @@ const CLI_CONFIGS = {
win32: 'iwr https://cursor.sh/install.ps1 -UseBasicParsing | iex',
},
},
coderabbit: {
name: 'CodeRabbit CLI',
commands: ['coderabbit', 'cr'],
versionArgs: ['--version'],
installCommands: {
darwin: 'npm install -g coderabbit',
linux: 'npm install -g coderabbit',
win32: 'npm install -g coderabbit',
},
},
} as const;
/**
@@ -230,6 +241,8 @@ export async function checkCliAuth(
return await checkCodexAuth(command);
case 'cursor':
return await checkCursorAuth(command);
case 'coderabbit':
return await checkCodeRabbitAuth(command);
default:
return 'none';
}
@@ -355,6 +368,64 @@ async function checkCursorAuth(command: string): Promise<'cli' | 'api_key' | 'no
return 'none';
}
/**
* Check CodeRabbit CLI authentication
*
* Expected output when authenticated:
* ```
* CodeRabbit CLI Status
* ✅ Authentication: Logged in
* User Information:
* 👤 Name: ...
* ```
*/
async function checkCodeRabbitAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
// Check for environment variable
if (process.env.CODERABBIT_API_KEY) {
return 'api_key';
}
// Try running auth status command
return new Promise((resolve) => {
const child = spawn(command, ['auth', 'status'], {
stdio: 'pipe',
timeout: 10000, // Increased timeout for slower systems
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
const output = stdout + stderr;
// Check for positive authentication indicators in output
const isAuthenticated =
code === 0 &&
(output.includes('Logged in') || output.includes('logged in')) &&
!output.toLowerCase().includes('not logged in') &&
!output.toLowerCase().includes('not authenticated');
if (isAuthenticated) {
resolve('cli');
} else {
resolve('none');
}
});
child.on('error', () => {
resolve('none');
});
});
}
/**
 * Get installation instructions for a provider
 */
---
@@ -11,6 +11,14 @@ import {
mergeAgentPrompts,
mergeBacklogPlanPrompts,
mergeEnhancementPrompts,
mergeCommitMessagePrompts,
mergeTitleGenerationPrompts,
mergeIssueValidationPrompts,
mergeIdeationPrompts,
mergeAppSpecPrompts,
mergeContextDescriptionPrompts,
mergeSuggestionsPrompts,
mergeTaskExecutionPrompts,
} from '@automaker/prompts';
const logger = createLogger('SettingsHelper');
@@ -218,6 +226,14 @@ export async function getPromptCustomization(
agent: ReturnType<typeof mergeAgentPrompts>;
backlogPlan: ReturnType<typeof mergeBacklogPlanPrompts>;
enhancement: ReturnType<typeof mergeEnhancementPrompts>;
commitMessage: ReturnType<typeof mergeCommitMessagePrompts>;
titleGeneration: ReturnType<typeof mergeTitleGenerationPrompts>;
issueValidation: ReturnType<typeof mergeIssueValidationPrompts>;
ideation: ReturnType<typeof mergeIdeationPrompts>;
appSpec: ReturnType<typeof mergeAppSpecPrompts>;
contextDescription: ReturnType<typeof mergeContextDescriptionPrompts>;
suggestions: ReturnType<typeof mergeSuggestionsPrompts>;
taskExecution: ReturnType<typeof mergeTaskExecutionPrompts>;
}> {
let customization: PromptCustomization = {};
@@ -239,6 +255,14 @@ export async function getPromptCustomization(
agent: mergeAgentPrompts(customization.agent),
backlogPlan: mergeBacklogPlanPrompts(customization.backlogPlan),
enhancement: mergeeEnhancementPrompts === undefined ? mergeEnhancementPrompts(customization.enhancement) : mergeEnhancementPrompts(customization.enhancement),
commitMessage: mergeCommitMessagePrompts(customization.commitMessage),
titleGeneration: mergeTitleGenerationPrompts(customization.titleGeneration),
issueValidation: mergeIssueValidationPrompts(customization.issueValidation),
ideation: mergeIdeationPrompts(customization.ideation),
appSpec: mergeAppSpecPrompts(customization.appSpec),
contextDescription: mergeContextDescriptionPrompts(customization.contextDescription),
suggestions: mergeSuggestionsPrompts(customization.suggestions),
taskExecution: mergeTaskExecutionPrompts(customization.taskExecution),
};
}
---
@@ -5,18 +5,14 @@
import * as secureFs from './secure-fs.js';
import * as path from 'path';
import type { PRState, WorktreePRInfo } from '@automaker/types';
// Re-export types for backwards compatibility
export type { PRState, WorktreePRInfo };
/** Maximum length for sanitized branch names in filesystem paths */
const MAX_SANITIZED_BRANCH_PATH_LENGTH = 200;
export interface WorktreePRInfo {
number: number;
url: string;
title: string;
state: string;
createdAt: string;
}
export interface WorktreeMetadata {
  branch: string;
  createdAt: string;
---
@@ -0,0 +1,611 @@
/**
* XML Extraction Utilities
*
* Robust XML parsing utilities for extracting and updating sections
* from app_spec.txt XML content. Uses regex-based parsing which is
* sufficient for our controlled XML structure.
*
* Note: If more complex XML parsing is needed in the future, consider
* using a library like 'fast-xml-parser' or 'xml2js'.
*/
import { createLogger } from '@automaker/utils';
import type { SpecOutput } from '@automaker/types';
const logger = createLogger('XmlExtractor');
/**
* Represents an implemented feature extracted from XML
*/
export interface ImplementedFeature {
name: string;
description: string;
file_locations?: string[];
}
/**
* Logger interface for optional custom logging
*/
export interface XmlExtractorLogger {
debug: (message: string, ...args: unknown[]) => void;
warn?: (message: string, ...args: unknown[]) => void;
}
/**
* Options for XML extraction operations
*/
export interface ExtractXmlOptions {
/** Custom logger (defaults to internal logger) */
logger?: XmlExtractorLogger;
}
/**
* Escape special XML characters
* Handles undefined/null values by converting them to empty strings
*/
export function escapeXml(str: string | undefined | null): string {
if (str == null) {
return '';
}
return str
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&apos;');
}
/**
* Unescape XML entities back to regular characters
*/
export function unescapeXml(str: string): string {
return str
.replace(/&apos;/g, "'")
.replace(/&quot;/g, '"')
.replace(/&gt;/g, '>')
.replace(/&lt;/g, '<')
.replace(/&amp;/g, '&');
}
/**
* Extract the content of a specific XML section
*
* @param xmlContent - The full XML content
* @param tagName - The tag name to extract (e.g., 'implemented_features')
* @param options - Optional extraction options
* @returns The content between the tags, or null if not found
*/
export function extractXmlSection(
xmlContent: string,
tagName: string,
options: ExtractXmlOptions = {}
): string | null {
const log = options.logger || logger;
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)<\\/${tagName}>`, 'i');
const match = xmlContent.match(regex);
if (match) {
log.debug(`Extracted <${tagName}> section`);
return match[1];
}
log.debug(`Section <${tagName}> not found`);
return null;
}
/**
* Extract all values from repeated XML elements
*
* @param xmlContent - The XML content to search
* @param tagName - The tag name to extract values from
* @param options - Optional extraction options
* @returns Array of extracted values (unescaped)
*/
export function extractXmlElements(
xmlContent: string,
tagName: string,
options: ExtractXmlOptions = {}
): string[] {
const log = options.logger || logger;
const values: string[] = [];
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)<\\/${tagName}>`, 'g');
const matches = xmlContent.matchAll(regex);
for (const match of matches) {
values.push(unescapeXml(match[1].trim()));
}
log.debug(`Extracted ${values.length} <${tagName}> elements`);
return values;
}
/**
* Extract implemented features from app_spec.txt XML content
*
* @param specContent - The full XML content of app_spec.txt
* @param options - Optional extraction options
* @returns Array of implemented features with name, description, and optional file_locations
*/
export function extractImplementedFeatures(
specContent: string,
options: ExtractXmlOptions = {}
): ImplementedFeature[] {
const log = options.logger || logger;
const features: ImplementedFeature[] = [];
// Match <implemented_features>...</implemented_features> section
const implementedSection = extractXmlSection(specContent, 'implemented_features', options);
if (!implementedSection) {
log.debug('No implemented_features section found');
return features;
}
// Extract individual feature blocks
const featureRegex = /<feature>([\s\S]*?)<\/feature>/g;
const featureMatches = implementedSection.matchAll(featureRegex);
for (const featureMatch of featureMatches) {
const featureContent = featureMatch[1];
// Extract name
const nameMatch = featureContent.match(/<name>([\s\S]*?)<\/name>/);
const name = nameMatch ? unescapeXml(nameMatch[1].trim()) : '';
// Extract description
const descMatch = featureContent.match(/<description>([\s\S]*?)<\/description>/);
const description = descMatch ? unescapeXml(descMatch[1].trim()) : '';
// Extract file_locations if present
const locationsSection = extractXmlSection(featureContent, 'file_locations', options);
const file_locations = locationsSection
? extractXmlElements(locationsSection, 'location', options)
: undefined;
if (name) {
features.push({
name,
description,
...(file_locations && file_locations.length > 0 ? { file_locations } : {}),
});
}
}
log.debug(`Extracted ${features.length} implemented features`);
return features;
}
/**
* Extract only the feature names from implemented_features section
*
* @param specContent - The full XML content of app_spec.txt
* @param options - Optional extraction options
* @returns Array of feature names
*/
export function extractImplementedFeatureNames(
specContent: string,
options: ExtractXmlOptions = {}
): string[] {
const features = extractImplementedFeatures(specContent, options);
return features.map((f) => f.name);
}
/**
* Generate XML for a single implemented feature
*
* @param feature - The feature to convert to XML
* @param indent - The base indentation level (default: 2 spaces)
* @returns XML string for the feature
*/
export function featureToXml(feature: ImplementedFeature, indent: string = ' '): string {
const i2 = indent.repeat(2);
const i3 = indent.repeat(3);
const i4 = indent.repeat(4);
let xml = `${i2}<feature>
${i3}<name>${escapeXml(feature.name)}</name>
${i3}<description>${escapeXml(feature.description)}</description>`;
if (feature.file_locations && feature.file_locations.length > 0) {
xml += `
${i3}<file_locations>
${feature.file_locations.map((loc) => `${i4}<location>${escapeXml(loc)}</location>`).join('\n')}
${i3}</file_locations>`;
}
xml += `
${i2}</feature>`;
return xml;
}
/**
* Generate XML for an array of implemented features
*
* @param features - Array of features to convert to XML
* @param indent - The base indentation level (default: 2 spaces)
* @returns XML string for the implemented_features section content
*/
export function featuresToXml(features: ImplementedFeature[], indent: string = ' '): string {
return features.map((f) => featureToXml(f, indent)).join('\n');
}
/**
* Update the implemented_features section in XML content
*
* @param specContent - The full XML content
* @param newFeatures - The new features to set
* @param options - Optional extraction options
* @returns Updated XML content with the new implemented_features section
*/
export function updateImplementedFeaturesSection(
specContent: string,
newFeatures: ImplementedFeature[],
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
const indent = ' ';
// Generate new section content
const newSectionContent = featuresToXml(newFeatures, indent);
// Build the new section
const newSection = `<implemented_features>
${newSectionContent}
${indent}</implemented_features>`;
// Check if section exists
const sectionRegex = /<implemented_features>[\s\S]*?<\/implemented_features>/;
if (sectionRegex.test(specContent)) {
log.debug('Replacing existing implemented_features section');
return specContent.replace(sectionRegex, newSection);
}
// If section doesn't exist, try to insert after core_capabilities
const coreCapabilitiesEnd = '</core_capabilities>';
const insertIndex = specContent.indexOf(coreCapabilitiesEnd);
if (insertIndex !== -1) {
const insertPosition = insertIndex + coreCapabilitiesEnd.length;
log.debug('Inserting implemented_features after core_capabilities');
return (
specContent.slice(0, insertPosition) +
'\n\n' +
indent +
newSection +
specContent.slice(insertPosition)
);
}
// As a fallback, insert before </project_specification>
const projectSpecEnd = '</project_specification>';
const fallbackIndex = specContent.indexOf(projectSpecEnd);
if (fallbackIndex !== -1) {
log.debug('Inserting implemented_features before </project_specification>');
return (
specContent.slice(0, fallbackIndex) +
indent +
newSection +
'\n' +
specContent.slice(fallbackIndex)
);
}
if (log.warn) {
log.warn('Could not find appropriate insertion point for implemented_features');
} else {
log.debug('Could not find appropriate insertion point for implemented_features');
}
return specContent;
}
/**
* Add a new feature to the implemented_features section
*
* @param specContent - The full XML content
* @param newFeature - The feature to add
* @param options - Optional extraction options
* @returns Updated XML content with the new feature added
*/
export function addImplementedFeature(
specContent: string,
newFeature: ImplementedFeature,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Check for duplicates by name
const isDuplicate = existingFeatures.some(
(f) => f.name.toLowerCase() === newFeature.name.toLowerCase()
);
if (isDuplicate) {
log.debug(`Feature "${newFeature.name}" already exists, skipping`);
return specContent;
}
// Add the new feature
const updatedFeatures = [...existingFeatures, newFeature];
log.debug(`Adding feature "${newFeature.name}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Remove a feature from the implemented_features section by name
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to remove
* @param options - Optional extraction options
* @returns Updated XML content with the feature removed
*/
export function removeImplementedFeature(
specContent: string,
featureName: string,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Filter out the feature to remove
const updatedFeatures = existingFeatures.filter(
(f) => f.name.toLowerCase() !== featureName.toLowerCase()
);
if (updatedFeatures.length === existingFeatures.length) {
log.debug(`Feature "${featureName}" not found, no changes made`);
return specContent;
}
log.debug(`Removing feature "${featureName}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Update an existing feature in the implemented_features section
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to update
* @param updates - Partial updates to apply to the feature
* @param options - Optional extraction options
* @returns Updated XML content with the feature modified
*/
export function updateImplementedFeature(
specContent: string,
featureName: string,
updates: Partial<ImplementedFeature>,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Extract existing features
const existingFeatures = extractImplementedFeatures(specContent, options);
// Find and update the feature
let found = false;
const updatedFeatures = existingFeatures.map((f) => {
if (f.name.toLowerCase() === featureName.toLowerCase()) {
found = true;
return {
...f,
...updates,
// Preserve the original name if not explicitly updated
name: updates.name ?? f.name,
};
}
return f;
});
if (!found) {
log.debug(`Feature "${featureName}" not found, no changes made`);
return specContent;
}
log.debug(`Updating feature "${featureName}"`);
return updateImplementedFeaturesSection(specContent, updatedFeatures, options);
}
/**
* Check if a feature exists in the implemented_features section
*
* @param specContent - The full XML content
* @param featureName - The name of the feature to check
* @param options - Optional extraction options
* @returns True if the feature exists
*/
export function hasImplementedFeature(
specContent: string,
featureName: string,
options: ExtractXmlOptions = {}
): boolean {
const features = extractImplementedFeatures(specContent, options);
return features.some((f) => f.name.toLowerCase() === featureName.toLowerCase());
}
/**
* Convert extracted features to SpecOutput.implemented_features format
*
* @param features - Array of extracted features
* @returns Features in SpecOutput format
*/
export function toSpecOutputFeatures(
features: ImplementedFeature[]
): SpecOutput['implemented_features'] {
return features.map((f) => ({
name: f.name,
description: f.description,
...(f.file_locations && f.file_locations.length > 0
? { file_locations: f.file_locations }
: {}),
}));
}
/**
* Convert SpecOutput.implemented_features to ImplementedFeature format
*
* @param specFeatures - Features from SpecOutput
* @returns Features in ImplementedFeature format
*/
export function fromSpecOutputFeatures(
specFeatures: SpecOutput['implemented_features']
): ImplementedFeature[] {
return specFeatures.map((f) => ({
name: f.name,
description: f.description,
...(f.file_locations && f.file_locations.length > 0
? { file_locations: f.file_locations }
: {}),
}));
}
/**
* Represents a roadmap phase extracted from XML
*/
export interface RoadmapPhase {
name: string;
status: string;
description?: string;
}
/**
* Extract the technology stack from app_spec.txt XML content
*
* @param specContent - The full XML content
* @param options - Optional extraction options
* @returns Array of technology names
*/
export function extractTechnologyStack(
specContent: string,
options: ExtractXmlOptions = {}
): string[] {
const log = options.logger || logger;
const techSection = extractXmlSection(specContent, 'technology_stack', options);
if (!techSection) {
log.debug('No technology_stack section found');
return [];
}
const technologies = extractXmlElements(techSection, 'technology', options);
log.debug(`Extracted ${technologies.length} technologies`);
return technologies;
}
/**
* Update the technology_stack section in XML content
*
* @param specContent - The full XML content
* @param technologies - The new technology list
* @param options - Optional extraction options
* @returns Updated XML content
*/
export function updateTechnologyStack(
specContent: string,
technologies: string[],
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
const indent = ' ';
const i2 = indent.repeat(2);
// Generate new section content
const techXml = technologies
.map((t) => `${i2}<technology>${escapeXml(t)}</technology>`)
.join('\n');
const newSection = `<technology_stack>\n${techXml}\n${indent}</technology_stack>`;
// Check if section exists
const sectionRegex = /<technology_stack>[\s\S]*?<\/technology_stack>/;
if (sectionRegex.test(specContent)) {
log.debug('Replacing existing technology_stack section');
return specContent.replace(sectionRegex, newSection);
}
log.debug('No technology_stack section found to update');
return specContent;
}
/**
* Extract roadmap phases from app_spec.txt XML content
*
* @param specContent - The full XML content
* @param options - Optional extraction options
* @returns Array of roadmap phases
*/
export function extractRoadmapPhases(
specContent: string,
options: ExtractXmlOptions = {}
): RoadmapPhase[] {
const log = options.logger || logger;
const phases: RoadmapPhase[] = [];
const roadmapSection = extractXmlSection(specContent, 'implementation_roadmap', options);
if (!roadmapSection) {
log.debug('No implementation_roadmap section found');
return phases;
}
// Extract individual phase blocks
const phaseRegex = /<phase>([\s\S]*?)<\/phase>/g;
const phaseMatches = roadmapSection.matchAll(phaseRegex);
for (const phaseMatch of phaseMatches) {
const phaseContent = phaseMatch[1];
const nameMatch = phaseContent.match(/<name>([\s\S]*?)<\/name>/);
const name = nameMatch ? unescapeXml(nameMatch[1].trim()) : '';
const statusMatch = phaseContent.match(/<status>([\s\S]*?)<\/status>/);
const status = statusMatch ? unescapeXml(statusMatch[1].trim()) : 'pending';
const descMatch = phaseContent.match(/<description>([\s\S]*?)<\/description>/);
const description = descMatch ? unescapeXml(descMatch[1].trim()) : undefined;
if (name) {
phases.push({ name, status, description });
}
}
log.debug(`Extracted ${phases.length} roadmap phases`);
return phases;
}
/**
* Update a roadmap phase status in XML content
*
* @param specContent - The full XML content
* @param phaseName - The name of the phase to update
* @param newStatus - The new status value
* @param options - Optional extraction options
* @returns Updated XML content
*/
export function updateRoadmapPhaseStatus(
specContent: string,
phaseName: string,
newStatus: string,
options: ExtractXmlOptions = {}
): string {
const log = options.logger || logger;
// Find the phase and update its status.
// The stored name is XML-escaped, so escape it first; then escape regex
// metacharacters so names like "Phase 1 (MVP)" are matched literally.
const escapedName = escapeXml(phaseName).replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const phaseRegex = new RegExp(
`(<phase>\\s*<name>\\s*${escapedName}\\s*<\\/name>\\s*<status>)[\\s\\S]*?(<\\/status>)`,
'i'
);
if (phaseRegex.test(specContent)) {
log.debug(`Updating phase "${phaseName}" status to "${newStatus}"`);
return specContent.replace(phaseRegex, `$1${escapeXml(newStatus)}$2`);
}
log.debug(`Phase "${phaseName}" not found`);
return specContent;
}
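The helpers above are pure string transforms, so their behavior is easy to check standalone. A minimal, dependency-free sketch (function bodies condensed from this module; the sample spec fragment is made up for illustration):

```typescript
// Condensed copies of the regex-based helpers defined above.
const escapeXml = (str: string | undefined | null): string =>
  str == null
    ? ''
    : str
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&apos;');

const unescapeXml = (str: string): string =>
  str
    .replace(/&apos;/g, "'")
    .replace(/&quot;/g, '"')
    .replace(/&gt;/g, '>')
    .replace(/&lt;/g, '<')
    .replace(/&amp;/g, '&');

// Grab the inner content of the first <tag>...</tag> pair (case-insensitive).
const extractXmlSection = (xml: string, tag: string): string | null => {
  const m = xml.match(new RegExp(`<${tag}>([\\s\\S]*?)<\\/${tag}>`, 'i'));
  return m ? m[1] : null;
};

// Collect the unescaped values of every repeated <tag> element.
const extractXmlElements = (xml: string, tag: string): string[] =>
  Array.from(xml.matchAll(new RegExp(`<${tag}>([\\s\\S]*?)<\\/${tag}>`, 'g'))).map((m) =>
    unescapeXml(m[1].trim())
  );

// Hypothetical app_spec.txt fragment:
const spec = `<project_specification>
  <technology_stack>
    <technology>TypeScript</technology>
    <technology>Node.js &amp; Express</technology>
  </technology_stack>
</project_specification>`;

const section = extractXmlSection(spec, 'technology_stack');
const techs = section ? extractXmlElements(section, 'technology') : [];
console.log(techs); // [ 'TypeScript', 'Node.js & Express' ]
```

Because the parser is a non-greedy regex rather than a real XML parser, it works only for the flat, controlled structure of app_spec.txt, as the header comment notes.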

View File

@@ -35,6 +35,7 @@ import {
   type SubprocessOptions,
   type WslCliResult,
 } from '@automaker/platform';
+import { calculateReasoningTimeout } from '@automaker/types';
 import { createLogger, isAbortError } from '@automaker/utils';
 import { execSync } from 'child_process';
 import * as fs from 'fs';
@@ -107,6 +108,15 @@ export interface CliDetectionResult {
 // Create logger for CLI operations
 const cliLogger = createLogger('CliProvider');
+/**
+ * Base timeout for CLI operations in milliseconds.
+ * CLI tools have longer startup and processing times compared to direct API calls,
+ * so we use a higher base timeout (120s) than the default provider timeout (30s).
+ * This is multiplied by reasoning effort multipliers when applicable.
+ * @see calculateReasoningTimeout from @automaker/types
+ */
+const CLI_BASE_TIMEOUT_MS = 120000;
 /**
  * Abstract base class for CLI-based providers
  *
@@ -450,6 +460,10 @@ export abstract class CliProvider extends BaseProvider {
       }
     }
+    // Calculate dynamic timeout based on reasoning effort.
+    // This addresses GitHub issue #530 where reasoning models with 'xhigh' effort would timeout.
+    const timeout = calculateReasoningTimeout(options.reasoningEffort, CLI_BASE_TIMEOUT_MS);
     // WSL strategy
     if (this.useWsl && this.wslCliPath) {
       const wslCwd = windowsToWslPath(cwd);
@@ -473,7 +487,7 @@ export abstract class CliProvider extends BaseProvider {
         cwd, // Windows cwd for spawn
         env: filteredEnv,
         abortController: options.abortController,
-        timeout: 120000, // CLI operations may take longer
+        timeout,
       };
     }
@@ -488,7 +502,7 @@ export abstract class CliProvider extends BaseProvider {
       cwd,
       env: filteredEnv,
      abortController: options.abortController,
-      timeout: 120000,
+      timeout,
     };
   }
@@ -501,7 +515,7 @@ export abstract class CliProvider extends BaseProvider {
       cwd,
       env: filteredEnv,
       abortController: options.abortController,
-      timeout: 120000,
+      timeout,
     };
   }
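The change above replaces the hard-coded `timeout: 120000` with a value scaled by reasoning effort. The real `calculateReasoningTimeout` implementation lives in `@automaker/types` and is not shown in this diff; the sketch below only illustrates the scaling idea, with hypothetical effort names and multipliers:

```typescript
// Illustrative only: the effort levels and multipliers here are assumptions,
// not the actual values from @automaker/types.
type ReasoningEffort = 'low' | 'medium' | 'high' | 'xhigh';

// Higher effort means more silent "thinking" time before the CLI emits any
// output, so the no-output timeout must grow with effort.
const EFFORT_MULTIPLIERS: Record<ReasoningEffort, number> = {
  low: 1,
  medium: 1.5,
  high: 2,
  xhigh: 4,
};

function calculateReasoningTimeout(
  effort: ReasoningEffort | undefined,
  baseTimeoutMs: number
): number {
  if (!effort) return baseTimeoutMs; // non-reasoning runs keep the base timeout
  return Math.round(baseTimeoutMs * EFFORT_MULTIPLIERS[effort]);
}

const CLI_BASE_TIMEOUT_MS = 120000;
console.log(calculateReasoningTimeout(undefined, CLI_BASE_TIMEOUT_MS)); // 120000
console.log(calculateReasoningTimeout('xhigh', CLI_BASE_TIMEOUT_MS)); // 480000
```

Whatever the real multipliers are, the key design point is the same: the timeout is a "no output" watchdog, so it must account for reasoning tokens generated before the first visible byte.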

View File

@@ -33,6 +33,8 @@ import {
   CODEX_MODEL_MAP,
   supportsReasoningEffort,
   validateBareModelId,
+  calculateReasoningTimeout,
+  DEFAULT_TIMEOUT_MS,
   type CodexApprovalPolicy,
   type CodexSandboxMode,
   type CodexAuthStatus,
@@ -91,7 +93,14 @@ const CODEX_ITEM_TYPES = {
 const SYSTEM_PROMPT_LABEL = 'System instructions';
 const HISTORY_HEADER = 'Current request:\n';
 const TEXT_ENCODING = 'utf-8';
-const DEFAULT_TIMEOUT_MS = 30000;
+/**
+ * Default timeout for Codex CLI operations in milliseconds.
+ * This is the "no output" timeout - if the CLI doesn't produce any JSONL output
+ * for this duration, the process is killed. For reasoning models with high
+ * reasoning effort, this timeout is dynamically extended via calculateReasoningTimeout().
+ * @see calculateReasoningTimeout from @automaker/types
+ */
+const CODEX_CLI_TIMEOUT_MS = DEFAULT_TIMEOUT_MS;
 const CONTEXT_WINDOW_256K = 256000;
 const MAX_OUTPUT_32K = 32000;
 const MAX_OUTPUT_16K = 16000;
@@ -814,13 +823,19 @@ export class CodexProvider extends BaseProvider {
       envOverrides[OPENAI_API_KEY_ENV] = executionPlan.openAiApiKey;
     }
+    // Calculate dynamic timeout based on reasoning effort.
+    // Higher reasoning effort (e.g., 'xhigh' for "xtra thinking" mode) requires more time
+    // for the model to generate reasoning tokens before producing output.
+    // This fixes GitHub issue #530 where features would get stuck with reasoning models.
+    const timeout = calculateReasoningTimeout(options.reasoningEffort, CODEX_CLI_TIMEOUT_MS);
     const stream = spawnJSONLProcess({
       command: commandPath,
       args,
       cwd: options.cwd,
       env: envOverrides,
       abortController: options.abortController,
-      timeout: DEFAULT_TIMEOUT_MS,
+      timeout,
       stdinData: promptText, // Pass prompt via stdin
     });

View File

@@ -6,8 +6,17 @@ import { createLogger } from '@automaker/utils';
 const logger = createLogger('SpecRegeneration');
+// Types for running generation
+export type GenerationType = 'spec_regeneration' | 'feature_generation' | 'sync';
+interface RunningGeneration {
+  isRunning: boolean;
+  type: GenerationType;
+  startedAt: string;
+}
 // Shared state for tracking generation status - scoped by project path
-const runningProjects = new Map<string, boolean>();
+const runningProjects = new Map<string, RunningGeneration>();
 const abortControllers = new Map<string, AbortController>();
 /**
@@ -17,16 +26,21 @@ export function getSpecRegenerationStatus(projectPath?: string): {
   isRunning: boolean;
   currentAbortController: AbortController | null;
   projectPath?: string;
+  type?: GenerationType;
+  startedAt?: string;
 } {
   if (projectPath) {
+    const generation = runningProjects.get(projectPath);
     return {
-      isRunning: runningProjects.get(projectPath) || false,
+      isRunning: generation?.isRunning || false,
       currentAbortController: abortControllers.get(projectPath) || null,
       projectPath,
+      type: generation?.type,
+      startedAt: generation?.startedAt,
     };
   }
   // Fallback: check if any project is running (for backward compatibility)
-  const isAnyRunning = Array.from(runningProjects.values()).some((running) => running);
+  const isAnyRunning = Array.from(runningProjects.values()).some((g) => g.isRunning);
   return { isRunning: isAnyRunning, currentAbortController: null };
 }
@@ -46,10 +60,15 @@ export function getRunningProjectPath(): string | null {
 export function setRunningState(
   projectPath: string,
   running: boolean,
-  controller: AbortController | null = null
+  controller: AbortController | null = null,
+  type: GenerationType = 'spec_regeneration'
 ): void {
   if (running) {
-    runningProjects.set(projectPath, true);
+    runningProjects.set(projectPath, {
+      isRunning: true,
+      type,
+      startedAt: new Date().toISOString(),
+    });
     if (controller) {
       abortControllers.set(projectPath, controller);
     }
@@ -59,6 +78,33 @@ export function setRunningState(
   }
 }
+/**
+ * Get all running spec/feature generations for the running agents view
+ */
+export function getAllRunningGenerations(): Array<{
+  projectPath: string;
+  type: GenerationType;
+  startedAt: string;
+}> {
+  const results: Array<{
+    projectPath: string;
+    type: GenerationType;
+    startedAt: string;
+  }> = [];
+  for (const [projectPath, generation] of runningProjects.entries()) {
+    if (generation.isRunning) {
+      results.push({
+        projectPath,
+        type: generation.type,
+        startedAt: generation.startedAt,
+      });
+    }
+  }
+  return results;
+}
 /**
  * Helper to log authentication status
  */
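These hunks upgrade the per-project state map from `Map<string, boolean>` to a richer record so the running-agents view can show what kind of generation started and when. A condensed, dependency-free sketch of that registry pattern (names mirror the diff; the teardown branch is an assumption, since the diff elides it):

```typescript
// Minimal registry sketch. GenerationType and RunningGeneration mirror the
// diff; setRunningState's "not running" branch is assumed to clear the entry.
type GenerationType = 'spec_regeneration' | 'feature_generation' | 'sync';

interface RunningGeneration {
  isRunning: boolean;
  type: GenerationType;
  startedAt: string;
}

const runningProjects = new Map<string, RunningGeneration>();

function setRunningState(
  projectPath: string,
  running: boolean,
  type: GenerationType = 'spec_regeneration'
): void {
  if (running) {
    runningProjects.set(projectPath, {
      isRunning: true,
      type,
      startedAt: new Date().toISOString(),
    });
  } else {
    runningProjects.delete(projectPath); // assumed cleanup behavior
  }
}

function getAllRunningGenerations(): Array<{
  projectPath: string;
  type: GenerationType;
  startedAt: string;
}> {
  return Array.from(runningProjects.entries())
    .filter(([, g]) => g.isRunning)
    .map(([projectPath, g]) => ({ projectPath, type: g.type, startedAt: g.startedAt }));
}

setRunningState('/repo/a', true, 'sync');
setRunningState('/repo/b', true);
setRunningState('/repo/b', false);
console.log(getAllRunningGenerations().map((g) => `${g.projectPath}:${g.type}`));
```

Keying the map by project path is what lets multiple projects run generations concurrently without a global lock.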

View File

@@ -14,7 +14,8 @@ import { streamingQuery } from '../../providers/simple-query-service.js';
 import { parseAndCreateFeatures } from './parse-and-create-features.js';
 import { getAppSpecPath } from '@automaker/platform';
 import type { SettingsService } from '../../services/settings-service.js';
-import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
+import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
+import { FeatureLoader } from '../../services/feature-loader.js';
 const logger = createLogger('SpecRegeneration');
@@ -53,38 +54,48 @@ export async function generateFeaturesFromSpec(
     return;
   }
+  // Get customized prompts from settings
+  const prompts = await getPromptCustomization(settingsService, '[FeatureGeneration]');
+  // Load existing features to prevent duplicates
+  const featureLoader = new FeatureLoader();
+  const existingFeatures = await featureLoader.getAll(projectPath);
+  logger.info(`Found ${existingFeatures.length} existing features to exclude from generation`);
+  // Build existing features context for the prompt
+  let existingFeaturesContext = '';
+  if (existingFeatures.length > 0) {
+    const featuresList = existingFeatures
+      .map(
+        (f) =>
+          `- "${f.title}" (ID: ${f.id}): ${f.description?.substring(0, 100) || 'No description'}`
+      )
+      .join('\n');
+    existingFeaturesContext = `
+## EXISTING FEATURES (DO NOT REGENERATE THESE)
+The following ${existingFeatures.length} features already exist in the project. You MUST NOT generate features that duplicate or overlap with these:
+${featuresList}
+CRITICAL INSTRUCTIONS:
+- DO NOT generate any features with the same or similar titles as the existing features listed above
+- DO NOT generate features that cover the same functionality as existing features
+- ONLY generate NEW features that are not yet in the system
+- If a feature from the roadmap already exists, skip it entirely
+- Generate unique feature IDs that do not conflict with existing IDs: ${existingFeatures.map((f) => f.id).join(', ')}
+`;
+  }
   const prompt = `Based on this project specification:
 ${spec}
-Generate a prioritized list of implementable features. For each feature provide:
-1. **id**: A unique lowercase-hyphenated identifier
-2. **category**: Functional category (e.g., "Core", "UI", "API", "Authentication", "Database")
-3. **title**: Short descriptive title
-4. **description**: What this feature does (2-3 sentences)
-5. **priority**: 1 (high), 2 (medium), or 3 (low)
-6. **complexity**: "simple", "moderate", or "complex"
-7. **dependencies**: Array of feature IDs this depends on (can be empty)
-Format as JSON:
-{
-  "features": [
-    {
-      "id": "feature-id",
-      "category": "Feature Category",
-      "title": "Feature Title",
-      "description": "What it does",
-      "priority": 1,
-      "complexity": "moderate",
-      "dependencies": []
-    }
-  ]
-}
-Generate ${featureCount} features that build on each other logically.
-IMPORTANT: Do not ask for clarification. The specification is provided above. Generate the JSON immediately.`;
+${existingFeaturesContext}
+${prompts.appSpec.generateFeaturesFromSpecPrompt}
+Generate ${featureCount} NEW features that build on each other logically. Remember: ONLY generate features that DO NOT already exist.`;
 logger.info('========== PROMPT BEING SENT ==========');
 logger.info(`Prompt length: ${prompt.length} chars`);

View File

@@ -7,12 +7,7 @@
import * as secureFs from '../../lib/secure-fs.js'; import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js'; import type { EventEmitter } from '../../lib/events.js';
import { import { specOutputSchema, specToXml, type SpecOutput } from '../../lib/app-spec-format.js';
specOutputSchema,
specToXml,
getStructuredSpecPromptInstruction,
type SpecOutput,
} from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types'; import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver'; import { resolvePhaseModel } from '@automaker/model-resolver';
@@ -21,7 +16,7 @@ import { streamingQuery } from '../../providers/simple-query-service.js';
import { generateFeaturesFromSpec } from './generate-features-from-spec.js'; import { generateFeaturesFromSpec } from './generate-features-from-spec.js';
import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform'; import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js'; import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js'; import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
const logger = createLogger('SpecRegeneration'); const logger = createLogger('SpecRegeneration');
@@ -43,6 +38,9 @@ export async function generateSpec(
logger.info('analyzeProject:', analyzeProject); logger.info('analyzeProject:', analyzeProject);
logger.info('maxFeatures:', maxFeatures); logger.info('maxFeatures:', maxFeatures);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[SpecRegeneration]');
// Build the prompt based on whether we should analyze the project // Build the prompt based on whether we should analyze the project
let analysisInstructions = ''; let analysisInstructions = '';
let techStackDefaults = ''; let techStackDefaults = '';
@@ -66,9 +64,7 @@ export async function generateSpec(
Use these technologies as the foundation for the specification.`; Use these technologies as the foundation for the specification.`;
} }
const prompt = `You are helping to define a software project specification. const prompt = `${prompts.appSpec.generateSpecSystemPrompt}
IMPORTANT: Never ask for clarification or additional information. Use the information provided and make reasonable assumptions to create the best possible specification. If details are missing, infer them based on common patterns and best practices.
Project Overview: Project Overview:
${projectOverview} ${projectOverview}
@@ -77,7 +73,7 @@ ${techStackDefaults}
${analysisInstructions} ${analysisInstructions}
${getStructuredSpecPromptInstruction()}`; ${prompts.appSpec.structuredSpecInstructions}`;
logger.info('========== PROMPT BEING SENT ==========');
logger.info(`Prompt length: ${prompt.length} chars`);
@@ -205,19 +201,33 @@ Your entire response should be valid JSON starting with { and ending with }. No
xmlContent = responseText.substring(xmlStart, xmlEnd + '</project_specification>'.length);
logger.info(`Extracted XML content: ${xmlContent.length} chars (from position ${xmlStart})`);
} else {
-// No valid XML structure found in the response text
-// This happens when structured output was expected but not received, and the agent
-// output conversational text instead of XML (e.g., "The project directory appears to be empty...")
-// We should NOT save this conversational text as it's not a valid spec
-logger.error('❌ Response does not contain valid <project_specification> XML structure');
-logger.error(
-'This typically happens when structured output failed and the agent produced conversational text instead of XML'
-);
-throw new Error(
-'Failed to generate spec: No valid XML structure found in response. ' +
-'The response contained conversational text but no <project_specification> tags. ' +
-'Please try again.'
-);
+// No XML found, try JSON extraction
+logger.warn('⚠️ No XML tags found, attempting JSON extraction...');
+const extractedJson = extractJson<SpecOutput>(responseText, { logger });
+if (
+extractedJson &&
+typeof extractedJson.project_name === 'string' &&
+typeof extractedJson.overview === 'string' &&
+Array.isArray(extractedJson.technology_stack) &&
+Array.isArray(extractedJson.core_capabilities) &&
+Array.isArray(extractedJson.implemented_features)
+) {
+logger.info('✅ Successfully extracted JSON from response text');
+xmlContent = specToXml(extractedJson);
+logger.info(`✅ Converted extracted JSON to XML: ${xmlContent.length} chars`);
+} else {
+// Neither XML nor valid JSON found
+logger.error('❌ Response does not contain valid XML or JSON structure');
+logger.error(
+'This typically happens when structured output failed and the agent produced conversational text instead of structured output'
+);
+throw new Error(
+'Failed to generate spec: No valid XML or JSON structure found in response. ' +
+'The response contained conversational text but no <project_specification> tags or valid JSON. ' +
+'Please try again.'
+);
+}
}
}
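
The JSON fallback above only converts the extracted object to XML when it matches the expected spec shape. A minimal, self-contained sketch of that shape guard — the `SpecOutput` interface and `asSpecOutput` helper here are illustrative stand-ins, not the project's actual types or exports:

```typescript
// Illustrative shape of a generated spec, mirroring the fields checked above.
interface SpecOutput {
  project_name: string;
  overview: string;
  technology_stack: string[];
  core_capabilities: string[];
  implemented_features: string[];
}

// Hypothetical helper: returns the value only if it has the expected shape.
function asSpecOutput(value: unknown): SpecOutput | null {
  const v = value as Partial<SpecOutput> | null;
  if (
    v &&
    typeof v.project_name === 'string' &&
    typeof v.overview === 'string' &&
    Array.isArray(v.technology_stack) &&
    Array.isArray(v.core_capabilities) &&
    Array.isArray(v.implemented_features)
  ) {
    return v as SpecOutput;
  }
  return null;
}

// Conversational text with an embedded JSON object, as an agent might produce.
const response =
  'Here is the spec: {"project_name":"demo","overview":"x","technology_stack":[],"core_capabilities":[],"implemented_features":[]}';
const match = response.match(/\{[\s\S]*\}/); // first '{' through last '}'
const spec = match ? asSpecOutput(JSON.parse(match[0])) : null;
console.log(spec ? spec.project_name : 'invalid'); // prints "demo"
```

The point of the guard is that conversational filler around the JSON is tolerated, but a structurally wrong object is rejected rather than being written out as a bogus spec.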

View File

@@ -7,6 +7,7 @@ import type { EventEmitter } from '../../lib/events.js';
import { createCreateHandler } from './routes/create.js';
import { createGenerateHandler } from './routes/generate.js';
import { createGenerateFeaturesHandler } from './routes/generate-features.js';
import { createSyncHandler } from './routes/sync.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import type { SettingsService } from '../../services/settings-service.js';
@@ -20,6 +21,7 @@ export function createSpecRegenerationRoutes(
router.post('/create', createCreateHandler(events));
router.post('/generate', createGenerateHandler(events, settingsService));
router.post('/generate-features', createGenerateFeaturesHandler(events, settingsService));
router.post('/sync', createSyncHandler(events, settingsService));
router.post('/stop', createStopHandler());
router.get('/status', createStatusHandler());

View File

@@ -5,9 +5,10 @@
import path from 'path';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
-import { createLogger } from '@automaker/utils';
+import { createLogger, atomicWriteJson, DEFAULT_BACKUP_COUNT } from '@automaker/utils';
import { getFeaturesDir } from '@automaker/platform';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
import { getNotificationService } from '../../services/notification-service.js';
const logger = createLogger('SpecRegeneration');
@@ -73,10 +74,10 @@ export async function parseAndCreateFeatures(
updatedAt: new Date().toISOString(),
};
-await secureFs.writeFile(
-path.join(featureDir, 'feature.json'),
-JSON.stringify(featureData, null, 2)
-);
+// Use atomic write with backup support for crash protection
+await atomicWriteJson(path.join(featureDir, 'feature.json'), featureData, {
+backupCount: DEFAULT_BACKUP_COUNT,
+});
createdFeatures.push({ id: feature.id, title: feature.title });
}
@@ -88,6 +89,15 @@
message: `Spec regeneration complete! Created ${createdFeatures.length} features.`,
projectPath: projectPath,
});
// Create notification for spec generation completion
const notificationService = getNotificationService();
await notificationService.createNotification({
type: 'spec_regeneration_complete',
title: 'Spec Generation Complete',
message: `Created ${createdFeatures.length} features from the project specification.`,
projectPath: projectPath,
});
} catch (error) {
logger.error('❌ parseAndCreateFeatures() failed:');
logger.error('Error:', error);

View File

@@ -50,7 +50,7 @@ export function createGenerateFeaturesHandler(
logAuthStatus('Before starting feature generation');
const abortController = new AbortController();
-setRunningState(projectPath, true, abortController);
+setRunningState(projectPath, true, abortController, 'feature_generation');
logger.info('Starting background feature generation task...');
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures, settingsService)

View File

@@ -0,0 +1,76 @@
/**
* POST /sync endpoint - Sync spec with codebase and features
*/
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
import { createLogger } from '@automaker/utils';
import {
getSpecRegenerationStatus,
setRunningState,
logAuthStatus,
logError,
getErrorMessage,
} from '../common.js';
import { syncSpec } from '../sync-spec.js';
import type { SettingsService } from '../../../services/settings-service.js';
const logger = createLogger('SpecSync');
export function createSyncHandler(events: EventEmitter, settingsService?: SettingsService) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /sync endpoint called ==========');
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
try {
const { projectPath } = req.body as {
projectPath: string;
};
logger.debug('projectPath:', projectPath);
if (!projectPath) {
logger.error('Missing projectPath parameter');
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation/sync already running for project:', projectPath);
res.json({ success: false, error: 'Operation already running for this project' });
return;
}
logAuthStatus('Before starting spec sync');
const abortController = new AbortController();
setRunningState(projectPath, true, abortController, 'sync');
logger.info('Starting background spec sync task...');
syncSpec(projectPath, events, abortController, settingsService)
.then((result) => {
logger.info('Spec sync completed successfully');
logger.info('Result:', JSON.stringify(result, null, 2));
})
.catch((error) => {
logError(error, 'Spec sync failed with error');
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_error',
error: getErrorMessage(error),
projectPath,
});
})
.finally(() => {
logger.info('Spec sync task finished (success or error)');
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (sync running in background)');
res.json({ success: true });
} catch (error) {
logError(error, 'Sync route handler failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,307 @@
/**
* Sync spec with current codebase and feature state
*
* Updates the spec file based on:
* - Completed Automaker features
* - Code analysis for tech stack and implementations
* - Roadmap phase status updates
*/
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { streamingQuery } from '../../providers/simple-query-service.js';
import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
import { FeatureLoader } from '../../services/feature-loader.js';
import {
extractImplementedFeatures,
extractTechnologyStack,
extractRoadmapPhases,
updateImplementedFeaturesSection,
updateTechnologyStack,
updateRoadmapPhaseStatus,
type ImplementedFeature,
type RoadmapPhase,
} from '../../lib/xml-extractor.js';
import { getNotificationService } from '../../services/notification-service.js';
const logger = createLogger('SpecSync');
/**
* Result of a sync operation
*/
export interface SyncResult {
techStackUpdates: {
added: string[];
removed: string[];
};
implementedFeaturesUpdates: {
addedFromFeatures: string[];
removed: string[];
};
roadmapUpdates: Array<{ phaseName: string; newStatus: string }>;
summary: string;
}
/**
* Sync the spec with current codebase and feature state
*/
export async function syncSpec(
projectPath: string,
events: EventEmitter,
abortController: AbortController,
settingsService?: SettingsService
): Promise<SyncResult> {
logger.info('========== syncSpec() started ==========');
logger.info('projectPath:', projectPath);
const result: SyncResult = {
techStackUpdates: { added: [], removed: [] },
implementedFeaturesUpdates: { addedFromFeatures: [], removed: [] },
roadmapUpdates: [],
summary: '',
};
// Read existing spec
const specPath = getAppSpecPath(projectPath);
let specContent: string;
try {
specContent = (await secureFs.readFile(specPath, 'utf-8')) as string;
logger.info(`Spec loaded successfully (${specContent.length} chars)`);
} catch (readError) {
logger.error('Failed to read spec file:', readError);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_error',
error: 'No project spec found. Create or regenerate spec first.',
projectPath,
});
throw new Error('No project spec found');
}
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: '[Phase: sync] Starting spec sync...\n',
projectPath,
});
// Extract current state from spec
const currentImplementedFeatures = extractImplementedFeatures(specContent);
const currentTechStack = extractTechnologyStack(specContent);
const currentRoadmapPhases = extractRoadmapPhases(specContent);
logger.info(`Current spec has ${currentImplementedFeatures.length} implemented features`);
logger.info(`Current spec has ${currentTechStack.length} technologies`);
logger.info(`Current spec has ${currentRoadmapPhases.length} roadmap phases`);
// Load completed Automaker features
const featureLoader = new FeatureLoader();
const allFeatures = await featureLoader.getAll(projectPath);
const completedFeatures = allFeatures.filter(
(f) => f.status === 'completed' || f.status === 'verified'
);
logger.info(`Found ${completedFeatures.length} completed/verified features in Automaker`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: `Found ${completedFeatures.length} completed features to sync...\n`,
projectPath,
});
// Build new implemented features list from completed Automaker features
const newImplementedFeatures: ImplementedFeature[] = [];
const existingNames = new Set(currentImplementedFeatures.map((f) => f.name.toLowerCase()));
for (const feature of completedFeatures) {
const name = feature.title || `Feature: ${feature.id}`;
if (!existingNames.has(name.toLowerCase())) {
newImplementedFeatures.push({
name,
description: feature.description || '',
});
result.implementedFeaturesUpdates.addedFromFeatures.push(name);
}
}
// Merge: keep existing + add new from completed features
const mergedFeatures = [...currentImplementedFeatures, ...newImplementedFeatures];
// Update spec with merged features
if (result.implementedFeaturesUpdates.addedFromFeatures.length > 0) {
specContent = updateImplementedFeaturesSection(specContent, mergedFeatures);
logger.info(
`Added ${result.implementedFeaturesUpdates.addedFromFeatures.length} features to spec`
);
}
// Analyze codebase for tech stack updates using AI
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: 'Analyzing codebase for technology updates...\n',
projectPath,
});
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[SpecSync]'
);
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.specGenerationModel || DEFAULT_PHASE_MODELS.specGenerationModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
// Use AI to analyze tech stack
const techAnalysisPrompt = `Analyze this project and return ONLY a JSON object with the current technology stack.
Current known technologies: ${currentTechStack.join(', ')}
Look at package.json, config files, and source code to identify:
- Frameworks (React, Vue, Express, etc.)
- Languages (TypeScript, JavaScript, Python, etc.)
- Build tools (Vite, Webpack, etc.)
- Databases (PostgreSQL, MongoDB, etc.)
- Key libraries and tools
Return ONLY this JSON format, no other text:
{
"technologies": ["Technology 1", "Technology 2", ...]
}`;
try {
const techResult = await streamingQuery({
prompt: techAnalysisPrompt,
model,
cwd: projectPath,
maxTurns: 10,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
thinkingLevel,
readOnly: true,
settingSources: autoLoadClaudeMd ? ['user', 'project', 'local'] : undefined,
onText: (text) => {
logger.debug(`Tech analysis text: ${text.substring(0, 100)}`);
},
});
// Parse tech stack from response
const jsonMatch = techResult.text.match(/\{[\s\S]*"technologies"[\s\S]*\}/);
if (jsonMatch) {
const parsed = JSON.parse(jsonMatch[0]);
if (Array.isArray(parsed.technologies)) {
const newTechStack = parsed.technologies as string[];
// Calculate differences
const currentSet = new Set(currentTechStack.map((t) => t.toLowerCase()));
const newSet = new Set(newTechStack.map((t) => t.toLowerCase()));
for (const tech of newTechStack) {
if (!currentSet.has(tech.toLowerCase())) {
result.techStackUpdates.added.push(tech);
}
}
for (const tech of currentTechStack) {
if (!newSet.has(tech.toLowerCase())) {
result.techStackUpdates.removed.push(tech);
}
}
// Update spec with new tech stack if there are changes
if (
result.techStackUpdates.added.length > 0 ||
result.techStackUpdates.removed.length > 0
) {
specContent = updateTechnologyStack(specContent, newTechStack);
logger.info(
`Updated tech stack: +${result.techStackUpdates.added.length}, -${result.techStackUpdates.removed.length}`
);
}
}
}
} catch (error) {
logger.warn('Failed to analyze tech stack:', error);
// Continue with other sync operations
}
// Update roadmap phase statuses based on completed features
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: 'Checking roadmap phase statuses...\n',
projectPath,
});
// For each phase, check if all its features are completed
// This is a heuristic - we check if the phase name appears in any feature titles/descriptions
for (const phase of currentRoadmapPhases) {
if (phase.status === 'completed') continue; // Already completed
// Check if this phase should be marked as completed
// A phase is considered complete if we have completed features that mention it
const phaseNameLower = phase.name.toLowerCase();
const relatedCompletedFeatures = completedFeatures.filter(
(f) =>
f.title?.toLowerCase().includes(phaseNameLower) ||
f.description?.toLowerCase().includes(phaseNameLower) ||
f.category?.toLowerCase().includes(phaseNameLower)
);
// If we have related completed features and the phase is still pending/in_progress,
// update it to in_progress or completed based on feature count
if (relatedCompletedFeatures.length > 0 && phase.status !== 'completed') {
const newStatus = 'in_progress';
specContent = updateRoadmapPhaseStatus(specContent, phase.name, newStatus);
result.roadmapUpdates.push({ phaseName: phase.name, newStatus });
logger.info(`Updated phase "${phase.name}" to ${newStatus}`);
}
}
// Save updated spec
await secureFs.writeFile(specPath, specContent, 'utf-8');
logger.info('Spec saved successfully');
// Build summary
const summaryParts: string[] = [];
if (result.implementedFeaturesUpdates.addedFromFeatures.length > 0) {
summaryParts.push(
`Added ${result.implementedFeaturesUpdates.addedFromFeatures.length} implemented features`
);
}
if (result.techStackUpdates.added.length > 0) {
summaryParts.push(`Added ${result.techStackUpdates.added.length} technologies`);
}
if (result.techStackUpdates.removed.length > 0) {
summaryParts.push(`Removed ${result.techStackUpdates.removed.length} technologies`);
}
if (result.roadmapUpdates.length > 0) {
summaryParts.push(`Updated ${result.roadmapUpdates.length} roadmap phases`);
}
result.summary = summaryParts.length > 0 ? summaryParts.join(', ') : 'Spec is already up to date';
// Create notification
const notificationService = getNotificationService();
await notificationService.createNotification({
type: 'spec_regeneration_complete',
title: 'Spec Sync Complete',
message: result.summary,
projectPath,
});
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_complete',
message: `Spec sync complete! ${result.summary}`,
projectPath,
});
logger.info('========== syncSpec() completed ==========');
logger.info('Summary:', result.summary);
return result;
}
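
The tech stack update in `syncSpec` boils down to a case-insensitive set difference between the spec's current technologies and the AI-reported list. A standalone sketch of that comparison — the `diffTechStack` name is illustrative; the real code inlines this logic:

```typescript
// Case-insensitive diff of two technology lists, as used when merging
// the AI's analysis back into the spec's technology stack.
function diffTechStack(
  current: string[],
  next: string[]
): { added: string[]; removed: string[] } {
  const currentSet = new Set(current.map((t) => t.toLowerCase()));
  const nextSet = new Set(next.map((t) => t.toLowerCase()));
  return {
    // In the new list but not the old: report as added.
    added: next.filter((t) => !currentSet.has(t.toLowerCase())),
    // In the old list but not the new: report as removed.
    removed: current.filter((t) => !nextSet.has(t.toLowerCase())),
  };
}

const diff = diffTechStack(['React', 'Webpack'], ['react', 'Vite']);
console.log(diff); // → { added: [ 'Vite' ], removed: [ 'Webpack' ] }
```

The lowercase sets are used only for membership tests, so reported names keep their original casing, and a casing-only change (`React` vs `react`) is not flagged as a stack change.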

View File

@@ -3,12 +3,31 @@
*/
import { createLogger } from '@automaker/utils';
import { ensureAutomakerDir, getAutomakerDir } from '@automaker/platform';
import * as secureFs from '../../lib/secure-fs.js';
import path from 'path';
import type { BacklogPlanResult } from '@automaker/types';
const logger = createLogger('BacklogPlan');
// State for tracking running generation
let isRunning = false;
let currentAbortController: AbortController | null = null;
let runningDetails: {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null = null;
const BACKLOG_PLAN_FILENAME = 'backlog-plan.json';
export interface StoredBacklogPlan {
savedAt: string;
prompt: string;
model?: string;
result: BacklogPlanResult;
}
export function getBacklogPlanStatus(): { isRunning: boolean } {
return { isRunning };
@@ -16,11 +35,67 @@ export function getBacklogPlanStatus(): { isRunning: boolean } {
export function setRunningState(running: boolean, abortController?: AbortController | null): void {
isRunning = running;
if (!running) {
runningDetails = null;
}
if (abortController !== undefined) {
currentAbortController = abortController;
}
}
export function setRunningDetails(
details: {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null
): void {
runningDetails = details;
}
export function getRunningDetails(): {
projectPath: string;
prompt: string;
model?: string;
startedAt: string;
} | null {
return runningDetails;
}
function getBacklogPlanPath(projectPath: string): string {
return path.join(getAutomakerDir(projectPath), BACKLOG_PLAN_FILENAME);
}
export async function saveBacklogPlan(projectPath: string, plan: StoredBacklogPlan): Promise<void> {
await ensureAutomakerDir(projectPath);
const filePath = getBacklogPlanPath(projectPath);
await secureFs.writeFile(filePath, JSON.stringify(plan, null, 2), 'utf-8');
}
export async function loadBacklogPlan(projectPath: string): Promise<StoredBacklogPlan | null> {
try {
const filePath = getBacklogPlanPath(projectPath);
const raw = await secureFs.readFile(filePath, 'utf-8');
const parsed = JSON.parse(raw as string) as StoredBacklogPlan;
if (!Array.isArray(parsed?.result?.changes)) {
return null;
}
return parsed;
} catch {
return null;
}
}
export async function clearBacklogPlan(projectPath: string): Promise<void> {
try {
const filePath = getBacklogPlanPath(projectPath);
await secureFs.unlink(filePath);
} catch {
// ignore missing file
}
}
export function getAbortController(): AbortController | null {
return currentAbortController;
}
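
`loadBacklogPlan` treats every failure mode the same way — missing file, invalid JSON, or a payload without a `result.changes` array all yield `null` rather than an exception. A minimal sketch of just the parse-and-validate step, with simplified types (the real `StoredBacklogPlan` also carries `prompt` and `model`):

```typescript
// Simplified stored-plan shape for illustration.
interface StoredBacklogPlan {
  savedAt: string;
  result: { changes: unknown[] };
}

// Defensive parse: any parse or shape failure means "no saved plan".
function parseStoredPlan(raw: string): StoredBacklogPlan | null {
  try {
    const parsed = JSON.parse(raw) as StoredBacklogPlan;
    // Optional chaining guards against null or non-object payloads.
    if (!Array.isArray(parsed?.result?.changes)) return null;
    return parsed;
  } catch {
    return null;
  }
}

console.log(parseStoredPlan('not json')); // → null
console.log(parseStoredPlan('{"savedAt":"now","result":{"changes":[]}}')?.savedAt); // → "now"
```

Returning `null` instead of throwing lets the status endpoint report "no saved plan" without a try/catch at every call site.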

View File

@@ -17,7 +17,13 @@ import { resolvePhaseModel } from '@automaker/model-resolver';
import { FeatureLoader } from '../../services/feature-loader.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
-import { logger, setRunningState, getErrorMessage } from './common.js';
+import {
+logger,
+setRunningState,
+setRunningDetails,
+getErrorMessage,
+saveBacklogPlan,
+} from './common.js';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
@@ -200,6 +206,13 @@ ${userPrompt}`;
// Parse the response
const result = parsePlanResponse(responseText);
await saveBacklogPlan(projectPath, {
savedAt: new Date().toISOString(),
prompt,
model: effectiveModel,
result,
});
events.emit('backlog-plan:event', {
type: 'backlog_plan_complete',
result,
@@ -218,5 +231,6 @@ ${userPrompt}`;
throw error;
} finally {
setRunningState(false, null);
setRunningDetails(null);
}
}

View File

@@ -9,6 +9,7 @@ import { createGenerateHandler } from './routes/generate.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import { createApplyHandler } from './routes/apply.js';
import { createClearHandler } from './routes/clear.js';
import type { SettingsService } from '../../services/settings-service.js';
export function createBacklogPlanRoutes(
@@ -23,8 +24,9 @@ export function createBacklogPlanRoutes(
createGenerateHandler(events, settingsService)
);
router.post('/stop', createStopHandler());
-router.get('/status', createStatusHandler());
+router.get('/status', validatePathParams('projectPath'), createStatusHandler());
router.post('/apply', validatePathParams('projectPath'), createApplyHandler());
router.post('/clear', validatePathParams('projectPath'), createClearHandler());
return router;
}

View File

@@ -5,7 +5,7 @@
*/
import type { Request, Response } from 'express';
import type { BacklogPlanResult, BacklogChange, Feature } from '@automaker/types';
import { FeatureLoader } from '../../../services/feature-loader.js';
-import { getErrorMessage, logError, logger } from '../common.js';
+import { clearBacklogPlan, getErrorMessage, logError, logger } from '../common.js';
const featureLoader = new FeatureLoader();
@@ -147,6 +147,17 @@ export function createApplyHandler() {
}
}
// Clear the plan before responding
try {
await clearBacklogPlan(projectPath);
} catch (error) {
logger.warn(
`[BacklogPlan] Failed to clear backlog plan after apply:`,
getErrorMessage(error)
);
// Don't throw - operation succeeded, just cleanup failed
}
res.json({
success: true,
appliedChanges,

View File

@@ -0,0 +1,25 @@
/**
* POST /clear endpoint - Clear saved backlog plan
*/
import type { Request, Response } from 'express';
import { clearBacklogPlan, getErrorMessage, logError } from '../common.js';
export function createClearHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body as { projectPath: string };
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
await clearBacklogPlan(projectPath);
res.json({ success: true });
} catch (error) {
logError(error, 'Clear backlog plan failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -4,7 +4,13 @@
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
-import { getBacklogPlanStatus, setRunningState, getErrorMessage, logError } from '../common.js';
+import {
+getBacklogPlanStatus,
+setRunningState,
+setRunningDetails,
+getErrorMessage,
+logError,
+} from '../common.js';
import { generateBacklogPlan } from '../generate-plan.js';
import type { SettingsService } from '../../../services/settings-service.js';
@@ -37,6 +43,12 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
}
setRunningState(true);
setRunningDetails({
projectPath,
prompt,
model,
startedAt: new Date().toISOString(),
});
const abortController = new AbortController();
setRunningState(true, abortController);
@@ -51,6 +63,7 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
})
.finally(() => {
setRunningState(false, null);
setRunningDetails(null);
});
res.json({ success: true });

View File

@@ -3,13 +3,15 @@
*/
import type { Request, Response } from 'express';
-import { getBacklogPlanStatus, getErrorMessage, logError } from '../common.js';
+import { getBacklogPlanStatus, loadBacklogPlan, getErrorMessage, logError } from '../common.js';
export function createStatusHandler() {
-return async (_req: Request, res: Response): Promise<void> => {
+return async (req: Request, res: Response): Promise<void> => {
try {
const status = getBacklogPlanStatus();
-res.json({ success: true, ...status });
+const projectPath = typeof req.query.projectPath === 'string' ? req.query.projectPath : '';
+const savedPlan = projectPath ? await loadBacklogPlan(projectPath) : null;
+res.json({ success: true, ...status, savedPlan });
} catch (error) {
logError(error, 'Get backlog plan status failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
View File

@@ -3,7 +3,13 @@
*/
import type { Request, Response } from 'express';
-import { getAbortController, setRunningState, getErrorMessage, logError } from '../common.js';
+import {
+getAbortController,
+setRunningState,
+setRunningDetails,
+getErrorMessage,
+logError,
+} from '../common.js';
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
@@ -12,6 +18,7 @@ export function createStopHandler() {
if (abortController) {
abortController.abort();
setRunningState(false, null);
setRunningDetails(null);
}
res.json({ success: true });
} catch (error) {

View File

@@ -0,0 +1,78 @@
/**
* Common utilities for code-review routes
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
const logger = createLogger('CodeReview');
// Re-export shared utilities
export { getErrorMessageShared as getErrorMessage };
export const logError = createLogError(logger);
/**
* Review state interface
*/
interface ReviewState {
isRunning: boolean;
abortController: AbortController | null;
projectPath: string | null;
}
/**
* Shared state for code review operations
* Using an object to avoid mutable `let` exports which can cause issues in ES modules
*/
const reviewState: ReviewState = {
isRunning: false,
abortController: null,
projectPath: null,
};
/**
* Check if a review is currently running
*/
export function isRunning(): boolean {
return reviewState.isRunning;
}
/**
* Get the current abort controller (for stopping reviews)
*/
export function getAbortController(): AbortController | null {
return reviewState.abortController;
}
/**
* Get the current project path being reviewed
*/
export function getCurrentProjectPath(): string | null {
return reviewState.projectPath;
}
/**
* Set the running state for code review operations
*/
export function setRunningState(
running: boolean,
controller: AbortController | null = null,
projectPath: string | null = null
): void {
reviewState.isRunning = running;
reviewState.abortController = controller;
reviewState.projectPath = projectPath;
}
/**
* Get the current review status
*/
export function getReviewStatus(): {
isRunning: boolean;
projectPath: string | null;
} {
return {
isRunning: reviewState.isRunning,
projectPath: reviewState.projectPath,
};
}
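
The comment in the file above explains why review state lives in a single `reviewState` object behind accessor functions rather than exported mutable `let` bindings: ES module exports are live bindings that importers cannot reassign, and scattering three related `let` variables invites them drifting out of sync. A self-contained replica of the pattern — names mirror the file above, but this is an illustration, not the module itself:

```typescript
// Single mutable state object, private to the module.
interface ReviewState {
  isRunning: boolean;
  abortController: AbortController | null;
  projectPath: string | null;
}

const state: ReviewState = { isRunning: false, abortController: null, projectPath: null };

// All mutation goes through one function, so the three fields change together.
function setRunningState(
  running: boolean,
  controller: AbortController | null = null,
  projectPath: string | null = null
): void {
  state.isRunning = running;
  state.abortController = controller;
  state.projectPath = projectPath;
}

// Read-only snapshot for status endpoints.
function getReviewStatus(): { isRunning: boolean; projectPath: string | null } {
  return { isRunning: state.isRunning, projectPath: state.projectPath };
}

setRunningState(true, new AbortController(), '/tmp/project');
console.log(getReviewStatus()); // → { isRunning: true, projectPath: '/tmp/project' }
```

Note that `setRunningState(false)` also resets the controller and project path via the parameter defaults, which prevents a stale `AbortController` from surviving past the review it belonged to.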

View File

@@ -0,0 +1,40 @@
/**
* Code Review routes - HTTP API for triggering and managing code reviews
*
* Provides endpoints for:
* - Triggering code reviews on projects
* - Checking review status
* - Stopping in-progress reviews
*
* Uses the CodeReviewService for actual review execution with AI providers.
*/
import { Router } from 'express';
import type { CodeReviewService } from '../../services/code-review-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createTriggerHandler } from './routes/trigger.js';
import { createStatusHandler } from './routes/status.js';
import { createStopHandler } from './routes/stop.js';
import { createProvidersHandler } from './routes/providers.js';
export function createCodeReviewRoutes(codeReviewService: CodeReviewService): Router {
const router = Router();
// POST /trigger - Start a new code review
router.post(
'/trigger',
validatePathParams('projectPath'),
createTriggerHandler(codeReviewService)
);
// GET /status - Get current review status
router.get('/status', createStatusHandler());
// POST /stop - Stop current review
router.post('/stop', createStopHandler());
// GET /providers - Get available providers and their status
router.get('/providers', createProvidersHandler(codeReviewService));
return router;
}

View File

@@ -0,0 +1,38 @@
/**
* GET /providers endpoint - Get available code review providers
*
* Returns the status of all available AI providers that can be used for code reviews.
*/
import type { Request, Response } from 'express';
import type { CodeReviewService } from '../../../services/code-review-service.js';
import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
const logger = createLogger('CodeReview');
export function createProvidersHandler(codeReviewService: CodeReviewService) {
return async (req: Request, res: Response): Promise<void> => {
logger.debug('========== /providers endpoint called ==========');
try {
// Check if refresh is requested
const forceRefresh = req.query.refresh === 'true';
const providers = await codeReviewService.getProviderStatus(forceRefresh);
const bestProvider = await codeReviewService.getBestProvider();
res.json({
success: true,
providers,
recommended: bestProvider,
});
} catch (error) {
logError(error, 'Providers handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}

@@ -0,0 +1,32 @@
/**
* GET /status endpoint - Get current code review status
*
* Returns whether a code review is currently running and which project.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import { getReviewStatus, getErrorMessage, logError } from '../common.js';
const logger = createLogger('CodeReview');
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
logger.debug('========== /status endpoint called ==========');
try {
const status = getReviewStatus();
res.json({
success: true,
...status,
});
} catch (error) {
logError(error, 'Status handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}

@@ -0,0 +1,54 @@
/**
* POST /stop endpoint - Stop the current code review
*
* Aborts any running code review operation.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import {
isRunning,
getAbortController,
setRunningState,
getErrorMessage,
logError,
} from '../common.js';
const logger = createLogger('CodeReview');
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
logger.info('========== /stop endpoint called ==========');
try {
if (!isRunning()) {
res.json({
success: true,
message: 'No code review is currently running',
});
return;
}
// Abort the current operation
const abortController = getAbortController();
if (abortController) {
abortController.abort();
logger.info('Code review aborted');
}
// Reset state
setRunningState(false, null, null);
res.json({
success: true,
message: 'Code review stopped',
});
} catch (error) {
logError(error, 'Stop handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}

@@ -0,0 +1,188 @@
/**
* POST /trigger endpoint - Trigger a code review
*
* Starts an asynchronous code review on the specified project.
* Progress updates are streamed via WebSocket events.
*/
import type { Request, Response } from 'express';
import type { CodeReviewService } from '../../../services/code-review-service.js';
import type { CodeReviewCategory, ThinkingLevel, ModelId } from '@automaker/types';
import { createLogger } from '@automaker/utils';
import { isRunning, setRunningState, getErrorMessage, logError } from '../common.js';
const logger = createLogger('CodeReview');
/**
* Maximum number of files allowed per review request
*/
const MAX_FILES_PER_REQUEST = 100;
/**
* Maximum length for baseRef parameter
*/
const MAX_BASE_REF_LENGTH = 256;
/**
* Valid categories for code review
*/
const VALID_CATEGORIES: CodeReviewCategory[] = [
'tech_stack',
'security',
'code_quality',
'implementation',
'architecture',
'performance',
'testing',
'documentation',
];
/**
* Valid thinking levels
*/
const VALID_THINKING_LEVELS: ThinkingLevel[] = ['low', 'medium', 'high'];
interface TriggerRequestBody {
projectPath: string;
files?: string[];
baseRef?: string;
categories?: CodeReviewCategory[];
autoFix?: boolean;
model?: ModelId;
thinkingLevel?: ThinkingLevel;
}
/**
* Validate and sanitize the request body
*/
function validateRequestBody(body: TriggerRequestBody): { valid: boolean; error?: string } {
const { files, baseRef, categories, autoFix, thinkingLevel } = body;
// Validate files array
if (files !== undefined) {
if (!Array.isArray(files)) {
return { valid: false, error: 'files must be an array' };
}
if (files.length > MAX_FILES_PER_REQUEST) {
return { valid: false, error: `Maximum ${MAX_FILES_PER_REQUEST} files allowed per request` };
}
for (const file of files) {
if (typeof file !== 'string') {
return { valid: false, error: 'Each file must be a string' };
}
if (file.length > 500) {
return { valid: false, error: 'File path too long' };
}
}
}
// Validate baseRef
if (baseRef !== undefined) {
if (typeof baseRef !== 'string') {
return { valid: false, error: 'baseRef must be a string' };
}
if (baseRef.length > MAX_BASE_REF_LENGTH) {
return { valid: false, error: 'baseRef is too long' };
}
}
// Validate categories
if (categories !== undefined) {
if (!Array.isArray(categories)) {
return { valid: false, error: 'categories must be an array' };
}
for (const category of categories) {
if (!VALID_CATEGORIES.includes(category)) {
return { valid: false, error: `Invalid category: ${category}` };
}
}
}
// Validate autoFix
if (autoFix !== undefined && typeof autoFix !== 'boolean') {
return { valid: false, error: 'autoFix must be a boolean' };
}
// Validate thinkingLevel
if (thinkingLevel !== undefined) {
if (!VALID_THINKING_LEVELS.includes(thinkingLevel)) {
return { valid: false, error: `Invalid thinkingLevel: ${thinkingLevel}` };
}
}
return { valid: true };
}
export function createTriggerHandler(codeReviewService: CodeReviewService) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /trigger endpoint called ==========');
try {
const body = req.body as TriggerRequestBody;
const { projectPath, files, baseRef, categories, autoFix, model, thinkingLevel } = body;
// Validate required parameters
if (!projectPath) {
res.status(400).json({
success: false,
error: 'projectPath is required',
});
return;
}
// SECURITY: Validate all input parameters
const validation = validateRequestBody(body);
if (!validation.valid) {
res.status(400).json({
success: false,
error: validation.error,
});
return;
}
// Check if a review is already running
if (isRunning()) {
res.status(409).json({
success: false,
error: 'A code review is already in progress',
});
return;
}
// Set up abort controller for cancellation
const abortController = new AbortController();
setRunningState(true, abortController, projectPath);
// Start the review in the background
codeReviewService
.executeReview({
projectPath,
files,
baseRef,
categories,
autoFix,
model,
thinkingLevel,
abortController,
})
.catch((error) => {
logError(error, 'Code review failed');
})
.finally(() => {
setRunningState(false, null, null);
});
// Return immediate response
res.json({
success: true,
message: 'Code review started',
});
} catch (error) {
logError(error, 'Trigger handler exception');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}
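The trigger handler above answers immediately and lets the review run in the background, guarded by a single AbortController and a 409 for concurrent requests. A stripped-down, self-contained sketch of that fire-and-forget pattern (the `executeReview` stand-in below is illustrative, not the real CodeReviewService):

```typescript
let running = false;
let controller: AbortController | null = null;

// Stand-in for CodeReviewService.executeReview: resolves unless aborted first.
function executeReview(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('review complete'), 50);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new Error('aborted'));
    });
  });
}

// Mirrors the handler: reject concurrent triggers, start work, respond at once.
function trigger(): { status: number; message: string } {
  if (running) {
    return { status: 409, message: 'A code review is already in progress' };
  }
  controller = new AbortController();
  running = true;
  executeReview(controller.signal)
    .catch(() => {
      // Aborted or failed; the real handler surfaces this via logError.
    })
    .finally(() => {
      running = false;
      controller = null;
    });
  // The HTTP response goes out before the review finishes.
  return { status: 200, message: 'Code review started' };
}

const first = trigger(); // accepted
const second = trigger(); // rejected while the first is still in flight
```

The `/stop` endpoint is then just `controller.abort()` plus a state reset, which is why the handler keeps the AbortController in shared module state.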

@@ -19,7 +19,10 @@ import { simpleQuery } from '../../../providers/simple-query-service.js';
 import * as secureFs from '../../../lib/secure-fs.js';
 import * as path from 'path';
 import type { SettingsService } from '../../../services/settings-service.js';
-import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
+import {
+  getAutoLoadClaudeMdSetting,
+  getPromptCustomization,
+} from '../../../lib/settings-helpers.js';
 
 const logger = createLogger('DescribeFile');
@@ -130,11 +133,12 @@ export function createDescribeFileHandler(
       // Get the filename for context
       const fileName = path.basename(resolvedPath);
 
+      // Get customized prompts from settings
+      const prompts = await getPromptCustomization(settingsService, '[DescribeFile]');
+
       // Build prompt with file content passed as structured data
       // The file content is included directly, not via tool invocation
-      const prompt = `Analyze the following file and provide a 1-2 sentence description suitable for use as context in an AI coding assistant. Focus on what the file contains, its purpose, and why an AI agent might want to use this context in the future (e.g., "API documentation for the authentication endpoints", "Configuration file for database connections", "Coding style guidelines for the project").
-
-Respond with ONLY the description text, no additional formatting, preamble, or explanation.
+      const prompt = `${prompts.contextDescription.describeFilePrompt}
 
 File: ${fileName}${truncated ? ' (truncated)' : ''}

@@ -19,7 +19,10 @@ import { simpleQuery } from '../../../providers/simple-query-service.js';
 import * as secureFs from '../../../lib/secure-fs.js';
 import * as path from 'path';
 import type { SettingsService } from '../../../services/settings-service.js';
-import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
+import {
+  getAutoLoadClaudeMdSetting,
+  getPromptCustomization,
+} from '../../../lib/settings-helpers.js';
 
 const logger = createLogger('DescribeImage');
@@ -278,12 +281,11 @@ export function createDescribeImageHandler(
       logger.info(`[${requestId}] Using model: ${model}`);
 
-      // Build the instruction text
-      const instructionText =
-        `Describe this image in 1-2 sentences suitable for use as context in an AI coding assistant. ` +
-        `Focus on what the image shows and its purpose (e.g., "UI mockup showing login form with email/password fields", ` +
-        `"Architecture diagram of microservices", "Screenshot of error message in terminal").\n\n` +
-        `Respond with ONLY the description text, no additional formatting, preamble, or explanation.`;
+      // Get customized prompts from settings
+      const prompts = await getPromptCustomization(settingsService, '[DescribeImage]');
+
+      // Build the instruction text from centralized prompts
+      const instructionText = prompts.contextDescription.describeImagePrompt;
 
       // Build prompt based on provider capability
       // Some providers (like Cursor) may not support image content blocks

@@ -0,0 +1,19 @@
/**
* Common utilities for event history routes
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
/** Logger instance for event history operations */
export const logger = createLogger('EventHistory');
/**
* Extract user-friendly error message from error objects
*/
export { getErrorMessageShared as getErrorMessage };
/**
* Log error with automatic logger binding
*/
export const logError = createLogError(logger);

@@ -0,0 +1,68 @@
/**
* Event History routes - HTTP API for event history management
*
* Provides endpoints for:
* - Listing events with filtering
* - Getting individual event details
* - Deleting events
* - Clearing all events
* - Replaying events to test hooks
*
* Mounted at /api/event-history in the main server.
*/
import { Router } from 'express';
import type { EventHistoryService } from '../../services/event-history-service.js';
import type { SettingsService } from '../../services/settings-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createListHandler } from './routes/list.js';
import { createGetHandler } from './routes/get.js';
import { createDeleteHandler } from './routes/delete.js';
import { createClearHandler } from './routes/clear.js';
import { createReplayHandler } from './routes/replay.js';
/**
* Create event history router with all endpoints
*
* Endpoints:
* - POST /list - List events with optional filtering
* - POST /get - Get a single event by ID
* - POST /delete - Delete an event by ID
* - POST /clear - Clear all events for a project
* - POST /replay - Replay an event to trigger hooks
*
* @param eventHistoryService - Instance of EventHistoryService
* @param settingsService - Instance of SettingsService (for replay)
* @returns Express Router configured with all event history endpoints
*/
export function createEventHistoryRoutes(
eventHistoryService: EventHistoryService,
settingsService: SettingsService
): Router {
const router = Router();
// List events with filtering
router.post('/list', validatePathParams('projectPath'), createListHandler(eventHistoryService));
// Get single event
router.post('/get', validatePathParams('projectPath'), createGetHandler(eventHistoryService));
// Delete event
router.post(
'/delete',
validatePathParams('projectPath'),
createDeleteHandler(eventHistoryService)
);
// Clear all events
router.post('/clear', validatePathParams('projectPath'), createClearHandler(eventHistoryService));
// Replay event
router.post(
'/replay',
validatePathParams('projectPath'),
createReplayHandler(eventHistoryService, settingsService)
);
return router;
}

@@ -0,0 +1,33 @@
/**
* POST /api/event-history/clear - Clear all events for a project
*
* Request body: { projectPath: string }
* Response: { success: true, cleared: number }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createClearHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body as { projectPath: string };
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const cleared = await eventHistoryService.clearEvents(projectPath);
res.json({
success: true,
cleared,
});
} catch (error) {
logError(error, 'Clear events failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

@@ -0,0 +1,43 @@
/**
* POST /api/event-history/delete - Delete an event by ID
*
* Request body: { projectPath: string, eventId: string }
* Response: { success: true } or { success: false, error: string }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createDeleteHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, eventId } = req.body as {
projectPath: string;
eventId: string;
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!eventId || typeof eventId !== 'string') {
res.status(400).json({ success: false, error: 'eventId is required' });
return;
}
const deleted = await eventHistoryService.deleteEvent(projectPath, eventId);
if (!deleted) {
res.status(404).json({ success: false, error: 'Event not found' });
return;
}
res.json({ success: true });
} catch (error) {
logError(error, 'Delete event failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

@@ -0,0 +1,46 @@
/**
* POST /api/event-history/get - Get a single event by ID
*
* Request body: { projectPath: string, eventId: string }
* Response: { success: true, event: StoredEvent } or { success: false, error: string }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createGetHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, eventId } = req.body as {
projectPath: string;
eventId: string;
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!eventId || typeof eventId !== 'string') {
res.status(400).json({ success: false, error: 'eventId is required' });
return;
}
const event = await eventHistoryService.getEvent(projectPath, eventId);
if (!event) {
res.status(404).json({ success: false, error: 'Event not found' });
return;
}
res.json({
success: true,
event,
});
} catch (error) {
logError(error, 'Get event failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

@@ -0,0 +1,53 @@
/**
* POST /api/event-history/list - List events for a project
*
* Request body: {
* projectPath: string,
* filter?: {
* trigger?: EventHookTrigger,
* featureId?: string,
* since?: string,
* until?: string,
* limit?: number,
* offset?: number
* }
* }
* Response: { success: true, events: StoredEventSummary[], total: number }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import type { EventHistoryFilter } from '@automaker/types';
import { getErrorMessage, logError } from '../common.js';
export function createListHandler(eventHistoryService: EventHistoryService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, filter } = req.body as {
projectPath: string;
filter?: EventHistoryFilter;
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const events = await eventHistoryService.getEvents(projectPath, filter);
const total = await eventHistoryService.getEventCount(projectPath, {
...filter,
limit: undefined,
offset: undefined,
});
res.json({
success: true,
events,
total,
});
} catch (error) {
logError(error, 'List events failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

@@ -0,0 +1,234 @@
/**
* POST /api/event-history/replay - Replay an event to trigger hooks
*
* Request body: {
* projectPath: string,
* eventId: string,
* hookIds?: string[] // Optional: specific hooks to run (if not provided, runs all enabled matching hooks)
* }
* Response: { success: true, result: EventReplayResult }
*/
import type { Request, Response } from 'express';
import type { EventHistoryService } from '../../../services/event-history-service.js';
import type { SettingsService } from '../../../services/settings-service.js';
import type { EventReplayResult, EventReplayHookResult, EventHook } from '@automaker/types';
import { exec } from 'child_process';
import { promisify } from 'util';
import { getErrorMessage, logError, logger } from '../common.js';
const execAsync = promisify(exec);
/** Default timeout for shell commands (30 seconds) */
const DEFAULT_SHELL_TIMEOUT = 30000;
/** Default timeout for HTTP requests (10 seconds) */
const DEFAULT_HTTP_TIMEOUT = 10000;
interface HookContext {
featureId?: string;
featureName?: string;
projectPath?: string;
projectName?: string;
error?: string;
errorType?: string;
timestamp: string;
eventType: string;
}
/**
* Substitute {{variable}} placeholders in a string
*/
function substituteVariables(template: string, context: HookContext): string {
return template.replace(/\{\{(\w+)\}\}/g, (match, variable) => {
const value = context[variable as keyof HookContext];
if (value === undefined || value === null) {
return '';
}
return String(value);
});
}
/**
* Execute a single hook and return the result
*/
async function executeHook(hook: EventHook, context: HookContext): Promise<EventReplayHookResult> {
const hookName = hook.name || hook.id;
const startTime = Date.now();
try {
if (hook.action.type === 'shell') {
const command = substituteVariables(hook.action.command, context);
const timeout = hook.action.timeout || DEFAULT_SHELL_TIMEOUT;
logger.info(`Replaying shell hook "${hookName}": ${command}`);
await execAsync(command, {
timeout,
maxBuffer: 1024 * 1024,
});
return {
hookId: hook.id,
hookName: hook.name,
success: true,
durationMs: Date.now() - startTime,
};
} else if (hook.action.type === 'http') {
const url = substituteVariables(hook.action.url, context);
const method = hook.action.method || 'POST';
const headers: Record<string, string> = {
'Content-Type': 'application/json',
};
if (hook.action.headers) {
for (const [key, value] of Object.entries(hook.action.headers)) {
headers[key] = substituteVariables(value, context);
}
}
let body: string | undefined;
if (hook.action.body) {
body = substituteVariables(hook.action.body, context);
} else if (method !== 'GET') {
body = JSON.stringify({
eventType: context.eventType,
timestamp: context.timestamp,
featureId: context.featureId,
projectPath: context.projectPath,
projectName: context.projectName,
error: context.error,
});
}
logger.info(`Replaying HTTP hook "${hookName}": ${method} ${url}`);
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), DEFAULT_HTTP_TIMEOUT);
const response = await fetch(url, {
method,
headers,
body: method !== 'GET' ? body : undefined,
signal: controller.signal,
});
clearTimeout(timeoutId);
if (!response.ok) {
return {
hookId: hook.id,
hookName: hook.name,
success: false,
error: `HTTP ${response.status}: ${response.statusText}`,
durationMs: Date.now() - startTime,
};
}
return {
hookId: hook.id,
hookName: hook.name,
success: true,
durationMs: Date.now() - startTime,
};
}
return {
hookId: hook.id,
hookName: hook.name,
success: false,
error: 'Unknown hook action type',
durationMs: Date.now() - startTime,
};
} catch (error) {
const errorMessage =
error instanceof Error
? error.name === 'AbortError'
? 'Request timed out'
: error.message
: String(error);
return {
hookId: hook.id,
hookName: hook.name,
success: false,
error: errorMessage,
durationMs: Date.now() - startTime,
};
}
}
export function createReplayHandler(
eventHistoryService: EventHistoryService,
settingsService: SettingsService
) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, eventId, hookIds } = req.body as {
projectPath: string;
eventId: string;
hookIds?: string[];
};
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!eventId || typeof eventId !== 'string') {
res.status(400).json({ success: false, error: 'eventId is required' });
return;
}
// Get the event
const event = await eventHistoryService.getEvent(projectPath, eventId);
if (!event) {
res.status(404).json({ success: false, error: 'Event not found' });
return;
}
// Get hooks from settings
const settings = await settingsService.getGlobalSettings();
let hooks = settings.eventHooks || [];
// Filter to matching trigger and enabled hooks
hooks = hooks.filter((h) => h.enabled && h.trigger === event.trigger);
// If specific hook IDs requested, filter to those
if (hookIds && hookIds.length > 0) {
hooks = hooks.filter((h) => hookIds.includes(h.id));
}
// Build context for variable substitution
const context: HookContext = {
featureId: event.featureId,
featureName: event.featureName,
projectPath: event.projectPath,
projectName: event.projectName,
error: event.error,
errorType: event.errorType,
timestamp: event.timestamp,
eventType: event.trigger,
};
// Execute all hooks in parallel
const hookResults = await Promise.all(hooks.map((hook) => executeHook(hook, context)));
const result: EventReplayResult = {
eventId,
hooksTriggered: hooks.length,
hookResults,
};
logger.info(`Replayed event ${eventId}: ${hooks.length} hooks triggered`);
res.json({
success: true,
result,
});
} catch (error) {
logError(error, 'Replay event failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
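Replay reuses the same `{{variable}}` placeholder substitution as the live hook path: known context values are interpolated, and unknown or missing variables collapse to an empty string instead of throwing. The mechanism in isolation (HookContext trimmed to a few fields for brevity):

```typescript
interface HookContext {
  featureId?: string;
  projectName?: string;
  eventType: string;
  timestamp: string;
}

// Replace {{variable}} placeholders; unknown or missing values become ''.
function substituteVariables(template: string, context: HookContext): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, variable) => {
    const value = context[variable as keyof HookContext];
    return value === undefined || value === null ? '' : String(value);
  });
}

// featureId is absent here, so its placeholder becomes an empty string.
const cmd = substituteVariables(
  'notify --event {{eventType}} --feature {{featureId}}',
  { eventType: 'feature:created', timestamp: '2026-01-24T00:00:00Z' }
);
```

Collapsing to `''` rather than erroring means a replayed event missing optional fields (no feature, no error) still produces a runnable command, at the cost of silently empty arguments.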

@@ -4,6 +4,8 @@
 import { Router } from 'express';
 import { FeatureLoader } from '../../services/feature-loader.js';
+import type { SettingsService } from '../../services/settings-service.js';
+import type { EventEmitter } from '../../lib/events.js';
 import { validatePathParams } from '../../middleware/validate-paths.js';
 import { createListHandler } from './routes/list.js';
 import { createGetHandler } from './routes/get.js';
@@ -15,12 +17,20 @@ import { createDeleteHandler } from './routes/delete.js';
 import { createAgentOutputHandler, createRawOutputHandler } from './routes/agent-output.js';
 import { createGenerateTitleHandler } from './routes/generate-title.js';
 
-export function createFeaturesRoutes(featureLoader: FeatureLoader): Router {
+export function createFeaturesRoutes(
+  featureLoader: FeatureLoader,
+  settingsService?: SettingsService,
+  events?: EventEmitter
+): Router {
   const router = Router();
 
   router.post('/list', validatePathParams('projectPath'), createListHandler(featureLoader));
   router.post('/get', validatePathParams('projectPath'), createGetHandler(featureLoader));
-  router.post('/create', validatePathParams('projectPath'), createCreateHandler(featureLoader));
+  router.post(
+    '/create',
+    validatePathParams('projectPath'),
+    createCreateHandler(featureLoader, events)
+  );
   router.post('/update', validatePathParams('projectPath'), createUpdateHandler(featureLoader));
   router.post(
     '/bulk-update',
@@ -35,7 +45,7 @@ export function createFeaturesRoutes(featureLoader: FeatureLoader): Router {
   router.post('/delete', validatePathParams('projectPath'), createDeleteHandler(featureLoader));
   router.post('/agent-output', createAgentOutputHandler(featureLoader));
   router.post('/raw-output', createRawOutputHandler(featureLoader));
-  router.post('/generate-title', createGenerateTitleHandler());
+  router.post('/generate-title', createGenerateTitleHandler(settingsService));
 
   return router;
 }

@@ -30,19 +30,27 @@ export function createBulkDeleteHandler(featureLoader: FeatureLoader) {
         return;
       }
 
-      const results = await Promise.all(
-        featureIds.map(async (featureId) => {
-          const success = await featureLoader.delete(projectPath, featureId);
-          if (success) {
-            return { featureId, success: true };
-          }
-          return {
-            featureId,
-            success: false,
-            error: 'Deletion failed. Check server logs for details.',
-          };
-        })
-      );
+      // Process in parallel batches of 20 for efficiency
+      const BATCH_SIZE = 20;
+      const results: BulkDeleteResult[] = [];
+
+      for (let i = 0; i < featureIds.length; i += BATCH_SIZE) {
+        const batch = featureIds.slice(i, i + BATCH_SIZE);
+        const batchResults = await Promise.all(
+          batch.map(async (featureId) => {
+            const success = await featureLoader.delete(projectPath, featureId);
+            if (success) {
+              return { featureId, success: true };
+            }
+            return {
+              featureId,
+              success: false,
+              error: 'Deletion failed. Check server logs for details.',
+            };
+          })
+        );
+        results.push(...batchResults);
+      }
 
       const successCount = results.reduce((count, r) => count + (r.success ? 1 : 0), 0);
       const failureCount = results.length - successCount;

@@ -43,17 +43,36 @@ export function createBulkUpdateHandler(featureLoader: FeatureLoader) {
       const results: BulkUpdateResult[] = [];
       const updatedFeatures: Feature[] = [];
 
-      for (const featureId of featureIds) {
-        try {
-          const updated = await featureLoader.update(projectPath, featureId, updates);
-          results.push({ featureId, success: true });
-          updatedFeatures.push(updated);
-        } catch (error) {
-          results.push({
-            featureId,
-            success: false,
-            error: getErrorMessage(error),
-          });
-        }
-      }
+      // Process in parallel batches of 20 for efficiency
+      const BATCH_SIZE = 20;
+      for (let i = 0; i < featureIds.length; i += BATCH_SIZE) {
+        const batch = featureIds.slice(i, i + BATCH_SIZE);
+        const batchResults = await Promise.all(
+          batch.map(async (featureId) => {
+            try {
+              const updated = await featureLoader.update(projectPath, featureId, updates);
+              return { featureId, success: true as const, feature: updated };
+            } catch (error) {
+              return {
+                featureId,
+                success: false as const,
+                error: getErrorMessage(error),
+              };
+            }
+          })
+        );
+        for (const result of batchResults) {
+          if (result.success) {
+            results.push({ featureId: result.featureId, success: true });
+            updatedFeatures.push(result.feature);
+          } else {
+            results.push({
+              featureId: result.featureId,
+              success: false,
+              error: result.error,
+            });
+          }
+        }
+      }

@@ -4,10 +4,11 @@
 import type { Request, Response } from 'express';
 import { FeatureLoader } from '../../../services/feature-loader.js';
+import type { EventEmitter } from '../../../lib/events.js';
 import type { Feature } from '@automaker/types';
 import { getErrorMessage, logError } from '../common.js';
 
-export function createCreateHandler(featureLoader: FeatureLoader) {
+export function createCreateHandler(featureLoader: FeatureLoader, events?: EventEmitter) {
   return async (req: Request, res: Response): Promise<void> => {
     try {
       const { projectPath, feature } = req.body as {
@@ -23,7 +24,30 @@ export function createCreateHandler(featureLoader: FeatureLoader) {
         return;
       }
 
+      // Check for duplicate title if title is provided
+      if (feature.title && feature.title.trim()) {
+        const duplicate = await featureLoader.findDuplicateTitle(projectPath, feature.title);
+        if (duplicate) {
+          res.status(409).json({
+            success: false,
+            error: `A feature with title "${feature.title}" already exists`,
+            duplicateFeatureId: duplicate.id,
+          });
+          return;
+        }
+      }
+
       const created = await featureLoader.create(projectPath, feature);
+
+      // Emit feature_created event for hooks
+      if (events) {
+        events.emit('feature:created', {
+          featureId: created.id,
+          featureName: created.name,
+          projectPath,
+        });
+      }
+
       res.json({ success: true, feature: created });
     } catch (error) {
       logError(error, 'Create feature failed');
@@ -9,6 +9,8 @@ import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import { CLAUDE_MODEL_MAP } from '@automaker/model-resolver';
import { simpleQuery } from '../../../providers/simple-query-service.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getPromptCustomization } from '../../../lib/settings-helpers.js';
const logger = createLogger('GenerateTitle');
@@ -26,16 +28,9 @@ interface GenerateTitleErrorResponse {
error: string;
}
const SYSTEM_PROMPT = `You are a title generator. Your task is to create a concise, descriptive title (5-10 words max) for a software feature based on its description.

Rules:
- Output ONLY the title, nothing else
- Keep it short and action-oriented (e.g., "Add dark mode toggle", "Fix login validation")
- Start with a verb when possible (Add, Fix, Update, Implement, Create, etc.)
- No quotes, periods, or extra formatting
- Capture the essence of the feature in a scannable way`;

export function createGenerateTitleHandler(
settingsService?: SettingsService
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
try {
const { description } = req.body as GenerateTitleRequestBody;
@@ -61,11 +56,15 @@ export function createGenerateTitleHandler(): (req: Request, res: Response) => P
logger.info(`Generating title for description: ${trimmedDescription.substring(0, 50)}...`);
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[GenerateTitle]');
const systemPrompt = prompts.titleGeneration.systemPrompt;
const userPrompt = `Generate a concise title for this feature:\n\n${trimmedDescription}`;
// Use simpleQuery - provider abstraction handles all the streaming/extraction
const result = await simpleQuery({
prompt: `${systemPrompt}\n\n${userPrompt}`,
model: CLAUDE_MODEL_MAP.haiku,
cwd: process.cwd(),
maxTurns: 1,

View File

@@ -4,8 +4,14 @@
import type { Request, Response } from 'express';
import { FeatureLoader } from '../../../services/feature-loader.js';
import type { Feature, FeatureStatus } from '@automaker/types';
import { getErrorMessage, logError } from '../common.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('features/update');
// Statuses that should trigger syncing to app_spec.txt
const SYNC_TRIGGER_STATUSES: FeatureStatus[] = ['verified', 'completed'];
export function createUpdateHandler(featureLoader: FeatureLoader) {
return async (req: Request, res: Response): Promise<void> => {
@@ -34,6 +40,28 @@ export function createUpdateHandler(featureLoader: FeatureLoader) {
return;
}
// Check for duplicate title if title is being updated
if (updates.title && updates.title.trim()) {
const duplicate = await featureLoader.findDuplicateTitle(
projectPath,
updates.title,
featureId // Exclude the current feature from duplicate check
);
if (duplicate) {
res.status(409).json({
success: false,
error: `A feature with title "${updates.title}" already exists`,
duplicateFeatureId: duplicate.id,
});
return;
}
}
// Get the current feature to detect status changes
const currentFeature = await featureLoader.get(projectPath, featureId);
const previousStatus = currentFeature?.status as FeatureStatus | undefined;
const newStatus = updates.status as FeatureStatus | undefined;
const updated = await featureLoader.update(
projectPath,
featureId,
@@ -42,6 +70,22 @@ export function createUpdateHandler(featureLoader: FeatureLoader) {
enhancementMode,
preEnhancementDescription
);
// Trigger sync to app_spec.txt when status changes to verified or completed
if (newStatus && SYNC_TRIGGER_STATUSES.includes(newStatus) && previousStatus !== newStatus) {
try {
const synced = await featureLoader.syncFeatureToAppSpec(projectPath, updated);
if (synced) {
logger.info(
`Synced feature "${updated.title || updated.id}" to app_spec.txt on status change to ${newStatus}`
);
}
} catch (syncError) {
// Log the sync error but don't fail the update operation
logger.error(`Failed to sync feature to app_spec.txt:`, syncError);
}
}
res.json({ success: true, feature: updated });
} catch (error) {
logError(error, 'Update feature failed');
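The sync trigger above fires only on an actual transition into a trigger status. A minimal sketch of that gate — the `FeatureStatus` union here is illustrative; the real type comes from `@automaker/types`:

```typescript
// Illustrative subset of FeatureStatus; the real union lives in @automaker/types.
type FeatureStatus = 'backlog' | 'in_progress' | 'completed' | 'verified';

// Statuses that trigger syncing to app_spec.txt (mirrors SYNC_TRIGGER_STATUSES).
const SYNC_TRIGGER_STATUSES: FeatureStatus[] = ['verified', 'completed'];

// Sync only when the update sets a trigger status AND the status actually changed.
function shouldSyncToAppSpec(
  previous: FeatureStatus | undefined,
  next: FeatureStatus | undefined
): boolean {
  return !!next && SYNC_TRIGGER_STATUSES.includes(next) && previous !== next;
}
```

Note that a sync failure is only logged — the feature update itself still succeeds.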

View File

@@ -30,11 +30,11 @@ import { writeValidation } from '../../../lib/validation-storage.js';
import { streamingQuery } from '../../../providers/simple-query-service.js';
import {
issueValidationSchema,
buildValidationPrompt,
ValidationComment,
ValidationLinkedPR,
} from './validation-schema.js';
import { getPromptCustomization } from '../../../lib/settings-helpers.js';
import {
trySetValidationRunning,
clearValidationStatus,
@@ -117,13 +117,17 @@ async function runValidation(
let responseText = '';
// Get customized prompts from settings
const prompts = await getPromptCustomization(settingsService, '[ValidateIssue]');
const issueValidationSystemPrompt = prompts.issueValidation.systemPrompt;
// Determine if we should use structured output (Claude/Codex support it, Cursor/OpenCode don't)
const useStructuredOutput = isClaudeModel(model) || isCodexModel(model);
// Build the final prompt - for Cursor, include system prompt and JSON schema instructions
let finalPrompt = basePrompt;
if (!useStructuredOutput) {
finalPrompt = `${issueValidationSystemPrompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
@@ -167,7 +171,7 @@ ${basePrompt}`;
prompt: finalPrompt,
model: model as string,
cwd: projectPath,
systemPrompt: useStructuredOutput ? issueValidationSystemPrompt : undefined,
abortController,
thinkingLevel: effectiveThinkingLevel,
reasoningEffort: effectiveReasoningEffort,
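The branching above routes the system prompt differently per provider family: structured-output providers receive it as a separate `systemPrompt` field, while the rest get it inlined ahead of the base prompt. A hedged sketch of that routing (names here are illustrative, not the actual `streamingQuery` options type):

```typescript
// Illustrative shape for the prompt portion of a streamingQuery call.
interface QueryPrompt {
  prompt: string;
  systemPrompt?: string;
}

// Providers with structured output (Claude/Codex) take the system prompt
// separately; others (Cursor/OpenCode) get it prepended to the base prompt.
function buildValidationQuery(
  basePrompt: string,
  systemPrompt: string,
  useStructuredOutput: boolean
): QueryPrompt {
  if (useStructuredOutput) {
    return { prompt: basePrompt, systemPrompt };
  }
  return { prompt: `${systemPrompt}\n\n${basePrompt}` };
}
```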

View File

@@ -1,8 +1,11 @@
/**
* Issue Validation Schema and Prompt Building
*
* Defines the JSON schema for Claude's structured output and
* helper functions for building validation prompts.
*
* Note: The system prompt is now centralized in @automaker/prompts
* and accessed via getPromptCustomization() in validate-issue.ts
*/
/**
@@ -82,76 +85,6 @@ export const issueValidationSchema = {
additionalProperties: false,
} as const;
/**
* System prompt that guides Claude in validating GitHub issues.
* Instructs the model to use read-only tools to analyze the codebase.
*/
export const ISSUE_VALIDATION_SYSTEM_PROMPT = `You are an expert code analyst validating GitHub issues against a codebase.
Your task is to analyze a GitHub issue and determine if it's valid by scanning the codebase.
## Validation Process
1. **Read the issue carefully** - Understand what is being reported or requested
2. **Search the codebase** - Use Glob to find relevant files by pattern, Grep to search for keywords
3. **Examine the code** - Use Read to look at the actual implementation in relevant files
4. **Check linked PRs** - If there are linked pull requests, use \`gh pr diff <PR_NUMBER>\` to review the changes
5. **Form your verdict** - Based on your analysis, determine if the issue is valid
## Verdicts
- **valid**: The issue describes a real problem that exists in the codebase, or a clear feature request that can be implemented. The referenced files/components exist and the issue is actionable.
- **invalid**: The issue describes behavior that doesn't exist, references non-existent files or components, is based on a misunderstanding of the code, or the described "bug" is actually expected behavior.
- **needs_clarification**: The issue lacks sufficient detail to verify. Specify what additional information is needed in the missingInfo field.
## For Bug Reports, Check:
- Do the referenced files/components exist?
- Does the code match what the issue describes?
- Is the described behavior actually a bug or expected?
- Can you locate the code that would cause the reported issue?
## For Feature Requests, Check:
- Does the feature already exist?
- Is the implementation location clear?
- Is the request technically feasible given the codebase structure?
## Analyzing Linked Pull Requests
When an issue has linked PRs (especially open ones), you MUST analyze them:
1. **Run \`gh pr diff <PR_NUMBER>\`** to see what changes the PR makes
2. **Run \`gh pr view <PR_NUMBER>\`** to see PR description and status
3. **Evaluate if the PR fixes the issue** - Does the diff address the reported problem?
4. **Provide a recommendation**:
- \`wait_for_merge\`: The PR appears to fix the issue correctly. No additional work needed - just wait for it to be merged.
- \`pr_needs_work\`: The PR attempts to fix the issue but is incomplete or has problems.
- \`no_pr\`: No relevant PR exists for this issue.
5. **Include prAnalysis in your response** with:
- hasOpenPR: true/false
- prFixesIssue: true/false (based on diff analysis)
- prNumber: the PR number you analyzed
- prSummary: brief description of what the PR changes
- recommendation: one of the above values
## Response Guidelines
- **Always include relatedFiles** when you find relevant code
- **Set bugConfirmed to true** only if you can definitively confirm a bug exists in the code
- **Provide a suggestedFix** when you have a clear idea of how to address the issue
- **Use missingInfo** when the verdict is needs_clarification to list what's needed
- **Include prAnalysis** when there are linked PRs - this is critical for avoiding duplicate work
- **Set estimatedComplexity** to help prioritize:
- trivial: Simple text changes, one-line fixes
- simple: Small changes to one file
- moderate: Changes to multiple files or moderate logic changes
- complex: Significant refactoring or new feature implementation
- very_complex: Major architectural changes or cross-cutting concerns
Be thorough in your analysis but focus on files that are directly relevant to the issue.`;
/**
* Comment data structure for validation prompt
*/

View File

@@ -9,12 +9,14 @@ import type { Request, Response } from 'express';
export interface EnvironmentResponse {
isContainerized: boolean;
skipSandboxWarning?: boolean;
}
export function createEnvironmentHandler() {
return (_req: Request, res: Response): void => {
res.json({
isContainerized: process.env.IS_CONTAINERIZED === 'true',
skipSandboxWarning: process.env.AUTOMAKER_SKIP_SANDBOX_WARNING === 'true',
} satisfies EnvironmentResponse);
};
}
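Both flags follow the same convention: an environment variable is treated as enabled only when its value is exactly the string `'true'`. A minimal sketch of that rule:

```typescript
// Env vars are strings; only the exact value 'true' enables a flag,
// so 'TRUE', '1', or an unset variable all read as false.
function envFlag(value: string | undefined): boolean {
  return value === 'true';
}
```

This keeps the endpoint's booleans predictable regardless of how the container sets its environment.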

View File

@@ -0,0 +1,21 @@
/**
* Common utilities for notification routes
*
* Provides logger and error handling utilities shared across all notification endpoints.
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
/** Logger instance for notification-related operations */
export const logger = createLogger('Notifications');
/**
* Extract user-friendly error message from error objects
*/
export { getErrorMessageShared as getErrorMessage };
/**
* Log error with automatic logger binding
*/
export const logError = createLogError(logger);

View File

@@ -0,0 +1,62 @@
/**
* Notifications routes - HTTP API for project-level notifications
*
* Provides endpoints for:
* - Listing notifications
* - Getting unread count
* - Marking notifications as read
* - Dismissing notifications
*
* All endpoints use handler factories that receive the NotificationService instance.
* Mounted at /api/notifications in the main server.
*/
import { Router } from 'express';
import type { NotificationService } from '../../services/notification-service.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createListHandler } from './routes/list.js';
import { createUnreadCountHandler } from './routes/unread-count.js';
import { createMarkReadHandler } from './routes/mark-read.js';
import { createDismissHandler } from './routes/dismiss.js';
/**
* Create notifications router with all endpoints
*
* Endpoints:
* - POST /list - List all notifications for a project
* - POST /unread-count - Get unread notification count
* - POST /mark-read - Mark notification(s) as read
* - POST /dismiss - Dismiss notification(s)
*
* @param notificationService - Instance of NotificationService
* @returns Express Router configured with all notification endpoints
*/
export function createNotificationsRoutes(notificationService: NotificationService): Router {
const router = Router();
// List notifications
router.post('/list', validatePathParams('projectPath'), createListHandler(notificationService));
// Get unread count
router.post(
'/unread-count',
validatePathParams('projectPath'),
createUnreadCountHandler(notificationService)
);
// Mark as read (single or all)
router.post(
'/mark-read',
validatePathParams('projectPath'),
createMarkReadHandler(notificationService)
);
// Dismiss (single or all)
router.post(
'/dismiss',
validatePathParams('projectPath'),
createDismissHandler(notificationService)
);
return router;
}

View File

@@ -0,0 +1,53 @@
/**
* POST /api/notifications/dismiss - Dismiss notification(s)
*
* Request body: { projectPath: string, notificationId?: string }
* - If notificationId provided: dismisses that notification
* - If notificationId not provided: dismisses all notifications
*
* Response: { success: true, dismissed: boolean | count: number }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/dismiss
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createDismissHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, notificationId } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// If notificationId provided, dismiss single notification
if (notificationId) {
const dismissed = await notificationService.dismissNotification(
projectPath,
notificationId
);
if (!dismissed) {
res.status(404).json({ success: false, error: 'Notification not found' });
return;
}
res.json({ success: true, dismissed: true });
return;
}
// Otherwise dismiss all
const count = await notificationService.dismissAll(projectPath);
res.json({ success: true, count });
} catch (error) {
logError(error, 'Dismiss failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
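The single-vs-all branching in the dismiss handler can be exercised against a hypothetical in-memory stand-in for `NotificationService` — the real service is async and persists per-project state; this sketch is synchronous for brevity and only mirrors the return contract:

```typescript
interface StoredNotification {
  id: string;
}

// Hypothetical in-memory stand-in for NotificationService.
class InMemoryNotifications {
  private byProject = new Map<string, StoredNotification[]>();

  seed(projectPath: string, notifications: StoredNotification[]): void {
    this.byProject.set(projectPath, [...notifications]);
  }

  // Mirrors dismissNotification: true if found and removed, false otherwise
  // (the handler maps false to a 404 response).
  dismissNotification(projectPath: string, id: string): boolean {
    const list = this.byProject.get(projectPath) ?? [];
    const index = list.findIndex((n) => n.id === id);
    if (index === -1) return false;
    list.splice(index, 1);
    return true;
  }

  // Mirrors dismissAll: returns how many notifications were dismissed.
  dismissAll(projectPath: string): number {
    const list = this.byProject.get(projectPath) ?? [];
    this.byProject.set(projectPath, []);
    return list.length;
  }
}
```

The mark-read endpoint follows the same shape: a single id returns the updated notification (or 404), while omitting the id operates on all of them and returns a count.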

View File

@@ -0,0 +1,39 @@
/**
* POST /api/notifications/list - List all notifications for a project
*
* Request body: { projectPath: string }
* Response: { success: true, notifications: Notification[] }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/list
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createListHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const notifications = await notificationService.getNotifications(projectPath);
res.json({
success: true,
notifications,
});
} catch (error) {
logError(error, 'List notifications failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,50 @@
/**
* POST /api/notifications/mark-read - Mark notification(s) as read
*
* Request body: { projectPath: string, notificationId?: string }
* - If notificationId provided: marks that notification as read
* - If notificationId not provided: marks all notifications as read
*
* Response: { success: true, count?: number, notification?: Notification }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/mark-read
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createMarkReadHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, notificationId } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// If notificationId provided, mark single notification
if (notificationId) {
const notification = await notificationService.markAsRead(projectPath, notificationId);
if (!notification) {
res.status(404).json({ success: false, error: 'Notification not found' });
return;
}
res.json({ success: true, notification });
return;
}
// Otherwise mark all as read
const count = await notificationService.markAllAsRead(projectPath);
res.json({ success: true, count });
} catch (error) {
logError(error, 'Mark read failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,39 @@
/**
* POST /api/notifications/unread-count - Get unread notification count
*
* Request body: { projectPath: string }
* Response: { success: true, count: number }
*/
import type { Request, Response } from 'express';
import type { NotificationService } from '../../../services/notification-service.js';
import { getErrorMessage, logError } from '../common.js';
/**
* Create handler for POST /api/notifications/unread-count
*
* @param notificationService - Instance of NotificationService
* @returns Express request handler
*/
export function createUnreadCountHandler(notificationService: NotificationService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath } = req.body;
if (!projectPath || typeof projectPath !== 'string') {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
const count = await notificationService.getUnreadCount(projectPath);
res.json({
success: true,
count,
});
} catch (error) {
logError(error, 'Get unread count failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -4,12 +4,58 @@
import type { Request, Response } from 'express';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
import { getBacklogPlanStatus, getRunningDetails } from '../../backlog-plan/common.js';
import { getAllRunningGenerations } from '../../app-spec/common.js';
import path from 'path';
import { getErrorMessage, logError } from '../common.js';
export function createIndexHandler(autoModeService: AutoModeService) {
return async (_req: Request, res: Response): Promise<void> => {
try {
const runningAgents = [...(await autoModeService.getRunningAgents())];
const backlogPlanStatus = getBacklogPlanStatus();
const backlogPlanDetails = getRunningDetails();
if (backlogPlanStatus.isRunning && backlogPlanDetails) {
runningAgents.push({
featureId: `backlog-plan:${backlogPlanDetails.projectPath}`,
projectPath: backlogPlanDetails.projectPath,
projectName: path.basename(backlogPlanDetails.projectPath),
isAutoMode: false,
title: 'Backlog plan',
description: backlogPlanDetails.prompt,
});
}
// Add spec/feature generation tasks
const specGenerations = getAllRunningGenerations();
for (const generation of specGenerations) {
let title: string;
let description: string;
switch (generation.type) {
case 'feature_generation':
title = 'Generating features from spec';
description = 'Creating features from the project specification';
break;
case 'sync':
title = 'Syncing spec with code';
description = 'Updating spec from codebase and completed features';
break;
default:
title = 'Regenerating spec';
description = 'Analyzing project and generating specification';
}
runningAgents.push({
featureId: `spec-generation:${generation.projectPath}`,
projectPath: generation.projectPath,
projectName: path.basename(generation.projectPath),
isAutoMode: false,
title,
description,
});
}
res.json({
success: true,
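The switch that labels spec/feature generation tasks maps cleanly to a small pure function; in this sketch, any type other than `'feature_generation'` or `'sync'` falls through to the regenerate wording, matching the handler's `default` case:

```typescript
interface TaskLabel {
  title: string;
  description: string;
}

// Mirrors the switch in the index handler: label a running generation
// by its type, defaulting to the "regenerating spec" wording.
function describeGeneration(type: string): TaskLabel {
  switch (type) {
    case 'feature_generation':
      return {
        title: 'Generating features from spec',
        description: 'Creating features from the project specification',
      };
    case 'sync':
      return {
        title: 'Syncing spec with code',
        description: 'Updating spec from codebase and completed features',
      };
    default:
      return {
        title: 'Regenerating spec',
        description: 'Analyzing project and generating specification',
      };
  }
}
```

Backlog-plan and spec-generation tasks are then appended to the running-agents list with synthetic ids like `spec-generation:<projectPath>` so the UI can render them alongside real agents.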

View File

@@ -12,6 +12,18 @@ import type { Request, Response } from 'express';
import type { SettingsService } from '../../../services/settings-service.js';
import type { GlobalSettings } from '../../../types/settings.js';
import { getErrorMessage, logError, logger } from '../common.js';
import { setLogLevel, LogLevel } from '@automaker/utils';
import { setRequestLoggingEnabled } from '../../../index.js';
/**
* Map server log level string to LogLevel enum
*/
const LOG_LEVEL_MAP: Record<string, LogLevel> = {
error: LogLevel.ERROR,
warn: LogLevel.WARN,
info: LogLevel.INFO,
debug: LogLevel.DEBUG,
};
/**
* Create handler factory for PUT /api/settings/global
@@ -46,6 +58,23 @@ export function createUpdateGlobalHandler(settingsService: SettingsService) {
const settings = await settingsService.updateGlobalSettings(updates);
// Apply server log level if it was updated
if ('serverLogLevel' in updates && updates.serverLogLevel) {
const level = LOG_LEVEL_MAP[updates.serverLogLevel];
if (level !== undefined) {
setLogLevel(level);
logger.info(`Server log level changed to: ${updates.serverLogLevel}`);
}
}
// Apply request logging setting if it was updated
if ('enableRequestLogging' in updates && typeof updates.enableRequestLogging === 'boolean') {
setRequestLoggingEnabled(updates.enableRequestLogging);
logger.info(
`HTTP request logging ${updates.enableRequestLogging ? 'enabled' : 'disabled'}`
);
}
res.json({
success: true,
settings,
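The level mapping is a plain record lookup, so an unknown or misspelled level resolves to `undefined` and the handler skips it (note the `level !== undefined` check above, which matters because `LogLevel.ERROR` may be a falsy `0`). A sketch with an illustrative enum standing in for the real `LogLevel` from `@automaker/utils`:

```typescript
// Illustrative enum; the real LogLevel comes from @automaker/utils.
enum LogLevel {
  ERROR,
  WARN,
  INFO,
  DEBUG,
}

const LOG_LEVEL_MAP: Record<string, LogLevel> = {
  error: LogLevel.ERROR,
  warn: LogLevel.WARN,
  info: LogLevel.INFO,
  debug: LogLevel.DEBUG,
};

// Unknown or misspelled level names yield undefined, so callers must
// compare against undefined rather than rely on truthiness.
function resolveLogLevel(name: string): LogLevel | undefined {
  return LOG_LEVEL_MAP[name];
}
```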

View File

@@ -3,6 +3,7 @@
*/
import { Router } from 'express';
import { createStatusHandler } from './routes/status.js';
import { createClaudeStatusHandler } from './routes/claude-status.js';
import { createInstallClaudeHandler } from './routes/install-claude.js';
import { createAuthClaudeHandler } from './routes/auth-claude.js';
@@ -12,6 +13,10 @@ import { createApiKeysHandler } from './routes/api-keys.js';
import { createPlatformHandler } from './routes/platform.js';
import { createVerifyClaudeAuthHandler } from './routes/verify-claude-auth.js';
import { createVerifyCodexAuthHandler } from './routes/verify-codex-auth.js';
import { createVerifyCodeRabbitAuthHandler } from './routes/verify-coderabbit-auth.js';
import { createCodeRabbitStatusHandler } from './routes/coderabbit-status.js';
import { createAuthCodeRabbitHandler } from './routes/auth-coderabbit.js';
import { createDeauthCodeRabbitHandler } from './routes/deauth-coderabbit.js';
import { createGhStatusHandler } from './routes/gh-status.js';
import { createCursorStatusHandler } from './routes/cursor-status.js';
import { createCodexStatusHandler } from './routes/codex-status.js';
@@ -44,6 +49,9 @@ import {
export function createSetupRoutes(): Router {
const router = Router();
// Unified CLI status endpoint
router.get('/status', createStatusHandler());
router.get('/claude-status', createClaudeStatusHandler());
router.post('/install-claude', createInstallClaudeHandler());
router.post('/auth-claude', createAuthClaudeHandler());
@@ -54,6 +62,7 @@ export function createSetupRoutes(): Router {
router.get('/platform', createPlatformHandler());
router.post('/verify-claude-auth', createVerifyClaudeAuthHandler());
router.post('/verify-codex-auth', createVerifyCodexAuthHandler());
router.post('/verify-coderabbit-auth', createVerifyCodeRabbitAuthHandler());
router.get('/gh-status', createGhStatusHandler());
// Cursor CLI routes
@@ -72,6 +81,11 @@ export function createSetupRoutes(): Router {
router.post('/auth-opencode', createAuthOpencodeHandler());
router.post('/deauth-opencode', createDeauthOpencodeHandler());
// CodeRabbit CLI routes
router.get('/coderabbit-status', createCodeRabbitStatusHandler());
router.post('/auth-coderabbit', createAuthCodeRabbitHandler());
router.post('/deauth-coderabbit', createDeauthCodeRabbitHandler());
// OpenCode Dynamic Model Discovery routes
router.get('/opencode/models', createGetOpencodeModelsHandler());
router.post('/opencode/models/refresh', createRefreshOpencodeModelsHandler());

View File

@@ -0,0 +1,80 @@
/**
* POST /auth-coderabbit endpoint - Authenticate CodeRabbit CLI via OAuth
*
* CodeRabbit CLI requires interactive authentication:
* 1. Run `cr auth login`
* 2. Browser opens with OAuth flow
* 3. After browser auth, CLI shows a token
* 4. User must press Enter to confirm
*
* Since step 4 requires interactive input, we can't fully automate this.
* Instead, we provide the command for the user to run manually.
*/
import type { Request, Response } from 'express';
import { execSync } from 'child_process';
import { logError, getErrorMessage } from '../common.js';
import * as fs from 'fs';
import * as path from 'path';
/**
* Find the CodeRabbit CLI command (coderabbit or cr)
*/
function findCodeRabbitCommand(): string | null {
const commands = ['coderabbit', 'cr'];
for (const command of commands) {
try {
const whichCommand = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${whichCommand} ${command}`, {
encoding: 'utf8',
timeout: 2000,
}).trim();
if (result) {
return result.split('\n')[0];
}
} catch {
// Command not found, try next
}
}
return null;
}
export function createAuthCodeRabbitHandler() {
return async (_req: Request, res: Response): Promise<void> => {
try {
// Remove the disconnected marker file to reconnect the app to the CLI
const markerPath = path.join(process.cwd(), '.automaker', '.coderabbit-disconnected');
if (fs.existsSync(markerPath)) {
fs.unlinkSync(markerPath);
}
// Find CodeRabbit CLI
const cliPath = findCodeRabbitCommand();
if (!cliPath) {
res.status(400).json({
success: false,
error: 'CodeRabbit CLI is not installed. Please install it first.',
});
return;
}
// CodeRabbit CLI requires interactive input (pressing Enter after OAuth)
// We can't automate this, so we return the command for the user to run
const command = cliPath.includes('coderabbit') ? 'coderabbit auth login' : 'cr auth login';
res.json({
success: true,
requiresManualAuth: true,
command,
message: `Please run "${command}" in your terminal to authenticate. After completing OAuth in your browser, press Enter in the terminal to confirm.`,
});
} catch (error) {
logError(error, 'Auth CodeRabbit failed');
res.status(500).json({
success: false,
error: getErrorMessage(error),
message: 'Failed to initiate CodeRabbit authentication',
});
}
};
}
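`findCodeRabbitCommand` is a first-match probe over candidate binary names. Factored as a pure helper — the `resolve` argument stands in for the platform-specific `which`/`where` call, which is an assumption made for testability:

```typescript
// Probe candidate binary names in order; the resolver abstracts the
// platform-specific `which`/`where` lookup and returns null when not found.
function firstResolvedCommand(
  candidates: string[],
  resolve: (name: string) => string | null
): string | null {
  for (const name of candidates) {
    const result = resolve(name);
    if (result) {
      // `which` can print multiple paths, one per line; keep the first.
      return result.split('\n')[0];
    }
  }
  return null;
}
```

With candidates `['coderabbit', 'cr']`, the longer name wins when both are installed, which is also why the handler chooses between `coderabbit auth login` and `cr auth login` based on the resolved path.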

View File

@@ -0,0 +1,240 @@
/**
* GET /coderabbit-status endpoint - Get CodeRabbit CLI installation and auth status
*/
import type { Request, Response } from 'express';
import { spawn, execSync } from 'child_process';
import { getErrorMessage, logError } from '../common.js';
import * as fs from 'fs';
import * as path from 'path';
const DISCONNECTED_MARKER_FILE = '.coderabbit-disconnected';
function isCodeRabbitDisconnectedFromApp(): boolean {
try {
const projectRoot = process.cwd();
const markerPath = path.join(projectRoot, '.automaker', DISCONNECTED_MARKER_FILE);
return fs.existsSync(markerPath);
} catch {
return false;
}
}
/**
* Find the CodeRabbit CLI command (coderabbit or cr)
*/
function findCodeRabbitCommand(): string | null {
const commands = ['coderabbit', 'cr'];
for (const command of commands) {
try {
const whichCommand = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${whichCommand} ${command}`, {
encoding: 'utf8',
timeout: 2000,
}).trim();
if (result) {
return result.split('\n')[0];
}
} catch {
// Command not found, try next
}
}
return null;
}
/**
* Get CodeRabbit CLI version
*/
async function getCodeRabbitVersion(command: string): Promise<string | null> {
return new Promise((resolve) => {
const child = spawn(command, ['--version'], {
stdio: 'pipe',
timeout: 5000,
});
let stdout = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.on('close', (code) => {
if (code === 0 && stdout) {
resolve(stdout.trim());
} else {
resolve(null);
}
});
child.on('error', () => {
resolve(null);
});
});
}
interface CodeRabbitAuthInfo {
authenticated: boolean;
method: 'oauth' | 'none';
username?: string;
email?: string;
organization?: string;
}
/**
* Check CodeRabbit CLI authentication status
* Parses output like:
* ```
* CodeRabbit CLI Status
* ✅ Authentication: Logged in
* User Information:
* 👤 Name: Kacper
* 📧 Email: kacperlachowiczwp.pl@wp.pl
* 🔧 Username: Shironex
* Organization Information:
* 🏢 Name: Anime-World-SPZOO
* ```
*/
async function getCodeRabbitAuthStatus(command: string): Promise<CodeRabbitAuthInfo> {
return new Promise((resolve) => {
const child = spawn(command, ['auth', 'status'], {
stdio: 'pipe',
timeout: 10000,
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
const output = stdout + stderr;
// Check for "Logged in" in Authentication line
const isAuthenticated =
code === 0 &&
(output.includes('Logged in') || output.includes('logged in')) &&
!output.toLowerCase().includes('not logged in');
if (isAuthenticated) {
// Parse the structured output format
// Username: look for "Username: <value>" line
const usernameMatch = output.match(/Username:\s*(\S+)/i);
// Email: look for "Email: <value>" line
const emailMatch = output.match(/Email:\s*(\S+@\S+)/i);
// Organization: look for "Name: <value>" under Organization Information
// The org name appears after "Organization Information:" section
const orgSection = output.split(/Organization Information:/i)[1];
const orgMatch = orgSection?.match(/Name:\s*(.+?)(?:\n|$)/i);
resolve({
authenticated: true,
method: 'oauth',
username: usernameMatch?.[1]?.trim(),
email: emailMatch?.[1]?.trim(),
organization: orgMatch?.[1]?.trim(),
});
} else {
resolve({
authenticated: false,
method: 'none',
});
}
});
child.on('error', () => {
resolve({
authenticated: false,
method: 'none',
});
});
});
}
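The regexes used by `getCodeRabbitAuthStatus` can be exercised in isolation against the sample CLI output quoted in its doc comment. This standalone sketch copies those regexes; note that splitting on `Organization Information:` is what keeps the `Name:` line in the user section from shadowing the organization name:

```typescript
// Standalone copy of the parsing step from getCodeRabbitAuthStatus.
function parseAuthStatusOutput(output: string): {
  username?: string;
  email?: string;
  organization?: string;
} {
  const username = output.match(/Username:\s*(\S+)/i)?.[1]?.trim();
  const email = output.match(/Email:\s*(\S+@\S+)/i)?.[1]?.trim();
  // Read the org name only from the section after "Organization Information:",
  // otherwise the user-section "Name:" line would match first.
  const orgSection = output.split(/Organization Information:/i)[1];
  const organization = orgSection?.match(/Name:\s*(.+?)(?:\n|$)/i)?.[1]?.trim();
  return { username, email, organization };
}
```

Running it on the doc-comment sample yields the username, email, and organization fields the handler stores.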
/**
* Creates handler for GET /api/setup/coderabbit-status
* Returns CodeRabbit CLI installation and authentication status
*/
export function createCodeRabbitStatusHandler() {
const installCommand = 'npm install -g coderabbit';
const loginCommand = 'coderabbit auth login';
return async (_req: Request, res: Response): Promise<void> => {
try {
// Check if user has manually disconnected from the app
if (isCodeRabbitDisconnectedFromApp()) {
res.json({
success: true,
installed: true,
version: null,
path: null,
auth: {
authenticated: false,
method: 'none',
},
recommendation: 'CodeRabbit CLI is disconnected. Click Sign In to reconnect.',
installCommand,
loginCommand,
});
return;
}
// Find CodeRabbit CLI
const cliPath = findCodeRabbitCommand();
if (!cliPath) {
res.json({
success: true,
installed: false,
version: null,
path: null,
auth: {
authenticated: false,
method: 'none',
},
recommendation: 'Install CodeRabbit CLI to enable AI-powered code reviews.',
installCommand,
loginCommand,
installCommands: {
macos: 'curl -fsSL https://coderabbit.ai/install | bash',
npm: installCommand,
},
});
return;
}
// Get version
const version = await getCodeRabbitVersion(cliPath);
// Get auth status
const authStatus = await getCodeRabbitAuthStatus(cliPath);
res.json({
success: true,
installed: true,
version,
path: cliPath,
auth: authStatus,
recommendation: authStatus.authenticated
? undefined
: 'Sign in to CodeRabbit to enable AI-powered code reviews.',
installCommand,
loginCommand,
installCommands: {
macos: 'curl -fsSL https://coderabbit.ai/install | bash',
npm: installCommand,
},
});
} catch (error) {
logError(error, 'Get CodeRabbit status failed');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}
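A client consuming this status endpoint might narrow the JSON payload with a small guard. This is a sketch, not part of the diff: the field names are taken from the handler above, but how a client fetches the payload is an assumption.

```typescript
// Hypothetical client-side shape for the coderabbit-status payload.
interface CodeRabbitStatusPayload {
  success: boolean;
  installed: boolean;
  version: string | null;
  path: string | null;
  auth: { authenticated: boolean; method: string };
  recommendation?: string;
}

// Narrowing guard: checks only the fields every branch of the handler sets.
function isCodeRabbitStatusPayload(value: unknown): value is CodeRabbitStatusPayload {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.success === 'boolean' &&
    typeof v.installed === 'boolean' &&
    typeof v.auth === 'object' &&
    v.auth !== null &&
    typeof (v.auth as Record<string, unknown>).authenticated === 'boolean'
  );
}
```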

View File

@@ -0,0 +1,113 @@
/**
* POST /deauth-coderabbit endpoint - Sign out from CodeRabbit CLI
*/
import type { Request, Response } from 'express';
import { spawn, execSync } from 'child_process';
import { logError, getErrorMessage } from '../common.js';
import * as fs from 'fs';
import * as path from 'path';
/**
* Find the CodeRabbit CLI command (coderabbit or cr)
*/
function findCodeRabbitCommand(): string | null {
const commands = ['coderabbit', 'cr'];
for (const command of commands) {
try {
const whichCommand = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${whichCommand} ${command}`, {
encoding: 'utf8',
timeout: 2000,
}).trim();
if (result) {
return result.split('\n')[0];
}
} catch {
// Command not found, try next
}
}
return null;
}
/**
 * Write the marker file that flags CodeRabbit as disconnected from the app.
 * Used as a fallback when the CLI logout fails, or when the CLI is absent.
 */
function writeDisconnectedMarker(): void {
  const automakerDir = path.join(process.cwd(), '.automaker');
  const markerPath = path.join(automakerDir, '.coderabbit-disconnected');
  if (!fs.existsSync(automakerDir)) {
    fs.mkdirSync(automakerDir, { recursive: true });
  }
  fs.writeFileSync(
    markerPath,
    JSON.stringify({
      disconnectedAt: new Date().toISOString(),
      message: 'CodeRabbit CLI is disconnected from the app',
    })
  );
}
export function createDeauthCodeRabbitHandler() {
  return async (_req: Request, res: Response): Promise<void> => {
    try {
      // Find CodeRabbit CLI
      const cliPath = findCodeRabbitCommand();
      if (cliPath) {
        // Try to run the CLI logout command
        const logoutResult = await new Promise<{ success: boolean; error?: string }>((resolve) => {
          const child = spawn(cliPath, ['auth', 'logout'], {
            stdio: 'pipe',
            timeout: 10000,
          });
          let stderr = '';
          child.stderr?.on('data', (data) => {
            stderr += data.toString();
          });
          child.on('close', (code) => {
            if (code === 0) {
              resolve({ success: true });
            } else {
              resolve({ success: false, error: stderr || 'Logout command failed' });
            }
          });
          child.on('error', (err) => {
            resolve({ success: false, error: err.message });
          });
        });
        if (!logoutResult.success) {
          // CLI logout failed, create marker file as fallback
          writeDisconnectedMarker();
        }
      } else {
        // CLI not installed, just create marker file
        writeDisconnectedMarker();
      }
res.json({
success: true,
message: 'Successfully signed out from CodeRabbit CLI',
});
} catch (error) {
logError(error, 'Deauth CodeRabbit failed');
res.status(500).json({
success: false,
error: getErrorMessage(error),
message: 'Failed to sign out from CodeRabbit CLI',
});
}
};
}

View File

@@ -0,0 +1,249 @@
/**
* GET /status endpoint - Get unified CLI availability status
*
* Returns the installation and authentication status of all supported CLIs
* in a single response. This is useful for quickly determining which
* providers are available without making multiple API calls.
*/
import type { Request, Response } from 'express';
import { getClaudeStatus } from '../get-claude-status.js';
import { getErrorMessage, logError } from '../common.js';
import { CursorProvider } from '../../../providers/cursor-provider.js';
import { CodexProvider } from '../../../providers/codex-provider.js';
import { OpencodeProvider } from '../../../providers/opencode-provider.js';
import * as fs from 'fs';
import * as path from 'path';
/**
* Check if a CLI has been manually disconnected from the app
*/
function isCliDisconnected(cliName: string): boolean {
try {
const projectRoot = process.cwd();
const markerPath = path.join(projectRoot, '.automaker', `.${cliName}-disconnected`);
return fs.existsSync(markerPath);
} catch {
return false;
}
}
/**
* CLI status response for a single provider
*/
interface CliStatusResponse {
installed: boolean;
version: string | null;
path: string | null;
auth: {
authenticated: boolean;
method: string;
};
disconnected: boolean;
}
/**
* Unified status response for all CLIs
*/
interface UnifiedStatusResponse {
success: boolean;
timestamp: string;
clis: {
claude: CliStatusResponse | null;
cursor: CliStatusResponse | null;
codex: CliStatusResponse | null;
opencode: CliStatusResponse | null;
};
availableProviders: string[];
hasAnyAuthenticated: boolean;
}
/**
* Get detailed Claude CLI status
*/
async function getClaudeCliStatus(): Promise<CliStatusResponse> {
const disconnected = isCliDisconnected('claude');
try {
const status = await getClaudeStatus();
return {
installed: status.installed,
version: status.version || null,
path: status.path || null,
auth: {
authenticated: disconnected ? false : status.auth.authenticated,
method: disconnected ? 'none' : status.auth.method,
},
disconnected,
};
} catch {
return {
installed: false,
version: null,
path: null,
auth: { authenticated: false, method: 'none' },
disconnected,
};
}
}
/**
* Get detailed Cursor CLI status
*/
async function getCursorCliStatus(): Promise<CliStatusResponse> {
const disconnected = isCliDisconnected('cursor');
try {
const provider = new CursorProvider();
const [installed, version, auth] = await Promise.all([
provider.isInstalled(),
provider.getVersion(),
provider.checkAuth(),
]);
const cliPath = installed ? provider.getCliPath() : null;
return {
installed,
version: version || null,
path: cliPath,
auth: {
authenticated: disconnected ? false : auth.authenticated,
method: disconnected ? 'none' : auth.method,
},
disconnected,
};
} catch {
return {
installed: false,
version: null,
path: null,
auth: { authenticated: false, method: 'none' },
disconnected,
};
}
}
/**
* Get detailed Codex CLI status
*/
async function getCodexCliStatus(): Promise<CliStatusResponse> {
const disconnected = isCliDisconnected('codex');
try {
const provider = new CodexProvider();
const status = await provider.detectInstallation();
let authMethod = 'none';
if (!disconnected && status.authenticated) {
authMethod = status.hasApiKey ? 'api_key_env' : 'cli_authenticated';
}
return {
installed: status.installed,
version: status.version || null,
path: status.path || null,
auth: {
authenticated: disconnected ? false : status.authenticated || false,
method: authMethod,
},
disconnected,
};
} catch {
return {
installed: false,
version: null,
path: null,
auth: { authenticated: false, method: 'none' },
disconnected,
};
}
}
/**
* Get detailed OpenCode CLI status
*/
async function getOpencodeCliStatus(): Promise<CliStatusResponse> {
try {
const provider = new OpencodeProvider();
const status = await provider.detectInstallation();
let authMethod = 'none';
if (status.authenticated) {
authMethod = status.hasApiKey ? 'api_key_env' : 'cli_authenticated';
}
return {
installed: status.installed,
version: status.version || null,
path: status.path || null,
auth: {
authenticated: status.authenticated || false,
method: authMethod,
},
disconnected: false, // OpenCode doesn't have disconnect feature
};
} catch {
return {
installed: false,
version: null,
path: null,
auth: { authenticated: false, method: 'none' },
disconnected: false,
};
}
}
/**
* Creates handler for GET /api/setup/status
* Returns unified CLI availability status for all providers
*/
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
try {
// Fetch all CLI statuses in parallel for performance
const [claude, cursor, codex, opencode] = await Promise.all([
getClaudeCliStatus(),
getCursorCliStatus(),
getCodexCliStatus(),
getOpencodeCliStatus(),
]);
// Determine which providers are available (installed and authenticated)
const availableProviders: string[] = [];
if (claude.installed && claude.auth.authenticated) {
availableProviders.push('claude');
}
if (cursor.installed && cursor.auth.authenticated) {
availableProviders.push('cursor');
}
if (codex.installed && codex.auth.authenticated) {
availableProviders.push('codex');
}
if (opencode.installed && opencode.auth.authenticated) {
availableProviders.push('opencode');
}
const response: UnifiedStatusResponse = {
success: true,
timestamp: new Date().toISOString(),
clis: {
claude,
cursor,
codex,
opencode,
},
availableProviders,
hasAnyAuthenticated: availableProviders.length > 0,
};
res.json(response);
} catch (error) {
logError(error, 'Get unified CLI status failed');
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
};
}
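The availability rule used by `createStatusHandler` above reduces to: a provider is "available" only when it is both installed and authenticated, and `hasAnyAuthenticated` is just "at least one available". A minimal standalone sketch of that rule:

```typescript
// Minimal summary shape; only the fields the availability rule reads.
interface CliSummary {
  installed: boolean;
  auth: { authenticated: boolean };
}

// A provider is available iff it is installed AND authenticated.
// Object.entries preserves insertion order for string keys, so the
// result keeps the order the CLIs were listed in.
function deriveAvailableProviders(clis: Record<string, CliSummary>): string[] {
  return Object.entries(clis)
    .filter(([, cli]) => cli.installed && cli.auth.authenticated)
    .map(([name]) => name);
}
```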

View File

@@ -0,0 +1,163 @@
/**
* POST /verify-coderabbit-auth endpoint - Verify CodeRabbit authentication
* Validates API key format and optionally tests the connection
*/
import type { Request, Response } from 'express';
import { spawn } from 'child_process';
import { createLogger } from '@automaker/utils';
import { AuthRateLimiter } from '../../../lib/auth-utils.js';
const logger = createLogger('Setup');
const rateLimiter = new AuthRateLimiter();
/**
* Test CodeRabbit CLI authentication by running a simple command
*/
async function testCodeRabbitCli(
apiKey?: string
): Promise<{ authenticated: boolean; error?: string }> {
return new Promise((resolve) => {
// Set up environment with API key if provided
const env = { ...process.env };
if (apiKey) {
env.CODERABBIT_API_KEY = apiKey;
}
// Try to run coderabbit auth status to verify auth
const child = spawn('coderabbit', ['auth', 'status'], {
stdio: ['pipe', 'pipe', 'pipe'],
env,
timeout: 10000,
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
    child.on('close', (code) => {
      if (code === 0) {
        // Check output for authentication status. Test the negative phrases
        // first: "not authenticated" also contains "authenticated", so the
        // positive check must not run before the negative one.
        const output = stdout.toLowerCase() + stderr.toLowerCase();
        if (output.includes('not authenticated') || output.includes('not logged in')) {
          resolve({ authenticated: false, error: 'CodeRabbit CLI is not authenticated.' });
        } else if (
          output.includes('authenticated') ||
          output.includes('logged in') ||
          output.includes('valid')
        ) {
          resolve({ authenticated: true });
        } else {
          // Command succeeded with no recognizable status line; assume authenticated
          resolve({ authenticated: true });
        }
      } else {
        // Command failed
        const errorMsg = stderr || stdout || 'CodeRabbit CLI authentication check failed.';
        resolve({ authenticated: false, error: errorMsg.trim() });
      }
});
child.on('error', (err) => {
// CodeRabbit CLI not installed or other error
resolve({ authenticated: false, error: `CodeRabbit CLI error: ${err.message}` });
});
});
}
/**
* Validate CodeRabbit API key format
* CodeRabbit API keys typically start with 'cr-'
*/
function validateCodeRabbitKey(apiKey: string): { isValid: boolean; error?: string } {
if (!apiKey || apiKey.trim().length === 0) {
return { isValid: false, error: 'API key cannot be empty.' };
}
// CodeRabbit API keys typically start with 'cr-'
if (!apiKey.startsWith('cr-')) {
return {
isValid: false,
error: 'Invalid CodeRabbit API key format. Keys should start with "cr-".',
};
}
if (apiKey.length < 10) {
return { isValid: false, error: 'API key is too short.' };
}
return { isValid: true };
}
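The format rules above (non-empty, `cr-` prefix, minimum length) can be restated as a self-contained sketch so the cases are easy to exercise in isolation. The `cr-` prefix is the handler's own stated assumption about CodeRabbit key format, not something verified here:

```typescript
// Self-contained restatement of the validateCodeRabbitKey rules above.
function checkKeyFormat(apiKey: string): { isValid: boolean; error?: string } {
  if (!apiKey || apiKey.trim().length === 0) {
    return { isValid: false, error: 'API key cannot be empty.' };
  }
  // Assumed prefix convention from the handler's doc comment.
  if (!apiKey.startsWith('cr-')) {
    return {
      isValid: false,
      error: 'Invalid CodeRabbit API key format. Keys should start with "cr-".',
    };
  }
  if (apiKey.length < 10) {
    return { isValid: false, error: 'API key is too short.' };
  }
  return { isValid: true };
}
```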
export function createVerifyCodeRabbitAuthHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { authMethod, apiKey } = req.body as {
authMethod?: 'cli' | 'api_key';
apiKey?: string;
};
// Rate limiting to prevent abuse
const clientIp = req.ip || req.socket.remoteAddress || 'unknown';
if (!rateLimiter.canAttempt(clientIp)) {
const resetTime = rateLimiter.getResetTime(clientIp);
res.status(429).json({
success: false,
authenticated: false,
error: 'Too many authentication attempts. Please try again later.',
resetTime,
});
return;
}
logger.info(
`[Setup] Verifying CodeRabbit authentication using method: ${authMethod || 'auto'}${apiKey ? ' (with provided key)' : ''}`
);
// For API key verification
if (authMethod === 'api_key' && apiKey) {
// Validate key format
const validation = validateCodeRabbitKey(apiKey);
if (!validation.isValid) {
res.json({
success: true,
authenticated: false,
error: validation.error,
});
return;
}
// Test the CLI with the provided API key
const result = await testCodeRabbitCli(apiKey);
res.json({
success: true,
authenticated: result.authenticated,
error: result.error,
});
return;
}
// For CLI auth or auto detection
const result = await testCodeRabbitCli();
res.json({
success: true,
authenticated: result.authenticated,
error: result.error,
});
} catch (error) {
logger.error('[Setup] Verify CodeRabbit auth endpoint error:', error);
res.status(500).json({
success: false,
authenticated: false,
error: error instanceof Error ? error.message : 'Verification failed',
});
}
};
}

View File

@@ -15,7 +15,7 @@ import { FeatureLoader } from '../../services/feature-loader.js';
 import { getAppSpecPath } from '@automaker/platform';
 import * as secureFs from '../../lib/secure-fs.js';
 import type { SettingsService } from '../../services/settings-service.js';
-import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
+import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
 const logger = createLogger('Suggestions');
@@ -137,11 +137,15 @@ export async function generateSuggestions(
   modelOverride?: string,
   thinkingLevelOverride?: ThinkingLevel
 ): Promise<void> {
+  // Get customized prompts from settings
+  const prompts = await getPromptCustomization(settingsService, '[Suggestions]');
+  // Map suggestion types to their prompts
   const typePrompts: Record<string, string> = {
-    features: 'Analyze this project and suggest new features that would add value.',
-    refactoring: 'Analyze this project and identify refactoring opportunities.',
-    security: 'Analyze this project for security vulnerabilities and suggest fixes.',
-    performance: 'Analyze this project for performance issues and suggest optimizations.',
+    features: prompts.suggestions.featuresPrompt,
+    refactoring: prompts.suggestions.refactoringPrompt,
+    security: prompts.suggestions.securityPrompt,
+    performance: prompts.suggestions.performancePrompt,
   };
   // Load existing context to avoid duplicates
@@ -151,15 +155,7 @@ export async function generateSuggestions(
 ${existingContext}
 ${existingContext ? '\nIMPORTANT: Do NOT suggest features that are already implemented or already in the backlog above. Focus on NEW ideas that complement what already exists.\n' : ''}
-Look at the codebase and provide 3-5 concrete suggestions.
-For each suggestion, provide:
-1. A category (e.g., "User Experience", "Security", "Performance")
-2. A clear description of what to implement
-3. Priority (1=high, 2=medium, 3=low)
-4. Brief reasoning for why this would help
-The response will be automatically formatted as structured JSON.`;
+${prompts.suggestions.baseTemplate}`;
   // Don't send initial message - let the agent output speak for itself
   // The first agent message will be captured as an info entry

View File

@@ -13,6 +13,7 @@ import {
 } from '../common.js';
 import { updateWorktreePRInfo } from '../../../lib/worktree-metadata.js';
 import { createLogger } from '@automaker/utils';
+import { validatePRState } from '@automaker/types';
 const logger = createLogger('CreatePR');
@@ -268,11 +269,12 @@ export function createCreatePRHandler() {
         prAlreadyExisted = true;
         // Store the existing PR info in metadata
+        // GitHub CLI returns uppercase states: OPEN, MERGED, CLOSED
         await updateWorktreePRInfo(effectiveProjectPath, branchName, {
           number: existingPr.number,
           url: existingPr.url,
           title: existingPr.title || title,
-          state: existingPr.state || 'open',
+          state: validatePRState(existingPr.state),
           createdAt: new Date().toISOString(),
         });
         logger.debug(
@@ -319,11 +321,12 @@ export function createCreatePRHandler() {
       if (prNumber) {
         try {
+          // Note: GitHub doesn't have a 'DRAFT' state - drafts still show as 'OPEN'
           await updateWorktreePRInfo(effectiveProjectPath, branchName, {
             number: prNumber,
             url: prUrl,
             title,
-            state: draft ? 'draft' : 'open',
+            state: 'OPEN',
             createdAt: new Date().toISOString(),
           });
           logger.debug(`Stored PR info for branch ${branchName}: PR #${prNumber}`);
@@ -352,11 +355,12 @@ export function createCreatePRHandler() {
         prNumber = existingPr.number;
         prAlreadyExisted = true;
+        // GitHub CLI returns uppercase states: OPEN, MERGED, CLOSED
        await updateWorktreePRInfo(effectiveProjectPath, branchName, {
           number: existingPr.number,
           url: existingPr.url,
           title: existingPr.title || title,
-          state: existingPr.state || 'open',
+          state: validatePRState(existingPr.state),
           createdAt: new Date().toISOString(),
         });
         logger.debug(`Fetched and stored existing PR: #${existingPr.number}`);
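`validatePRState` comes from `@automaker/types` and its implementation is not part of this diff. A hypothetical normalizer that would be consistent with how it is used here (uppercase canonical states, with unknown or missing input coerced to `'OPEN'`) might look like this — a sketch under those assumptions, not the package's actual code:

```typescript
// Canonical PR states as used throughout the diff.
type PRState = 'OPEN' | 'MERGED' | 'CLOSED';

// Hypothetical normalizer: uppercase the input and fall back to 'OPEN'
// for anything unrecognized (including GitHub draft PRs, which the
// handler above notes still report as 'OPEN').
function normalizePRState(state: string | undefined | null): PRState {
  const upper = (state ?? '').toUpperCase();
  return upper === 'MERGED' || upper === 'CLOSED' ? upper : 'OPEN';
}
```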

View File

@@ -34,6 +34,7 @@ export function createGetDevServerLogsHandler() {
       result: {
         worktreePath: result.result.worktreePath,
         port: result.result.port,
+        url: result.result.url,
         logs: result.result.logs,
         startedAt: result.result.startedAt,
       },

View File

@@ -14,12 +14,34 @@ import path from 'path';
 import * as secureFs from '../../../lib/secure-fs.js';
 import { isGitRepo } from '@automaker/git-utils';
 import { getErrorMessage, logError, normalizePath, execEnv, isGhCliAvailable } from '../common.js';
-import { readAllWorktreeMetadata, type WorktreePRInfo } from '../../../lib/worktree-metadata.js';
+import {
+  readAllWorktreeMetadata,
+  updateWorktreePRInfo,
+  type WorktreePRInfo,
+} from '../../../lib/worktree-metadata.js';
 import { createLogger } from '@automaker/utils';
+import { validatePRState } from '@automaker/types';
+import {
+  checkGitHubRemote,
+  type GitHubRemoteStatus,
+} from '../../github/routes/check-github-remote.js';
 const execAsync = promisify(exec);
 const logger = createLogger('Worktree');
+/**
+ * Cache for GitHub remote status per project path.
+ * This prevents repeated "no git remotes found" warnings when polling
+ * projects that don't have a GitHub remote configured.
+ */
+interface GitHubRemoteCacheEntry {
+  status: GitHubRemoteStatus;
+  checkedAt: number;
+}
+const githubRemoteCache = new Map<string, GitHubRemoteCacheEntry>();
+const GITHUB_REMOTE_CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes
 interface WorktreeInfo {
   path: string;
   branch: string;
@@ -122,22 +144,65 @@ async function scanWorktreesDirectory(
 }
 /**
- * Fetch open PRs from GitHub and create a map of branch name to PR info.
- * This allows detecting PRs that were created outside the app.
+ * Get cached GitHub remote status for a project, or check and cache it.
+ * Returns null if gh CLI is not available.
+ */
+async function getGitHubRemoteStatus(projectPath: string): Promise<GitHubRemoteStatus | null> {
+  // Check if gh CLI is available first
+  const ghAvailable = await isGhCliAvailable();
+  if (!ghAvailable) {
+    return null;
+  }
+  const now = Date.now();
+  const cached = githubRemoteCache.get(projectPath);
+  // Return cached result if still valid
+  if (cached && now - cached.checkedAt < GITHUB_REMOTE_CACHE_TTL_MS) {
+    return cached.status;
+  }
+  // Check GitHub remote and cache the result
+  const status = await checkGitHubRemote(projectPath);
+  githubRemoteCache.set(projectPath, {
+    status,
+    checkedAt: Date.now(),
+  });
+  return status;
+}
+/**
+ * Fetch all PRs from GitHub and create a map of branch name to PR info.
+ * Uses --state all to include merged/closed PRs, allowing detection of
+ * state changes (e.g., when a PR is merged on GitHub).
+ *
+ * This also allows detecting PRs that were created outside the app.
+ *
+ * Uses cached GitHub remote status to avoid repeated warnings when the
+ * project doesn't have a GitHub remote configured.
  */
 async function fetchGitHubPRs(projectPath: string): Promise<Map<string, WorktreePRInfo>> {
   const prMap = new Map<string, WorktreePRInfo>();
   try {
-    // Check if gh CLI is available
-    const ghAvailable = await isGhCliAvailable();
-    if (!ghAvailable) {
+    // Check GitHub remote status (uses cache to avoid repeated warnings)
+    const remoteStatus = await getGitHubRemoteStatus(projectPath);
+    // If gh CLI not available or no GitHub remote, return empty silently
+    if (!remoteStatus || !remoteStatus.hasGitHubRemote) {
       return prMap;
     }
-    // Fetch open PRs from GitHub
+    // Use -R flag with owner/repo for more reliable PR fetching
+    const repoFlag =
+      remoteStatus.owner && remoteStatus.repo
+        ? `-R ${remoteStatus.owner}/${remoteStatus.repo}`
+        : '';
+    // Fetch all PRs from GitHub (including merged/closed to detect state changes)
     const { stdout } = await execAsync(
-      'gh pr list --state open --json number,title,url,state,headRefName,createdAt --limit 1000',
+      `gh pr list ${repoFlag} --state all --json number,title,url,state,headRefName,createdAt --limit 1000`,
       { cwd: projectPath, env: execEnv, timeout: 15000 }
     );
@@ -155,7 +220,8 @@ async function fetchGitHubPRs(projectPath: string): Promise<Map<string, Worktree
         number: pr.number,
         url: pr.url,
         title: pr.title,
-        state: pr.state,
+        // GitHub CLI returns state as uppercase: OPEN, MERGED, CLOSED
+        state: validatePRState(pr.state),
         createdAt: pr.createdAt,
       });
     }
@@ -170,9 +236,10 @@ async function fetchGitHubPRs(projectPath: string): Promise<Map<string, Worktree
 export function createListHandler() {
   return async (req: Request, res: Response): Promise<void> => {
     try {
-      const { projectPath, includeDetails } = req.body as {
+      const { projectPath, includeDetails, forceRefreshGitHub } = req.body as {
         projectPath: string;
         includeDetails?: boolean;
+        forceRefreshGitHub?: boolean;
       };
       if (!projectPath) {
@@ -180,6 +247,12 @@ export function createListHandler() {
         return;
       }
+      // Clear GitHub remote cache if force refresh requested
+      // This allows users to re-check for GitHub remote after adding one
+      if (forceRefreshGitHub) {
+        githubRemoteCache.delete(projectPath);
+      }
       if (!(await isGitRepo(projectPath))) {
         res.json({ success: true, worktrees: [] });
         return;
@@ -287,23 +360,36 @@ export function createListHandler() {
         }
       }
-      // Add PR info from metadata or GitHub for each worktree
-      // Only fetch GitHub PRs if includeDetails is requested (performance optimization)
+      // Assign PR info to each worktree, preferring fresh GitHub data over cached metadata.
+      // Only fetch GitHub PRs if includeDetails is requested (performance optimization).
+      // Uses --state all to detect merged/closed PRs, limited to 1000 recent PRs.
       const githubPRs = includeDetails
         ? await fetchGitHubPRs(projectPath)
        : new Map<string, WorktreePRInfo>();
       for (const worktree of worktrees) {
         const metadata = allMetadata.get(worktree.branch);
-        if (metadata?.pr) {
-          // Use stored metadata (more complete info)
-          worktree.pr = metadata.pr;
-        } else if (includeDetails) {
-          // Fall back to GitHub PR detection only when includeDetails is requested
-          const githubPR = githubPRs.get(worktree.branch);
-          if (githubPR) {
-            worktree.pr = githubPR;
+        const githubPR = githubPRs.get(worktree.branch);
+        if (githubPR) {
+          // Prefer fresh GitHub data (it has the current state)
+          worktree.pr = githubPR;
+          // Sync metadata with GitHub state when:
+          // 1. No metadata exists for this PR (PR created externally)
+          // 2. State has changed (e.g., merged/closed on GitHub)
+          const needsSync = !metadata?.pr || metadata.pr.state !== githubPR.state;
+          if (needsSync) {
+            // Fire and forget - don't block the response
+            updateWorktreePRInfo(projectPath, worktree.branch, githubPR).catch((err) => {
+              logger.warn(
+                `Failed to update PR info for ${worktree.branch}: ${getErrorMessage(err)}`
+              );
+            });
           }
+        } else if (metadata?.pr) {
+          // Fall back to stored metadata (for PRs not in recent GitHub response)
+          worktree.pr = metadata.pr;
         }
       }
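The 5-minute remote-status cache in this file (`githubRemoteCache` plus `GITHUB_REMOTE_CACHE_TTL_MS`) can be sketched generically. This is a standalone illustration with an injectable clock so expiry can be tested without waiting — not the project's API:

```typescript
// Generic TTL cache mirroring the githubRemoteCache pattern above.
class TtlCache<V> {
  private entries = new Map<string, { value: V; storedAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now // injectable clock for testing
  ) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.storedAt >= this.ttlMs) {
      this.entries.delete(key); // expired
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, storedAt: this.now() });
  }

  // Explicit invalidation, as used by the forceRefreshGitHub flag above.
  delete(key: string): void {
    this.entries.delete(key);
  }
}
```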

View File

@@ -29,6 +29,10 @@ import {
   appendLearning,
   recordMemoryUsage,
   createLogger,
+  atomicWriteJson,
+  readJsonWithRecovery,
+  logRecoveryWarning,
+  DEFAULT_BACKUP_COUNT,
 } from '@automaker/utils';
 const logger = createLogger('AutoMode');
@@ -60,6 +64,7 @@ import {
   getMCPServersFromSettings,
   getPromptCustomization,
 } from '../lib/settings-helpers.js';
+import { getNotificationService } from './notification-service.js';
 const execAsync = promisify(exec);
@@ -386,6 +391,7 @@ export class AutoModeService {
       this.emitAutoModeEvent('auto_mode_error', {
         error: errorInfo.message,
         errorType: errorInfo.type,
+        projectPath,
       });
     });
   }
@@ -579,6 +585,9 @@ export class AutoModeService {
       '[AutoMode]'
     );
+    // Get customized prompts from settings
+    const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
     // Build the prompt - use continuation prompt if provided (for recovery after plan approval)
     let prompt: string;
     // Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) and memory files
@@ -604,7 +613,7 @@
       logger.info(`Using continuation prompt for feature ${featureId}`);
     } else {
       // Normal flow: build prompt with planning phase
-      const featurePrompt = this.buildFeaturePrompt(feature);
+      const featurePrompt = this.buildFeaturePrompt(feature, prompts.taskExecution);
       const planningPrefix = await this.getPlanningPromptPrefix(feature);
       prompt = planningPrefix + featurePrompt;
@@ -783,6 +792,9 @@
   ): Promise<void> {
     logger.info(`Executing ${steps.length} pipeline step(s) for feature ${featureId}`);
+    // Get customized prompts from settings
+    const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
     // Load context files once with feature context for smart memory selection
     const contextResult = await loadContextFiles({
       projectPath,
@@ -827,7 +839,12 @@
     });
     // Build prompt for this pipeline step
-    const prompt = this.buildPipelineStepPrompt(step, feature, previousContext);
+    const prompt = this.buildPipelineStepPrompt(
+      step,
+      feature,
+      previousContext,
+      prompts.taskExecution
+    );
     // Get model from feature
     const model = resolveModelString(feature.model, DEFAULT_MODELS.claude);
@@ -882,14 +899,18 @@ export class AutoModeService {
private buildPipelineStepPrompt( private buildPipelineStepPrompt(
step: PipelineStep, step: PipelineStep,
feature: Feature, feature: Feature,
previousContext: string previousContext: string,
taskExecutionPrompts: {
implementationInstructions: string;
playwrightVerificationInstructions: string;
}
): string { ): string {
let prompt = `## Pipeline Step: ${step.name} let prompt = `## Pipeline Step: ${step.name}
This is an automated pipeline step following the initial feature implementation. This is an automated pipeline step following the initial feature implementation.
### Feature Context ### Feature Context
${this.buildFeaturePrompt(feature)} ${this.buildFeaturePrompt(feature, taskExecutionPrompts)}
`; `;
@@ -1279,6 +1300,9 @@ Complete the pipeline step instructions above. Review the previous work and appl
'[AutoMode]' '[AutoMode]'
); );
// Get customized prompts from settings
const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) - passed as system prompt // Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) - passed as system prompt
const contextResult = await loadContextFiles({ const contextResult = await loadContextFiles({
projectPath, projectPath,
@@ -1296,7 +1320,7 @@ Complete the pipeline step instructions above. Review the previous work and appl
// Build complete prompt with feature info, previous context, and follow-up instructions // Build complete prompt with feature info, previous context, and follow-up instructions
let fullPrompt = `## Follow-up on Feature Implementation let fullPrompt = `## Follow-up on Feature Implementation
${feature ? this.buildFeaturePrompt(feature) : `**Feature ID:** ${featureId}`} ${feature ? this.buildFeaturePrompt(feature, prompts.taskExecution) : `**Feature ID:** ${featureId}`}
`; `;
if (previousContext) { if (previousContext) {
@@ -1396,13 +1420,13 @@ Address the follow-up instructions above. Review the previous work and make the
allImagePaths.push(...allPaths); allImagePaths.push(...allPaths);
} }
// Save updated feature.json with new images // Save updated feature.json with new images (atomic write with backup)
if (copiedImagePaths.length > 0 && feature) { if (copiedImagePaths.length > 0 && feature) {
const featureDirForSave = getFeatureDir(projectPath, featureId); const featureDirForSave = getFeatureDir(projectPath, featureId);
const featurePath = path.join(featureDirForSave, 'feature.json'); const featurePath = path.join(featureDirForSave, 'feature.json');
try { try {
await secureFs.writeFile(featurePath, JSON.stringify(feature, null, 2)); await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
} catch (error) { } catch (error) {
logger.error(`Failed to save feature.json:`, error); logger.error(`Failed to save feature.json:`, error);
} }
@@ -1529,6 +1553,7 @@ Address the follow-up instructions above. Review the previous work and make the
message: allPassed message: allPassed
? 'All verification checks passed' ? 'All verification checks passed'
: `Verification failed: ${results.find((r) => !r.passed)?.check || 'Unknown'}`, : `Verification failed: ${results.find((r) => !r.passed)?.check || 'Unknown'}`,
projectPath,
}); });
return allPassed; return allPassed;
@@ -1602,6 +1627,7 @@ Address the follow-up instructions above. Review the previous work and make the
featureId, featureId,
passes: true, passes: true,
message: `Changes committed: ${hash.trim().substring(0, 8)}`, message: `Changes committed: ${hash.trim().substring(0, 8)}`,
projectPath,
}); });
return hash.trim(); return hash.trim();
@@ -1888,13 +1914,17 @@ Format your response as a structured markdown document.`;
content: editedPlan || feature.planSpec.content, content: editedPlan || feature.planSpec.content,
}); });
// Build continuation prompt and re-run the feature // Get customized prompts from settings
const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
// Build continuation prompt using centralized template
const planContent = editedPlan || feature.planSpec.content || ''; const planContent = editedPlan || feature.planSpec.content || '';
let continuationPrompt = `The plan/specification has been approved. `; let continuationPrompt = prompts.taskExecution.continuationAfterApprovalTemplate;
if (feedback) { continuationPrompt = continuationPrompt.replace(
continuationPrompt += `\n\nUser feedback: ${feedback}\n\n`; /\{\{userFeedback\}\}/g,
} feedback || ''
continuationPrompt += `Now proceed with the implementation as specified in the plan:\n\n${planContent}\n\nImplement the feature now.`; );
continuationPrompt = continuationPrompt.replace(/\{\{approvedPlan\}\}/g, planContent);
logger.info(`Starting recovery execution for feature ${featureId}`); logger.info(`Starting recovery execution for feature ${featureId}`);
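The `{{userFeedback}}`/`{{approvedPlan}}` substitutions above can be factored into a small helper. This is an illustrative sketch only — `fillTemplate` is a hypothetical name; the service inlines the individual `.replace()` calls:

```typescript
// Illustrative helper mirroring the {{placeholder}} substitution above.
// Keys are assumed to be plain identifiers (e.g. userFeedback), so no
// regex escaping is needed for this sketch.
function fillTemplate(template: string, vars: Record<string, string>): string {
  let result = template;
  for (const [key, value] of Object.entries(vars)) {
    // Global flag so every occurrence of {{key}} is replaced, not just the first
    result = result.replace(new RegExp(`\\{\\{${key}\\}\\}`, 'g'), value);
  }
  return result;
}

const continuation = fillTemplate(
  'The plan has been approved.\n{{userFeedback}}\nPlan:\n{{approvedPlan}}',
  { userFeedback: '', approvedPlan: 'Add a Spinner component' }
);
```

Replacing a placeholder with an empty string (as with absent feedback) simply removes it, which is why the real code passes `feedback || ''`.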
@@ -2066,8 +2096,20 @@ Format your response as a structured markdown document.`;
const featurePath = path.join(featureDir, 'feature.json'); const featurePath = path.join(featureDir, 'feature.json');
try { try {
const data = (await secureFs.readFile(featurePath, 'utf-8')) as string; // Use recovery-enabled read for corrupted file handling
const feature = JSON.parse(data); const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
maxBackups: DEFAULT_BACKUP_COUNT,
autoRestore: true,
});
logRecoveryWarning(result, `Feature ${featureId}`, logger);
const feature = result.data;
if (!feature) {
logger.warn(`Feature ${featureId} not found or could not be recovered`);
return;
}
feature.status = status; feature.status = status;
feature.updatedAt = new Date().toISOString(); feature.updatedAt = new Date().toISOString();
// Set justFinishedAt timestamp when moving to waiting_approval (agent just completed) // Set justFinishedAt timestamp when moving to waiting_approval (agent just completed)
@@ -2078,9 +2120,41 @@ Format your response as a structured markdown document.`;
// Clear the timestamp when moving to other statuses // Clear the timestamp when moving to other statuses
feature.justFinishedAt = undefined; feature.justFinishedAt = undefined;
} }
await secureFs.writeFile(featurePath, JSON.stringify(feature, null, 2));
} catch { // Use atomic write with backup support
// Feature file may not exist await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
// Create notifications for important status changes
const notificationService = getNotificationService();
if (status === 'waiting_approval') {
await notificationService.createNotification({
type: 'feature_waiting_approval',
title: 'Feature Ready for Review',
message: `"${feature.name || featureId}" is ready for your review and approval.`,
featureId,
projectPath,
});
} else if (status === 'verified') {
await notificationService.createNotification({
type: 'feature_verified',
title: 'Feature Verified',
message: `"${feature.name || featureId}" has been verified and is complete.`,
featureId,
projectPath,
});
}
// Sync completed/verified features to app_spec.txt
if (status === 'verified' || status === 'completed') {
try {
await this.featureLoader.syncFeatureToAppSpec(projectPath, feature);
} catch (syncError) {
// Log but don't fail the status update if sync fails
logger.warn(`Failed to sync feature ${featureId} to app_spec.txt:`, syncError);
}
}
} catch (error) {
logger.error(`Failed to update feature status for ${featureId}:`, error);
} }
} }
@@ -2092,11 +2166,24 @@ Format your response as a structured markdown document.`;
featureId: string, featureId: string,
updates: Partial<PlanSpec> updates: Partial<PlanSpec>
): Promise<void> { ): Promise<void> {
const featurePath = path.join(projectPath, '.automaker', 'features', featureId, 'feature.json'); // Use getFeatureDir helper for consistent path resolution
const featureDir = getFeatureDir(projectPath, featureId);
const featurePath = path.join(featureDir, 'feature.json');
try { try {
const data = (await secureFs.readFile(featurePath, 'utf-8')) as string; // Use recovery-enabled read for corrupted file handling
const feature = JSON.parse(data); const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
maxBackups: DEFAULT_BACKUP_COUNT,
autoRestore: true,
});
logRecoveryWarning(result, `Feature ${featureId}`, logger);
const feature = result.data;
if (!feature) {
logger.warn(`Feature ${featureId} not found or could not be recovered`);
return;
}
// Initialize planSpec if it doesn't exist // Initialize planSpec if it doesn't exist
if (!feature.planSpec) { if (!feature.planSpec) {
@@ -2116,7 +2203,9 @@ Format your response as a structured markdown document.`;
} }
feature.updatedAt = new Date().toISOString(); feature.updatedAt = new Date().toISOString();
await secureFs.writeFile(featurePath, JSON.stringify(feature, null, 2));
// Use atomic write with backup support
await atomicWriteJson(featurePath, feature, { backupCount: DEFAULT_BACKUP_COUNT });
} catch (error) { } catch (error) {
logger.error(`Failed to update planSpec for ${featureId}:`, error); logger.error(`Failed to update planSpec for ${featureId}:`, error);
} }
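The `atomicWriteJson` calls replacing plain `writeFile` above rely on the write-then-rename pattern. A hedged sketch of what such a helper might do — the real implementation lives in `@automaker/utils` and may differ in naming and backup layout:

```typescript
import * as fs from 'node:fs';

// Hedged sketch of an atomic JSON write with backup rotation; illustrative only.
function atomicWriteJsonSketch(filePath: string, data: unknown, backupCount = 1): void {
  // Rotate existing backups: file.bak1 -> file.bak2, and so on
  for (let i = backupCount - 1; i >= 1; i--) {
    const from = `${filePath}.bak${i}`;
    if (fs.existsSync(from)) fs.renameSync(from, `${filePath}.bak${i + 1}`);
  }
  // Preserve the current file as .bak1 before overwriting
  if (fs.existsSync(filePath)) fs.copyFileSync(filePath, `${filePath}.bak1`);

  // Write to a temp file first, then rename. rename() is atomic on the same
  // filesystem, so a crash mid-write never leaves a truncated feature.json.
  const tmpPath = `${filePath}.tmp`;
  fs.writeFileSync(tmpPath, JSON.stringify(data, null, 2));
  fs.renameSync(tmpPath, filePath);
}
```

The backups are what makes the recovery-enabled reads elsewhere in this diff useful: a corrupted primary file can fall back to `.bak1`.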
@@ -2133,25 +2222,34 @@ Format your response as a structured markdown document.`;
const allFeatures: Feature[] = []; const allFeatures: Feature[] = [];
const pendingFeatures: Feature[] = []; const pendingFeatures: Feature[] = [];
// Load all features (for dependency checking) // Load all features (for dependency checking) with recovery support
for (const entry of entries) { for (const entry of entries) {
if (entry.isDirectory()) { if (entry.isDirectory()) {
const featurePath = path.join(featuresDir, entry.name, 'feature.json'); const featurePath = path.join(featuresDir, entry.name, 'feature.json');
try {
const data = (await secureFs.readFile(featurePath, 'utf-8')) as string;
const feature = JSON.parse(data);
allFeatures.push(feature);
// Track pending features separately // Use recovery-enabled read for corrupted file handling
if ( const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
feature.status === 'pending' || maxBackups: DEFAULT_BACKUP_COUNT,
feature.status === 'ready' || autoRestore: true,
feature.status === 'backlog' });
) {
pendingFeatures.push(feature); logRecoveryWarning(result, `Feature ${entry.name}`, logger);
}
} catch { const feature = result.data;
// Skip invalid features if (!feature) {
// Skip features that couldn't be loaded or recovered
continue;
}
allFeatures.push(feature);
// Track pending features separately
if (
feature.status === 'pending' ||
feature.status === 'ready' ||
feature.status === 'backlog'
) {
pendingFeatures.push(feature);
} }
} }
} }
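The `readJsonWithRecovery` pattern used throughout this hunk can be sketched as a fallback chain over numbered backups. This is illustrative only; the real helper in `@automaker/utils` also supports `autoRestore` and richer result reporting:

```typescript
import * as fs from 'node:fs';

interface RecoveryResult<T> {
  data: T;
  recoveredFrom?: string;
}

// Illustrative sketch: try the primary file, then each .bakN backup in order,
// returning the fallback value if nothing parses.
function readJsonWithRecoverySketch<T>(
  filePath: string,
  fallback: T,
  maxBackups = 1
): RecoveryResult<T> {
  const candidates = [
    filePath,
    ...Array.from({ length: maxBackups }, (_, i) => `${filePath}.bak${i + 1}`),
  ];
  for (const candidate of candidates) {
    try {
      const data = JSON.parse(fs.readFileSync(candidate, 'utf-8')) as T;
      // Report when a backup was used so callers can log a warning
      return candidate === filePath ? { data } : { data, recoveredFrom: candidate };
    } catch {
      // Missing or corrupted file - try the next candidate
    }
  }
  return { data: fallback };
}
```

Returning the fallback (`null` in the diff) rather than throwing is what lets the loader skip unrecoverable features instead of aborting the whole scan.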
@@ -2225,7 +2323,13 @@ Format your response as a structured markdown document.`;
return planningPrompt + '\n\n---\n\n## Feature Request\n\n'; return planningPrompt + '\n\n---\n\n## Feature Request\n\n';
} }
private buildFeaturePrompt(feature: Feature): string { private buildFeaturePrompt(
feature: Feature,
taskExecutionPrompts: {
implementationInstructions: string;
playwrightVerificationInstructions: string;
}
): string {
const title = this.extractTitleFromDescription(feature.description); const title = this.extractTitleFromDescription(feature.description);
let prompt = `## Feature Implementation Task let prompt = `## Feature Implementation Task
@@ -2267,80 +2371,10 @@ You can use the Read tool to view these images at any time during implementation
// Add verification instructions based on testing mode // Add verification instructions based on testing mode
if (feature.skipTests) { if (feature.skipTests) {
// Manual verification - just implement the feature // Manual verification - just implement the feature
prompt += ` prompt += `\n${taskExecutionPrompts.implementationInstructions}`;
## Instructions
Implement this feature by:
1. First, explore the codebase to understand the existing structure
2. Plan your implementation approach
3. Write the necessary code changes
4. Ensure the code follows existing patterns and conventions
When done, wrap your final summary in <summary> tags like this:
<summary>
## Summary: [Feature Title]
### Changes Implemented
- [List of changes made]
### Files Modified
- [List of files]
### Notes for Developer
- [Any important notes]
</summary>
This helps parse your summary correctly in the output logs.`;
} else { } else {
// Automated testing - implement and verify with Playwright // Automated testing - implement and verify with Playwright
prompt += ` prompt += `\n${taskExecutionPrompts.implementationInstructions}\n\n${taskExecutionPrompts.playwrightVerificationInstructions}`;
## Instructions
Implement this feature by:
1. First, explore the codebase to understand the existing structure
2. Plan your implementation approach
3. Write the necessary code changes
4. Ensure the code follows existing patterns and conventions
## Verification with Playwright (REQUIRED)
After implementing the feature, you MUST verify it works correctly using Playwright:
1. **Create a temporary Playwright test** to verify the feature works as expected
2. **Run the test** to confirm the feature is working
3. **Delete the test file** after verification - this is a temporary verification test, not a permanent test suite addition
Example verification workflow:
\`\`\`bash
# Create a simple verification test
npx playwright test my-verification-test.spec.ts
# After successful verification, delete the test
rm my-verification-test.spec.ts
\`\`\`
The test should verify the core functionality of the feature. If the test fails, fix the implementation and re-test.
When done, wrap your final summary in <summary> tags like this:
<summary>
## Summary: [Feature Title]
### Changes Implemented
- [List of changes made]
### Files Modified
- [List of files]
### Verification Status
- [Describe how the feature was verified with Playwright]
### Notes for Developer
- [Any important notes]
</summary>
This helps parse your summary correctly in the output logs.`;
} }
return prompt; return prompt;
@@ -2910,6 +2944,12 @@ After generating the revised spec, output:
`Starting multi-agent execution: ${parsedTasks.length} tasks for feature ${featureId}` `Starting multi-agent execution: ${parsedTasks.length} tasks for feature ${featureId}`
); );
// Get customized prompts for task execution
const taskPrompts = await getPromptCustomization(
this.settingsService,
'[AutoMode]'
);
// Execute each task with a separate agent // Execute each task with a separate agent
for (let taskIndex = 0; taskIndex < parsedTasks.length; taskIndex++) { for (let taskIndex = 0; taskIndex < parsedTasks.length; taskIndex++) {
const task = parsedTasks[taskIndex]; const task = parsedTasks[taskIndex];
@@ -2941,6 +2981,7 @@ After generating the revised spec, output:
parsedTasks, parsedTasks,
taskIndex, taskIndex,
approvedPlanContent, approvedPlanContent,
taskPrompts.taskExecution.taskPromptTemplate,
userFeedback userFeedback
); );
@@ -3023,15 +3064,21 @@ After generating the revised spec, output:
`No parsed tasks, using single-agent execution for feature ${featureId}` `No parsed tasks, using single-agent execution for feature ${featureId}`
); );
const continuationPrompt = `The plan/specification has been approved. Now implement it. // Get customized prompts for continuation
${userFeedback ? `\n## User Feedback\n${userFeedback}\n` : ''} const taskPrompts = await getPromptCustomization(
## Approved Plan this.settingsService,
'[AutoMode]'
${approvedPlanContent} );
let continuationPrompt =
## Instructions taskPrompts.taskExecution.continuationAfterApprovalTemplate;
continuationPrompt = continuationPrompt.replace(
Implement all the changes described in the plan above.`; /\{\{userFeedback\}\}/g,
userFeedback || ''
);
continuationPrompt = continuationPrompt.replace(
/\{\{approvedPlan\}\}/g,
approvedPlanContent
);
const continuationStream = provider.executeQuery({ const continuationStream = provider.executeQuery({
prompt: continuationPrompt, prompt: continuationPrompt,
@@ -3151,17 +3198,16 @@ Implement all the changes described in the plan above.`;
throw new Error(`Feature ${featureId} not found`); throw new Error(`Feature ${featureId} not found`);
} }
const prompt = `## Continuing Feature Implementation // Get customized prompts from settings
const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
${this.buildFeaturePrompt(feature)} // Build the feature prompt
const featurePrompt = this.buildFeaturePrompt(feature, prompts.taskExecution);
## Previous Context // Use the resume feature template with variable substitution
The following is the output from a previous implementation attempt. Continue from where you left off: let prompt = prompts.taskExecution.resumeFeatureTemplate;
prompt = prompt.replace(/\{\{featurePrompt\}\}/g, featurePrompt);
${context} prompt = prompt.replace(/\{\{previousContext\}\}/g, context);
## Instructions
Review the previous work and continue the implementation. If the feature appears complete, verify it works correctly.`;
return this.executeFeature(projectPath, featureId, useWorktrees, false, undefined, { return this.executeFeature(projectPath, featureId, useWorktrees, false, undefined, {
continuationPrompt: prompt, continuationPrompt: prompt,
@@ -3282,68 +3328,42 @@ Review the previous work and continue the implementation. If the feature appears
allTasks: ParsedTask[], allTasks: ParsedTask[],
taskIndex: number, taskIndex: number,
planContent: string, planContent: string,
taskPromptTemplate: string,
userFeedback?: string userFeedback?: string
): string { ): string {
const completedTasks = allTasks.slice(0, taskIndex); const completedTasks = allTasks.slice(0, taskIndex);
const remainingTasks = allTasks.slice(taskIndex + 1); const remainingTasks = allTasks.slice(taskIndex + 1);
let prompt = `# Task Execution: ${task.id} // Build completed tasks string
const completedTasksStr =
completedTasks.length > 0
? `### Already Completed (${completedTasks.length} tasks)\n${completedTasks.map((t) => `- [x] ${t.id}: ${t.description}`).join('\n')}\n`
: '';
You are executing a specific task as part of a larger feature implementation. // Build remaining tasks string
const remainingTasksStr =
remainingTasks.length > 0
? `### Coming Up Next (${remainingTasks.length} tasks remaining)\n${remainingTasks
.slice(0, 3)
.map((t) => `- [ ] ${t.id}: ${t.description}`)
.join(
'\n'
)}${remainingTasks.length > 3 ? `\n... and ${remainingTasks.length - 3} more tasks` : ''}\n`
: '';
## Your Current Task // Build user feedback string
const userFeedbackStr = userFeedback ? `### User Feedback\n${userFeedback}\n` : '';
**Task ID:** ${task.id} // Use centralized template with variable substitution
**Description:** ${task.description} let prompt = taskPromptTemplate;
${task.filePath ? `**Primary File:** ${task.filePath}` : ''} prompt = prompt.replace(/\{\{taskId\}\}/g, task.id);
${task.phase ? `**Phase:** ${task.phase}` : ''} prompt = prompt.replace(/\{\{taskDescription\}\}/g, task.description);
prompt = prompt.replace(/\{\{taskFilePath\}\}/g, task.filePath || '');
## Context prompt = prompt.replace(/\{\{taskPhase\}\}/g, task.phase || '');
prompt = prompt.replace(/\{\{completedTasks\}\}/g, completedTasksStr);
`; prompt = prompt.replace(/\{\{remainingTasks\}\}/g, remainingTasksStr);
prompt = prompt.replace(/\{\{userFeedback\}\}/g, userFeedbackStr);
// Show what's already done prompt = prompt.replace(/\{\{planContent\}\}/g, planContent);
if (completedTasks.length > 0) {
prompt += `### Already Completed (${completedTasks.length} tasks)
${completedTasks.map((t) => `- [x] ${t.id}: ${t.description}`).join('\n')}
`;
}
// Show remaining tasks
if (remainingTasks.length > 0) {
prompt += `### Coming Up Next (${remainingTasks.length} tasks remaining)
${remainingTasks
.slice(0, 3)
.map((t) => `- [ ] ${t.id}: ${t.description}`)
.join('\n')}
${remainingTasks.length > 3 ? `... and ${remainingTasks.length - 3} more tasks` : ''}
`;
}
// Add user feedback if any
if (userFeedback) {
prompt += `### User Feedback
${userFeedback}
`;
}
// Add relevant excerpt from plan (just the task-related part to save context)
prompt += `### Reference: Full Plan
<details>
${planContent}
</details>
## Instructions
1. Focus ONLY on completing task ${task.id}: "${task.description}"
2. Do not work on other tasks
3. Use the existing codebase patterns
4. When done, summarize what you implemented
Begin implementing task ${task.id} now.`;
return prompt; return prompt;
} }
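The `remainingTasksStr` built above caps the preview at three upcoming tasks. A standalone sketch of that summarization (the helper name is hypothetical; `ParsedTask` fields are taken from the diff):

```typescript
interface ParsedTask {
  id: string;
  description: string;
}

// Sketch of the "show at most 3 upcoming tasks" preview used in the task prompt.
function summarizeRemaining(remaining: ParsedTask[], limit = 3): string {
  if (remaining.length === 0) return '';
  const shown = remaining
    .slice(0, limit)
    .map((t) => `- [ ] ${t.id}: ${t.description}`)
    .join('\n');
  const overflow =
    remaining.length > limit ? `\n... and ${remaining.length - limit} more tasks` : '';
  return `### Coming Up Next (${remaining.length} tasks remaining)\n${shown}${overflow}\n`;
}
```

Truncating the preview keeps each per-task agent prompt small while still giving the model a sense of where its task sits in the plan.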
@@ -3461,31 +3481,39 @@ Begin implementing task ${task.id} now.`;
for (const entry of entries) { for (const entry of entries) {
if (entry.isDirectory()) { if (entry.isDirectory()) {
const featurePath = path.join(featuresDir, entry.name, 'feature.json'); const featurePath = path.join(featuresDir, entry.name, 'feature.json');
try {
const data = (await secureFs.readFile(featurePath, 'utf-8')) as string;
const feature = JSON.parse(data) as Feature;
// Check if feature was interrupted (in_progress or pipeline_*) // Use recovery-enabled read for corrupted file handling
if ( const result = await readJsonWithRecovery<Feature | null>(featurePath, null, {
feature.status === 'in_progress' || maxBackups: DEFAULT_BACKUP_COUNT,
(feature.status && feature.status.startsWith('pipeline_')) autoRestore: true,
) { });
// Verify it has existing context (agent-output.md)
const featureDir = getFeatureDir(projectPath, feature.id); logRecoveryWarning(result, `Feature ${entry.name}`, logger);
const contextPath = path.join(featureDir, 'agent-output.md');
try { const feature = result.data;
await secureFs.access(contextPath); if (!feature) {
interruptedFeatures.push(feature); // Skip features that couldn't be loaded or recovered
logger.info( continue;
`Found interrupted feature: ${feature.id} (${feature.title}) - status: ${feature.status}` }
);
} catch { // Check if feature was interrupted (in_progress or pipeline_*)
// No context file, skip this feature - it will be restarted fresh if (
logger.info(`Interrupted feature ${feature.id} has no context, will restart fresh`); feature.status === 'in_progress' ||
} (feature.status && feature.status.startsWith('pipeline_'))
) {
// Verify it has existing context (agent-output.md)
const featureDir = getFeatureDir(projectPath, feature.id);
const contextPath = path.join(featureDir, 'agent-output.md');
try {
await secureFs.access(contextPath);
interruptedFeatures.push(feature);
logger.info(
`Found interrupted feature: ${feature.id} (${feature.title}) - status: ${feature.status}`
);
} catch {
// No context file, skip this feature - it will be restarted fresh
logger.info(`Interrupted feature ${feature.id} has no context, will restart fresh`);
} }
} catch {
// Skip invalid features
} }
} }
} }
@@ -3553,32 +3581,13 @@ Begin implementing task ${task.id} now.`;
// Limit output to avoid token limits // Limit output to avoid token limits
const truncatedOutput = agentOutput.length > 10000 ? agentOutput.slice(-10000) : agentOutput; const truncatedOutput = agentOutput.length > 10000 ? agentOutput.slice(-10000) : agentOutput;
const userPrompt = `You are an Architecture Decision Record (ADR) extractor. Analyze this implementation and return ONLY JSON with learnings. No explanations. // Get customized prompts from settings
const prompts = await getPromptCustomization(this.settingsService, '[AutoMode]');
Feature: "${feature.title}" // Build user prompt using centralized template with variable substitution
let userPrompt = prompts.taskExecution.learningExtractionUserPromptTemplate;
Implementation log: userPrompt = userPrompt.replace(/\{\{featureTitle\}\}/g, feature.title || '');
${truncatedOutput} userPrompt = userPrompt.replace(/\{\{implementationLog\}\}/g, truncatedOutput);
Extract MEANINGFUL learnings - not obvious things. For each, capture:
- DECISIONS: Why this approach vs alternatives? What would break if changed?
- GOTCHAS: What was unexpected? What's the root cause? How to avoid?
- PATTERNS: Why this pattern? What problem does it solve? Trade-offs?
JSON format ONLY (no markdown, no text):
{"learnings": [{
"category": "architecture|api|ui|database|auth|testing|performance|security|gotchas",
"type": "decision|gotcha|pattern",
"content": "What was done/learned",
"context": "Problem being solved or situation faced",
"why": "Reasoning - why this approach",
"rejected": "Alternative considered and why rejected",
"tradeoffs": "What became easier/harder",
"breaking": "What breaks if this is changed/removed"
}]}
IMPORTANT: Only include NON-OBVIOUS learnings with real reasoning. Skip trivial patterns.
If nothing notable: {"learnings": []}`;
try { try {
// Get model from phase settings // Get model from phase settings
@@ -3612,8 +3621,7 @@ If nothing notable: {"learnings": []}`;
cwd: projectPath, cwd: projectPath,
maxTurns: 1, maxTurns: 1,
allowedTools: [], allowedTools: [],
systemPrompt: systemPrompt: prompts.taskExecution.learningExtractionSystemPrompt,
'You are a JSON extraction assistant. You MUST respond with ONLY valid JSON, no explanations, no markdown, no other text. Extract learnings from the provided implementation context and return them as JSON.',
}); });
const responseText = result.text; const responseText = result.text;

View File

@@ -22,6 +22,29 @@ export class ClaudeUsageService {
private timeout = 30000; // 30 second timeout private timeout = 30000; // 30 second timeout
private isWindows = os.platform() === 'win32'; private isWindows = os.platform() === 'win32';
private isLinux = os.platform() === 'linux'; private isLinux = os.platform() === 'linux';
// On Windows, ConPTY requires AttachConsole which fails in Electron/service mode
// Detect Electron by checking for electron-specific env vars or process properties
// When in Electron, always use winpty to avoid ConPTY's AttachConsole errors
private isElectron =
!!(process.versions && (process.versions as Record<string, string>).electron) ||
!!process.env.ELECTRON_RUN_AS_NODE;
private useConptyFallback = false; // Track if we need to use winpty fallback on Windows
/**
* Kill a PTY process with platform-specific handling.
* Windows doesn't support Unix signals like SIGTERM, so we call kill() without arguments.
* On Unix-like systems (macOS, Linux), we can specify the signal.
*
* @param ptyProcess - The PTY process to kill
* @param signal - The signal to send on Unix-like systems (default: 'SIGTERM')
*/
private killPtyProcess(ptyProcess: pty.IPty, signal: string = 'SIGTERM'): void {
if (this.isWindows) {
ptyProcess.kill();
} else {
ptyProcess.kill(signal);
}
}
/** /**
* Check if Claude CLI is available on the system * Check if Claude CLI is available on the system
@@ -181,37 +204,94 @@ export class ClaudeUsageService {
? ['/c', 'claude', '--add-dir', workingDirectory] ? ['/c', 'claude', '--add-dir', workingDirectory]
: ['-c', `claude --add-dir "${workingDirectory}"`]; : ['-c', `claude --add-dir "${workingDirectory}"`];
// Using 'any' for ptyProcess because node-pty types don't include 'killed' property
// eslint-disable-next-line @typescript-eslint/no-explicit-any
let ptyProcess: any = null; let ptyProcess: any = null;
// Build PTY spawn options
const ptyOptions: pty.IPtyForkOptions = {
name: 'xterm-256color',
cols: 120,
rows: 30,
cwd: workingDirectory,
env: {
...process.env,
TERM: 'xterm-256color',
} as Record<string, string>,
};
// On Windows, always use winpty instead of ConPTY
// ConPTY requires AttachConsole which fails in many contexts:
// - Electron apps without a console
// - VS Code integrated terminal
// - Spawned from other applications
// The error happens in a subprocess so we can't catch it - must proactively disable
if (this.isWindows) {
(ptyOptions as pty.IWindowsPtyForkOptions).useConpty = false;
logger.info(
'[executeClaudeUsageCommandPty] Using winpty on Windows (ConPTY disabled for compatibility)'
);
}
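The proactive winpty selection above boils down to a platform check on the spawn options. A sketch under stated assumptions — the field names mirror node-pty's `IWindowsPtyForkOptions`, but this is illustrative, not the service's exact code:

```typescript
interface PtySpawnOptionsSketch {
  name: string;
  cols: number;
  rows: number;
  useConpty?: boolean;
}

// Sketch of the platform-based PTY option selection described above.
function buildPtyOptions(platform: string): PtySpawnOptionsSketch {
  const options: PtySpawnOptionsSketch = { name: 'xterm-256color', cols: 120, rows: 30 };
  if (platform === 'win32') {
    // ConPTY calls AttachConsole under the hood; with no attached console
    // (Electron, service mode) that fails in a subprocess we cannot catch,
    // so winpty is forced up front rather than after an error.
    options.useConpty = false;
  }
  return options;
}
```

Deciding before spawning is the key design choice: because the AttachConsole failure happens in a subprocess, a try/catch around `pty.spawn` alone cannot reliably intercept it.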
try { try {
ptyProcess = pty.spawn(shell, args, { ptyProcess = pty.spawn(shell, args, ptyOptions);
name: 'xterm-256color',
cols: 120,
rows: 30,
cwd: workingDirectory,
env: {
...process.env,
TERM: 'xterm-256color',
} as Record<string, string>,
});
} catch (spawnError) {
  const errorMessage = spawnError instanceof Error ? spawnError.message : String(spawnError);

  // Check for Windows ConPTY-specific errors
  if (this.isWindows && errorMessage.includes('AttachConsole failed')) {
    // ConPTY failed - try winpty fallback
    if (!this.useConptyFallback) {
      logger.warn(
        '[executeClaudeUsageCommandPty] ConPTY AttachConsole failed, retrying with winpty fallback'
      );
      this.useConptyFallback = true;
      try {
        (ptyOptions as pty.IWindowsPtyForkOptions).useConpty = false;
        ptyProcess = pty.spawn(shell, args, ptyOptions);
        logger.info(
          '[executeClaudeUsageCommandPty] Successfully spawned with winpty fallback'
        );
      } catch (fallbackError) {
        const fallbackMessage =
          fallbackError instanceof Error ? fallbackError.message : String(fallbackError);
        logger.error(
          '[executeClaudeUsageCommandPty] Winpty fallback also failed:',
          fallbackMessage
        );
        reject(
          new Error(
            `Windows PTY unavailable: Both ConPTY and winpty failed. This typically happens when running in Electron without a console. ConPTY error: ${errorMessage}. Winpty error: ${fallbackMessage}`
          )
        );
        return;
      }
    } else {
      logger.error('[executeClaudeUsageCommandPty] Winpty fallback failed:', errorMessage);
      reject(
        new Error(
          `Windows PTY unavailable: ${errorMessage}. The application is running without console access (common in Electron). Try running from a terminal window.`
        )
      );
      return;
    }
  } else {
    logger.error('[executeClaudeUsageCommandPty] Failed to spawn PTY:', errorMessage);
    reject(
      new Error(
        `Unable to access terminal: ${errorMessage}. Claude CLI may not be available or PTY support is limited in this environment.`
      )
    );
    return;
  }
}
const timeoutId = setTimeout(() => {
  if (!settled) {
    settled = true;
    if (ptyProcess && !ptyProcess.killed) {
      this.killPtyProcess(ptyProcess);
    }
    // Don't fail if we have data - return it instead
    if (output.includes('Current session')) {
@@ -244,16 +324,23 @@ export class ClaudeUsageService {
const cleanOutput = output.replace(/\x1B\[[0-9;]*[A-Za-z]/g, '');

// Check for specific authentication/permission errors
// Must be very specific to avoid false positives from garbled terminal encoding
// Removed permission_error check as it was causing false positives with winpty encoding
const authChecks = {
  oauth: cleanOutput.includes('OAuth token does not meet scope requirement'),
  tokenExpired: cleanOutput.includes('token_expired'),
  // Only match if it looks like a JSON API error response
  authError:
    cleanOutput.includes('"type":"authentication_error"') ||
    cleanOutput.includes('"type": "authentication_error"'),
};
const hasAuthError = authChecks.oauth || authChecks.tokenExpired || authChecks.authError;

if (hasAuthError) {
  if (!settled) {
    settled = true;
    if (ptyProcess && !ptyProcess.killed) {
      this.killPtyProcess(ptyProcess);
    }
    reject(
      new Error(
@@ -265,11 +352,16 @@ export class ClaudeUsageService {
}

// Check if we've seen the usage data (look for "Current session" or the TUI Usage header)
// Also check for percentage patterns that appear in usage output
const hasUsageIndicators =
  cleanOutput.includes('Current session') ||
  (cleanOutput.includes('Usage') && cleanOutput.includes('% left')) ||
  // Additional patterns for winpty - look for percentage patterns
  /\d+%\s*(left|used|remaining)/i.test(cleanOutput) ||
  cleanOutput.includes('Resets in') ||
  cleanOutput.includes('Current week');

if (!hasSeenUsageData && hasUsageIndicators) {
  hasSeenUsageData = true;
  // Wait for full output, then send escape to exit
  setTimeout(() => {
@@ -277,9 +369,10 @@ export class ClaudeUsageService {
ptyProcess.write('\x1b'); // Send escape key
// Fallback: if ESC doesn't exit (Linux), use SIGTERM after 2s
// Windows doesn't support signals, so killPtyProcess handles platform differences
setTimeout(() => {
  if (!settled && ptyProcess && !ptyProcess.killed) {
    this.killPtyProcess(ptyProcess);
  }
}, 2000);
}
@@ -307,10 +400,18 @@ export class ClaudeUsageService {
}

// Detect REPL prompt and send /usage command
// On Windows with winpty, Unicode prompt char gets garbled, so also check for ASCII indicators
const isReplReady =
  cleanOutput.includes('') ||
  cleanOutput.includes('? for shortcuts') ||
  // Fallback for winpty garbled encoding - detect CLI welcome screen elements
  (cleanOutput.includes('Welcome back') && cleanOutput.includes('Claude')) ||
  (cleanOutput.includes('Tips for getting started') && cleanOutput.includes('Claude')) ||
  // Detect model indicator which appears when REPL is ready
  (cleanOutput.includes('Opus') && cleanOutput.includes('Claude API')) ||
  (cleanOutput.includes('Sonnet') && cleanOutput.includes('Claude API'));

if (!hasSentCommand && isReplReady) {
  hasSentCommand = true;
  // Wait for REPL to fully settle
  setTimeout(() => {
@@ -347,11 +448,9 @@ export class ClaudeUsageService {
if (settled) return;
settled = true;

// Check for auth errors - must be specific to avoid false positives
// Removed permission_error check as it was causing false positives with winpty encoding
if (output.includes('token_expired') || output.includes('"type":"authentication_error"')) {
  reject(new Error("Authentication required - please run 'claude login'"));
  return;
}
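The tightened detection heuristics above can be exercised in isolation. The following is a minimal sketch (the helper names `stripAnsi`, `hasAuthError`, and `hasUsageIndicators` are illustrative, not from the source) showing how the string checks behave on sample terminal output:

```typescript
// Strip ANSI escape sequences, mirroring the cleanOutput step above.
const stripAnsi = (s: string): string => s.replace(/\x1B\[[0-9;]*[A-Za-z]/g, '');

// Auth errors must look like JSON API error responses, so a bare
// "permission_error" from garbled winpty output no longer matches.
function hasAuthError(raw: string): boolean {
  const clean = stripAnsi(raw);
  return (
    clean.includes('OAuth token does not meet scope requirement') ||
    clean.includes('token_expired') ||
    clean.includes('"type":"authentication_error"') ||
    clean.includes('"type": "authentication_error"')
  );
}

// Usage output is recognized via several redundant patterns so that
// garbled winpty encoding still trips at least one of them.
function hasUsageIndicators(raw: string): boolean {
  const clean = stripAnsi(raw);
  return (
    clean.includes('Current session') ||
    (clean.includes('Usage') && clean.includes('% left')) ||
    /\d+%\s*(left|used|remaining)/i.test(clean) ||
    clean.includes('Resets in') ||
    clean.includes('Current week')
  );
}

console.log(hasAuthError('garbled permission_error text')); // false
console.log(hasAuthError('{"type":"authentication_error"}')); // true
console.log(hasUsageIndicators('\x1B[2mSonnet 42% left\x1B[0m')); // true
```

The percentage regex is the key winpty fallback: even when the TUI headers are mangled, a plain `42% left` fragment usually survives.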

File diff suppressed because it is too large

View File

@@ -379,10 +379,11 @@ class DevServerService {
// Create server info early so we can reference it in handlers
// We'll add it to runningServers after verifying the process started successfully
const hostname = process.env.HOSTNAME || 'localhost';
const serverInfo: DevServerInfo = {
  worktreePath,
  port,
  url: `http://${hostname}:${port}`,
  process: devProcess,
  startedAt: new Date(),
  scrollbackBuffer: '',
@@ -474,7 +475,7 @@ class DevServerService {
result: {
  worktreePath,
  port,
  url: `http://${hostname}:${port}`,
  message: `Dev server started on port ${port}`,
},
};
@@ -594,6 +595,7 @@ class DevServerService {
result?: {
  worktreePath: string;
  port: number;
  url: string;
  logs: string;
  startedAt: string;
};
@@ -613,6 +615,7 @@ class DevServerService {
result: {
  worktreePath: server.worktreePath,
  port: server.port,
  url: server.url,
  logs: server.scrollbackBuffer,
  startedAt: server.startedAt.toISOString(),
},
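The change in this file is small but easy to get wrong: every place that builds the dev server URL now reads the host from the `HOSTNAME` environment variable. A minimal sketch of that construction (the function name `buildDevServerUrl` is illustrative):

```typescript
// Build the dev server URL from an env map, defaulting to localhost
// when HOSTNAME is unset or empty - matching the diff above.
function buildDevServerUrl(
  port: number,
  env: Record<string, string | undefined> = process.env
): string {
  const hostname = env.HOSTNAME || 'localhost';
  return `http://${hostname}:${port}`;
}

console.log(buildDevServerUrl(3000, {})); // http://localhost:3000
console.log(buildDevServerUrl(8080, { HOSTNAME: 'devbox.local' })); // http://devbox.local:8080
```

Because `||` is used rather than `??`, an empty `HOSTNAME=""` also falls back to `localhost`, which is usually the desired behavior for a misconfigured shell.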

View File

@@ -0,0 +1,338 @@
/**
* Event History Service - Stores and retrieves event records for debugging and replay
*
* Provides persistent storage for events in {projectPath}/.automaker/events/
* Each event is stored as a separate JSON file with an index for quick listing.
*
* Features:
* - Store events when they occur
* - List and filter historical events
* - Replay events to test hook configurations
* - Delete old events to manage disk space
*/
import { createLogger } from '@automaker/utils';
import * as secureFs from '../lib/secure-fs.js';
import {
getEventHistoryDir,
getEventHistoryIndexPath,
getEventPath,
ensureEventHistoryDir,
} from '@automaker/platform';
import type {
StoredEvent,
StoredEventIndex,
StoredEventSummary,
EventHistoryFilter,
EventHookTrigger,
} from '@automaker/types';
import { DEFAULT_EVENT_HISTORY_INDEX } from '@automaker/types';
import { randomUUID } from 'crypto';
const logger = createLogger('EventHistoryService');
/** Maximum events to keep in the index (oldest are pruned) */
const MAX_EVENTS_IN_INDEX = 1000;
/**
* Atomic file write - write to temp file then rename
*/
async function atomicWriteJson(filePath: string, data: unknown): Promise<void> {
const tempPath = `${filePath}.tmp.${Date.now()}`;
const content = JSON.stringify(data, null, 2);
try {
await secureFs.writeFile(tempPath, content, 'utf-8');
await secureFs.rename(tempPath, filePath);
} catch (error) {
try {
await secureFs.unlink(tempPath);
} catch {
// Ignore cleanup errors
}
throw error;
}
}
/**
* Safely read JSON file with fallback to default
*/
async function readJsonFile<T>(filePath: string, defaultValue: T): Promise<T> {
try {
const content = (await secureFs.readFile(filePath, 'utf-8')) as string;
return JSON.parse(content) as T;
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
return defaultValue;
}
logger.error(`Error reading ${filePath}:`, error);
return defaultValue;
}
}
/**
* Input for storing a new event
*/
export interface StoreEventInput {
trigger: EventHookTrigger;
projectPath: string;
featureId?: string;
featureName?: string;
error?: string;
errorType?: string;
passes?: boolean;
metadata?: Record<string, unknown>;
}
/**
* EventHistoryService - Manages persistent storage of events
*/
export class EventHistoryService {
/**
* Store a new event to history
*
* @param input - Event data to store
* @returns Promise resolving to the stored event
*/
async storeEvent(input: StoreEventInput): Promise<StoredEvent> {
const { projectPath, trigger, featureId, featureName, error, errorType, passes, metadata } =
input;
// Ensure events directory exists
await ensureEventHistoryDir(projectPath);
const eventId = `evt-${Date.now()}-${randomUUID().slice(0, 8)}`;
const timestamp = new Date().toISOString();
const projectName = this.extractProjectName(projectPath);
const event: StoredEvent = {
id: eventId,
trigger,
timestamp,
projectPath,
projectName,
featureId,
featureName,
error,
errorType,
passes,
metadata,
};
// Write the full event to its own file
const eventPath = getEventPath(projectPath, eventId);
await atomicWriteJson(eventPath, event);
// Update the index
await this.addToIndex(projectPath, event);
logger.info(`Stored event ${eventId} (${trigger}) for project ${projectName}`);
return event;
}
/**
* Get all events for a project with optional filtering
*
* @param projectPath - Absolute path to project directory
* @param filter - Optional filter criteria
* @returns Promise resolving to array of event summaries
*/
async getEvents(projectPath: string, filter?: EventHistoryFilter): Promise<StoredEventSummary[]> {
const indexPath = getEventHistoryIndexPath(projectPath);
const index = await readJsonFile<StoredEventIndex>(indexPath, DEFAULT_EVENT_HISTORY_INDEX);
let events = [...index.events];
// Apply filters
if (filter) {
if (filter.trigger) {
events = events.filter((e) => e.trigger === filter.trigger);
}
if (filter.featureId) {
events = events.filter((e) => e.featureId === filter.featureId);
}
if (filter.since) {
const sinceDate = new Date(filter.since).getTime();
events = events.filter((e) => new Date(e.timestamp).getTime() >= sinceDate);
}
if (filter.until) {
const untilDate = new Date(filter.until).getTime();
events = events.filter((e) => new Date(e.timestamp).getTime() <= untilDate);
}
}
// Sort by timestamp (newest first)
events.sort((a, b) => new Date(b.timestamp).getTime() - new Date(a.timestamp).getTime());
// Apply pagination
if (filter?.offset) {
events = events.slice(filter.offset);
}
if (filter?.limit) {
events = events.slice(0, filter.limit);
}
return events;
}
/**
* Get a single event by ID
*
* @param projectPath - Absolute path to project directory
* @param eventId - Event identifier
* @returns Promise resolving to the full event or null if not found
*/
async getEvent(projectPath: string, eventId: string): Promise<StoredEvent | null> {
const eventPath = getEventPath(projectPath, eventId);
try {
const content = (await secureFs.readFile(eventPath, 'utf-8')) as string;
return JSON.parse(content) as StoredEvent;
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
return null;
}
logger.error(`Error reading event ${eventId}:`, error);
return null;
}
}
/**
* Delete an event by ID
*
* @param projectPath - Absolute path to project directory
* @param eventId - Event identifier
* @returns Promise resolving to true if deleted
*/
async deleteEvent(projectPath: string, eventId: string): Promise<boolean> {
// Remove from index
const indexPath = getEventHistoryIndexPath(projectPath);
const index = await readJsonFile<StoredEventIndex>(indexPath, DEFAULT_EVENT_HISTORY_INDEX);
const initialLength = index.events.length;
index.events = index.events.filter((e) => e.id !== eventId);
if (index.events.length === initialLength) {
return false; // Event not found in index
}
await atomicWriteJson(indexPath, index);
// Delete the event file
const eventPath = getEventPath(projectPath, eventId);
try {
await secureFs.unlink(eventPath);
} catch (error) {
if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
logger.error(`Error deleting event file ${eventId}:`, error);
}
}
logger.info(`Deleted event ${eventId}`);
return true;
}
/**
* Clear all events for a project
*
* @param projectPath - Absolute path to project directory
* @returns Promise resolving to number of events cleared
*/
async clearEvents(projectPath: string): Promise<number> {
const indexPath = getEventHistoryIndexPath(projectPath);
const index = await readJsonFile<StoredEventIndex>(indexPath, DEFAULT_EVENT_HISTORY_INDEX);
const count = index.events.length;
// Delete all event files
for (const event of index.events) {
const eventPath = getEventPath(projectPath, event.id);
try {
await secureFs.unlink(eventPath);
} catch (error) {
if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
logger.error(`Error deleting event file ${event.id}:`, error);
}
}
}
// Reset the index
await atomicWriteJson(indexPath, DEFAULT_EVENT_HISTORY_INDEX);
logger.info(`Cleared ${count} events for project`);
return count;
}
/**
* Get event count for a project
*
* @param projectPath - Absolute path to project directory
* @param filter - Optional filter criteria
* @returns Promise resolving to event count
*/
async getEventCount(projectPath: string, filter?: EventHistoryFilter): Promise<number> {
const events = await this.getEvents(projectPath, {
...filter,
limit: undefined,
offset: undefined,
});
return events.length;
}
/**
* Add an event to the index (internal)
*/
private async addToIndex(projectPath: string, event: StoredEvent): Promise<void> {
const indexPath = getEventHistoryIndexPath(projectPath);
const index = await readJsonFile<StoredEventIndex>(indexPath, DEFAULT_EVENT_HISTORY_INDEX);
const summary: StoredEventSummary = {
id: event.id,
trigger: event.trigger,
timestamp: event.timestamp,
featureName: event.featureName,
featureId: event.featureId,
};
// Add to beginning (newest first)
index.events.unshift(summary);
// Prune old events if over limit
if (index.events.length > MAX_EVENTS_IN_INDEX) {
const removed = index.events.splice(MAX_EVENTS_IN_INDEX);
// Delete the pruned event files
for (const oldEvent of removed) {
const eventPath = getEventPath(projectPath, oldEvent.id);
try {
await secureFs.unlink(eventPath);
} catch {
// Ignore deletion errors for pruned events
}
}
logger.info(`Pruned ${removed.length} old events from history`);
}
await atomicWriteJson(indexPath, index);
}
/**
* Extract project name from path
*/
private extractProjectName(projectPath: string): string {
const parts = projectPath.split(/[/\\]/);
return parts[parts.length - 1] || projectPath;
}
}
// Singleton instance
let eventHistoryServiceInstance: EventHistoryService | null = null;
/**
* Get the singleton event history service instance
*/
export function getEventHistoryService(): EventHistoryService {
if (!eventHistoryServiceInstance) {
eventHistoryServiceInstance = new EventHistoryService();
}
return eventHistoryServiceInstance;
}
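The write path of this service relies on the temp-file-plus-rename pattern from `atomicWriteJson` so a crash mid-write never leaves a half-written index. A self-contained sketch of the same pattern using plain `fs.promises` (the service itself goes through `secureFs`):

```typescript
import { promises as fs } from 'fs';
import * as path from 'path';
import * as os from 'os';

// Write JSON to a temp file, then rename over the target. On POSIX
// filesystems the rename is atomic, so readers see either the old
// document or the new one - never a partial write.
async function atomicWriteJson(filePath: string, data: unknown): Promise<void> {
  const tempPath = `${filePath}.tmp.${Date.now()}`;
  try {
    await fs.writeFile(tempPath, JSON.stringify(data, null, 2), 'utf-8');
    await fs.rename(tempPath, filePath);
  } catch (error) {
    await fs.unlink(tempPath).catch(() => {}); // best-effort cleanup
    throw error;
  }
}

(async () => {
  const target = path.join(os.tmpdir(), 'events-index-demo.json');
  await atomicWriteJson(target, { version: 1, events: [] });
  console.log(JSON.parse(await fs.readFile(target, 'utf-8')).version); // 1
})();
```

Note that the index is capped at `MAX_EVENTS_IN_INDEX` entries, so the file being rewritten stays small enough that the full-rewrite-per-event approach remains cheap.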

View File

@@ -0,0 +1,373 @@
/**
* Event Hook Service - Executes custom actions when system events occur
*
* Listens to the event emitter and triggers configured hooks:
* - Shell commands: Executed with configurable timeout
* - HTTP webhooks: POST/GET/PUT/PATCH requests with variable substitution
*
* Also stores events to history for debugging and replay.
*
* Supported events:
* - feature_created: A new feature was created
* - feature_success: Feature completed successfully
* - feature_error: Feature failed with an error
* - auto_mode_complete: Auto mode finished all features (idle state)
* - auto_mode_error: Auto mode encountered a critical error
*/
import { exec } from 'child_process';
import { promisify } from 'util';
import { createLogger } from '@automaker/utils';
import type { EventEmitter } from '../lib/events.js';
import type { SettingsService } from './settings-service.js';
import type { EventHistoryService } from './event-history-service.js';
import type {
EventHook,
EventHookTrigger,
EventHookShellAction,
EventHookHttpAction,
} from '@automaker/types';
const execAsync = promisify(exec);
const logger = createLogger('EventHooks');
/** Default timeout for shell commands (30 seconds) */
const DEFAULT_SHELL_TIMEOUT = 30000;
/** Default timeout for HTTP requests (10 seconds) */
const DEFAULT_HTTP_TIMEOUT = 10000;
/**
* Context available for variable substitution in hooks
*/
interface HookContext {
featureId?: string;
featureName?: string;
projectPath?: string;
projectName?: string;
error?: string;
errorType?: string;
timestamp: string;
eventType: EventHookTrigger;
}
/**
* Auto-mode event payload structure
*/
interface AutoModeEventPayload {
type?: string;
featureId?: string;
passes?: boolean;
message?: string;
error?: string;
errorType?: string;
projectPath?: string;
}
/**
* Feature created event payload structure
*/
interface FeatureCreatedPayload {
featureId: string;
featureName?: string;
projectPath: string;
}
/**
* Event Hook Service
*
* Manages execution of user-configured event hooks in response to system events.
* Also stores events to history for debugging and replay.
*/
export class EventHookService {
private emitter: EventEmitter | null = null;
private settingsService: SettingsService | null = null;
private eventHistoryService: EventHistoryService | null = null;
private unsubscribe: (() => void) | null = null;
/**
* Initialize the service with event emitter, settings service, and event history service
*/
initialize(
emitter: EventEmitter,
settingsService: SettingsService,
eventHistoryService?: EventHistoryService
): void {
this.emitter = emitter;
this.settingsService = settingsService;
this.eventHistoryService = eventHistoryService || null;
// Subscribe to events
this.unsubscribe = emitter.subscribe((type, payload) => {
if (type === 'auto-mode:event') {
this.handleAutoModeEvent(payload as AutoModeEventPayload);
} else if (type === 'feature:created') {
this.handleFeatureCreatedEvent(payload as FeatureCreatedPayload);
}
});
logger.info('Event hook service initialized');
}
/**
* Cleanup subscriptions
*/
destroy(): void {
if (this.unsubscribe) {
this.unsubscribe();
this.unsubscribe = null;
}
this.emitter = null;
this.settingsService = null;
this.eventHistoryService = null;
}
/**
* Handle auto-mode events and trigger matching hooks
*/
private async handleAutoModeEvent(payload: AutoModeEventPayload): Promise<void> {
if (!payload.type) return;
// Map internal event types to hook triggers
let trigger: EventHookTrigger | null = null;
switch (payload.type) {
case 'auto_mode_feature_complete':
trigger = payload.passes ? 'feature_success' : 'feature_error';
break;
case 'auto_mode_error':
// Feature-level error (has featureId) vs auto-mode level error
trigger = payload.featureId ? 'feature_error' : 'auto_mode_error';
break;
case 'auto_mode_idle':
trigger = 'auto_mode_complete';
break;
default:
// Other event types don't trigger hooks
return;
}
if (!trigger) return;
// Build context for variable substitution
const context: HookContext = {
featureId: payload.featureId,
projectPath: payload.projectPath,
projectName: payload.projectPath ? this.extractProjectName(payload.projectPath) : undefined,
error: payload.error || payload.message,
errorType: payload.errorType,
timestamp: new Date().toISOString(),
eventType: trigger,
};
// Execute matching hooks (pass passes for feature completion events)
await this.executeHooksForTrigger(trigger, context, { passes: payload.passes });
}
/**
* Handle feature:created events and trigger matching hooks
*/
private async handleFeatureCreatedEvent(payload: FeatureCreatedPayload): Promise<void> {
const context: HookContext = {
featureId: payload.featureId,
featureName: payload.featureName,
projectPath: payload.projectPath,
projectName: this.extractProjectName(payload.projectPath),
timestamp: new Date().toISOString(),
eventType: 'feature_created',
};
await this.executeHooksForTrigger('feature_created', context);
}
/**
* Execute all enabled hooks matching the given trigger and store event to history
*/
private async executeHooksForTrigger(
trigger: EventHookTrigger,
context: HookContext,
additionalData?: { passes?: boolean }
): Promise<void> {
// Store event to history (even if no hooks match)
if (this.eventHistoryService && context.projectPath) {
try {
await this.eventHistoryService.storeEvent({
trigger,
projectPath: context.projectPath,
featureId: context.featureId,
featureName: context.featureName,
error: context.error,
errorType: context.errorType,
passes: additionalData?.passes,
});
} catch (error) {
logger.error('Failed to store event to history:', error);
}
}
if (!this.settingsService) {
logger.warn('Settings service not available');
return;
}
try {
const settings = await this.settingsService.getGlobalSettings();
const hooks = settings.eventHooks || [];
// Filter to enabled hooks matching this trigger
const matchingHooks = hooks.filter((hook) => hook.enabled && hook.trigger === trigger);
if (matchingHooks.length === 0) {
return;
}
logger.info(`Executing ${matchingHooks.length} hook(s) for trigger: ${trigger}`);
// Execute hooks in parallel (don't wait for one to finish before starting next)
await Promise.allSettled(matchingHooks.map((hook) => this.executeHook(hook, context)));
} catch (error) {
logger.error('Error executing hooks:', error);
}
}
/**
* Execute a single hook
*/
private async executeHook(hook: EventHook, context: HookContext): Promise<void> {
const hookName = hook.name || hook.id;
try {
if (hook.action.type === 'shell') {
await this.executeShellHook(hook.action, context, hookName);
} else if (hook.action.type === 'http') {
await this.executeHttpHook(hook.action, context, hookName);
}
} catch (error) {
logger.error(`Hook "${hookName}" failed:`, error);
}
}
/**
* Execute a shell command hook
*/
private async executeShellHook(
action: EventHookShellAction,
context: HookContext,
hookName: string
): Promise<void> {
const command = this.substituteVariables(action.command, context);
const timeout = action.timeout || DEFAULT_SHELL_TIMEOUT;
logger.info(`Executing shell hook "${hookName}": ${command}`);
try {
const { stdout, stderr } = await execAsync(command, {
timeout,
maxBuffer: 1024 * 1024, // 1MB buffer
});
if (stdout) {
logger.debug(`Hook "${hookName}" stdout: ${stdout.trim()}`);
}
if (stderr) {
logger.warn(`Hook "${hookName}" stderr: ${stderr.trim()}`);
}
logger.info(`Shell hook "${hookName}" completed successfully`);
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ETIMEDOUT') {
logger.error(`Shell hook "${hookName}" timed out after ${timeout}ms`);
}
throw error;
}
}
/**
* Execute an HTTP webhook hook
*/
private async executeHttpHook(
action: EventHookHttpAction,
context: HookContext,
hookName: string
): Promise<void> {
const url = this.substituteVariables(action.url, context);
const method = action.method || 'POST';
// Substitute variables in headers
const headers: Record<string, string> = {
'Content-Type': 'application/json',
};
if (action.headers) {
for (const [key, value] of Object.entries(action.headers)) {
headers[key] = this.substituteVariables(value, context);
}
}
// Substitute variables in body
let body: string | undefined;
if (action.body) {
body = this.substituteVariables(action.body, context);
} else if (method !== 'GET') {
// Default body with context information
body = JSON.stringify({
eventType: context.eventType,
timestamp: context.timestamp,
featureId: context.featureId,
projectPath: context.projectPath,
projectName: context.projectName,
error: context.error,
});
}
logger.info(`Executing HTTP hook "${hookName}": ${method} ${url}`);
try {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), DEFAULT_HTTP_TIMEOUT);
const response = await fetch(url, {
method,
headers,
body: method !== 'GET' ? body : undefined,
signal: controller.signal,
});
clearTimeout(timeoutId);
if (!response.ok) {
logger.warn(`HTTP hook "${hookName}" received status ${response.status}`);
} else {
logger.info(`HTTP hook "${hookName}" completed successfully (status: ${response.status})`);
}
} catch (error) {
if ((error as Error).name === 'AbortError') {
logger.error(`HTTP hook "${hookName}" timed out after ${DEFAULT_HTTP_TIMEOUT}ms`);
}
throw error;
}
}
/**
* Substitute {{variable}} placeholders in a string
*/
private substituteVariables(template: string, context: HookContext): string {
return template.replace(/\{\{(\w+)\}\}/g, (match, variable) => {
const value = context[variable as keyof HookContext];
if (value === undefined || value === null) {
return '';
}
return String(value);
});
}
/**
* Extract project name from path
*/
private extractProjectName(projectPath: string): string {
const parts = projectPath.split(/[/\\]/);
return parts[parts.length - 1] || projectPath;
}
}
// Singleton instance
export const eventHookService = new EventHookService();
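The `{{variable}}` substitution used by both shell and HTTP hooks is worth seeing on its own: unknown or missing variables collapse to an empty string rather than throwing, so a hook template never fails at execution time. A condensed sketch of `substituteVariables` (the `Ctx` type is illustrative):

```typescript
type Ctx = Record<string, string | undefined>;

// Replace each {{name}} placeholder with its context value,
// or with '' when the variable is absent - as in the service above.
function substituteVariables(template: string, context: Ctx): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, variable: string) => {
    return context[variable] ?? '';
  });
}

const ctx: Ctx = { eventType: 'feature_error', featureId: 'feat-42' };
console.log(substituteVariables('Event {{eventType}} for {{featureId}}', ctx));
// Event feature_error for feat-42
console.log(substituteVariables('missing: [{{nope}}]', ctx)); // missing: []
```

Because the pattern only matches `\w+` inside the braces, templates containing literal `{{ }}` with spaces or punctuation pass through untouched.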

View File

@@ -5,14 +5,22 @@
import path from 'path';
import type { Feature, DescriptionHistoryEntry } from '@automaker/types';
import {
  createLogger,
  atomicWriteJson,
  readJsonWithRecovery,
  logRecoveryWarning,
  DEFAULT_BACKUP_COUNT,
} from '@automaker/utils';
import * as secureFs from '../lib/secure-fs.js';
import {
  getFeaturesDir,
  getFeatureDir,
  getFeatureImagesDir,
  getAppSpecPath,
  ensureAutomakerDir,
} from '@automaker/platform';
import { addImplementedFeature, type ImplementedFeature } from '../lib/xml-extractor.js';

const logger = createLogger('FeatureLoader');
@@ -192,31 +200,31 @@ export class FeatureLoader {
})) as any[];
const featureDirs = entries.filter((entry) => entry.isDirectory());

// Load all features concurrently with automatic recovery from backups
const featurePromises = featureDirs.map(async (dir) => {
  const featureId = dir.name;
  const featureJsonPath = this.getFeatureJsonPath(projectPath, featureId);

  // Use recovery-enabled read to handle corrupted files
  const result = await readJsonWithRecovery<Feature | null>(featureJsonPath, null, {
    maxBackups: DEFAULT_BACKUP_COUNT,
    autoRestore: true,
  });

  logRecoveryWarning(result, `Feature ${featureId}`, logger);

  const feature = result.data;

  if (!feature) {
    return null;
  }

  if (!feature.id) {
    logger.warn(`Feature ${featureId} missing required 'id' field, skipping`);
    return null;
  }

  return feature;
});

const results = await Promise.all(featurePromises);
@@ -236,21 +244,85 @@ export class FeatureLoader {
}
}
/**
* Normalize a title for comparison (case-insensitive, trimmed)
*/
private normalizeTitle(title: string): string {
return title.toLowerCase().trim();
}
/**
* Find a feature by its title (case-insensitive match)
* @param projectPath - Path to the project
* @param title - Title to search for
* @returns The matching feature or null if not found
*/
async findByTitle(projectPath: string, title: string): Promise<Feature | null> {
if (!title || !title.trim()) {
return null;
}
const normalizedTitle = this.normalizeTitle(title);
const features = await this.getAll(projectPath);
for (const feature of features) {
if (feature.title && this.normalizeTitle(feature.title) === normalizedTitle) {
return feature;
}
}
return null;
}
/**
* Check if a title already exists on another feature (for duplicate detection)
* @param projectPath - Path to the project
* @param title - Title to check
* @param excludeFeatureId - Optional feature ID to exclude from the check (for updates)
* @returns The duplicate feature if found, null otherwise
*/
async findDuplicateTitle(
projectPath: string,
title: string,
excludeFeatureId?: string
): Promise<Feature | null> {
if (!title || !title.trim()) {
return null;
}
const normalizedTitle = this.normalizeTitle(title);
const features = await this.getAll(projectPath);
for (const feature of features) {
// Skip the feature being updated (if provided)
if (excludeFeatureId && feature.id === excludeFeatureId) {
continue;
}
if (feature.title && this.normalizeTitle(feature.title) === normalizedTitle) {
return feature;
}
}
return null;
}
/**
 * Get a single feature by ID
 * Uses automatic recovery from backups if the main file is corrupted
 */
async get(projectPath: string, featureId: string): Promise<Feature | null> {
  const featureJsonPath = this.getFeatureJsonPath(projectPath, featureId);

  // Use recovery-enabled read to handle corrupted files
  const result = await readJsonWithRecovery<Feature | null>(featureJsonPath, null, {
    maxBackups: DEFAULT_BACKUP_COUNT,
    autoRestore: true,
  });

  logRecoveryWarning(result, `Feature ${featureId}`, logger);

  return result.data;
}

/**
@@ -294,8 +366,8 @@ export class FeatureLoader {
descriptionHistory: initialHistory,
};

// Write feature.json atomically with backup support
await atomicWriteJson(featureJsonPath, feature, { backupCount: DEFAULT_BACKUP_COUNT });

logger.info(`Created feature ${featureId}`);
return feature;
@@ -379,9 +451,9 @@ export class FeatureLoader {
descriptionHistory: updatedHistory,
};

// Write back to file atomically with backup support
const featureJsonPath = this.getFeatureJsonPath(projectPath, featureId);
await atomicWriteJson(featureJsonPath, updatedFeature, { backupCount: DEFAULT_BACKUP_COUNT });

logger.info(`Updated feature ${featureId}`);
return updatedFeature;
@@ -460,4 +532,64 @@ export class FeatureLoader {
      }
    }
  }
/**
* Sync a completed feature to the app_spec.txt implemented_features section
*
* When a feature is completed, this method adds it to the implemented_features
* section of the project's app_spec.txt file. This keeps the spec in sync
* with the actual state of the codebase.
*
* @param projectPath - Path to the project
* @param feature - The feature to sync (must have title or description)
* @param fileLocations - Optional array of file paths where the feature was implemented
* @returns True if the spec was updated, false if no spec exists or feature was skipped
*/
async syncFeatureToAppSpec(
projectPath: string,
feature: Feature,
fileLocations?: string[]
): Promise<boolean> {
try {
const appSpecPath = getAppSpecPath(projectPath);
// Read the current app_spec.txt
let specContent: string;
try {
specContent = (await secureFs.readFile(appSpecPath, 'utf-8')) as string;
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
logger.info(`No app_spec.txt found for project, skipping sync for feature ${feature.id}`);
return false;
}
throw error;
}
// Build the implemented feature entry
const featureName = feature.title || `Feature: ${feature.id}`;
const implementedFeature: ImplementedFeature = {
name: featureName,
description: feature.description,
...(fileLocations && fileLocations.length > 0 ? { file_locations: fileLocations } : {}),
};
// Add the feature to the implemented_features section
const updatedSpecContent = addImplementedFeature(specContent, implementedFeature);
// Check if the content actually changed (feature might already exist)
if (updatedSpecContent === specContent) {
logger.info(`Feature "${featureName}" already exists in app_spec.txt, skipping`);
return false;
}
// Write the updated spec back to the file
await secureFs.writeFile(appSpecPath, updatedSpecContent, 'utf-8');
logger.info(`Synced feature "${featureName}" to app_spec.txt`);
return true;
} catch (error) {
logger.error(`Failed to sync feature ${feature.id} to app_spec.txt:`, error);
throw error;
}
}
}

View File

@@ -41,6 +41,7 @@ import type { FeatureLoader } from './feature-loader.js';
 import { createChatOptions, validateWorkingDirectory } from '../lib/sdk-options.js';
 import { resolveModelString } from '@automaker/model-resolver';
 import { stripProviderPrefix } from '@automaker/types';
+import { getPromptCustomization } from '../lib/settings-helpers.js';

 const logger = createLogger('IdeationService');
@@ -195,8 +196,12 @@ export class IdeationService {
     // Gather existing features and ideas to prevent duplicate suggestions
     const existingWorkContext = await this.gatherExistingWorkContext(projectPath);

+    // Get customized prompts from settings
+    const prompts = await getPromptCustomization(this.settingsService, '[IdeationService]');
+
     // Build system prompt for ideation
     const systemPrompt = this.buildIdeationSystemPrompt(
+      prompts.ideation.ideationSystemPrompt,
       contextResult.formattedPrompt,
       activeSession.session.promptCategory,
       existingWorkContext
@@ -645,8 +650,12 @@ export class IdeationService {
     // Gather existing features and ideas to prevent duplicates
     const existingWorkContext = await this.gatherExistingWorkContext(projectPath);

+    // Get customized prompts from settings
+    const prompts = await getPromptCustomization(this.settingsService, '[IdeationService]');
+
     // Build system prompt for structured suggestions
     const systemPrompt = this.buildSuggestionsSystemPrompt(
+      prompts.ideation.suggestionsSystemPrompt,
       contextPrompt,
       category,
       count,
@@ -721,8 +730,14 @@ export class IdeationService {
   /**
    * Build system prompt for structured suggestion generation
+   * @param basePrompt - The base system prompt from settings
+   * @param contextFilesPrompt - Project context from loaded files
+   * @param category - The idea category to focus on
+   * @param count - Number of suggestions to generate
+   * @param existingWorkContext - Context about existing features/ideas
    */
   private buildSuggestionsSystemPrompt(
+    basePrompt: string,
     contextFilesPrompt: string | undefined,
     category: IdeaCategory,
     count: number = 10,
@@ -734,35 +749,18 @@ export class IdeationService {
     const existingWorkSection = existingWorkContext ? `\n\n${existingWorkContext}` : '';

-    return `You are an AI product strategist helping brainstorm feature ideas for a software project.
-
-IMPORTANT: You do NOT have access to any tools. You CANNOT read files, search code, or run commands.
-You must generate suggestions based ONLY on the project context provided below.
-Do NOT say "I'll analyze" or "Let me explore" - you cannot do those things.
-
-Based on the project context and the user's prompt, generate exactly ${count} creative and actionable feature suggestions.
-
-YOUR RESPONSE MUST BE ONLY A JSON ARRAY - nothing else. No explanation, no preamble, no markdown code fences.
-
-Each suggestion must have this structure:
-{
-  "title": "Short, actionable title (max 60 chars)",
-  "description": "Clear description of what to build or improve (2-3 sentences)",
-  "rationale": "Why this is valuable - the problem it solves or opportunity it creates",
-  "priority": "high" | "medium" | "low"
-}
+    // Replace placeholder {{count}} if present, otherwise append count instruction
+    let prompt = basePrompt;
+    if (prompt.includes('{{count}}')) {
+      prompt = prompt.replace(/\{\{count\}\}/g, String(count));
+    } else {
+      prompt += `\n\nGenerate exactly ${count} suggestions.`;
+    }
+
+    return `${prompt}

 Focus area: ${this.getCategoryDescription(category)}

-Guidelines:
-- Generate exactly ${count} suggestions
-- Be specific and actionable - avoid vague ideas
-- Mix different priority levels (some high, some medium, some low)
-- Each suggestion should be independently implementable
-- Think creatively - include both obvious improvements and innovative ideas
-- Consider the project's domain and target users
-- IMPORTANT: Do NOT suggest features or ideas that already exist in the "Existing Features" or "Existing Ideas" sections below
-
 ${contextSection}${existingWorkSection}`;
   }
@@ -1269,30 +1267,11 @@ ${contextSection}${existingWorkSection}`;
   // ============================================================================

   private buildIdeationSystemPrompt(
+    basePrompt: string,
     contextFilesPrompt: string | undefined,
     category?: IdeaCategory,
     existingWorkContext?: string
   ): string {
-    const basePrompt = `You are an AI product strategist and UX expert helping brainstorm ideas for improving a software project.
-
-Your role is to:
-- Analyze the codebase structure and patterns
-- Identify opportunities for improvement
-- Suggest actionable ideas with clear rationale
-- Consider user experience, technical feasibility, and business value
-- Be specific and reference actual files/components when possible
-
-When suggesting ideas:
-1. Provide a clear, concise title
-2. Explain the problem or opportunity
-3. Describe the proposed solution
-4. Highlight the expected benefit
-5. Note any dependencies or considerations
-
-IMPORTANT: Do NOT suggest features or ideas that already exist in the project. Check the "Existing Features" and "Existing Ideas" sections below to avoid duplicates.
-
-Focus on practical, implementable suggestions that would genuinely improve the product.`;
-
     const categoryContext = category
       ? `\n\nFocus area: ${this.getCategoryDescription(category)}`
       : '';
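The `{{count}}` substitution added in `buildSuggestionsSystemPrompt` can be sketched as a standalone helper (the name `applyCountPlaceholder` is illustrative, not part of this PR):

```typescript
// Sketch of the placeholder logic: replace every {{count}} token if
// present, otherwise append an explicit count instruction so the base
// prompt always carries the requested suggestion count.
function applyCountPlaceholder(basePrompt: string, count: number): string {
  if (basePrompt.includes('{{count}}')) {
    return basePrompt.replace(/\{\{count\}\}/g, String(count));
  }
  return basePrompt + `\n\nGenerate exactly ${count} suggestions.`;
}
```

Supporting both forms lets users customize the prompt freely: a template with `{{count}}` controls the placement, while a plain prompt still gets the count appended.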

View File

@@ -0,0 +1,280 @@
/**
* Notification Service - Handles reading/writing notifications to JSON files
*
* Provides persistent storage for project-level notifications in
* {projectPath}/.automaker/notifications.json
*
* Notifications alert users when:
* - Features reach specific statuses (waiting_approval, verified)
* - Long-running operations complete (spec generation)
*/
import { createLogger } from '@automaker/utils';
import * as secureFs from '../lib/secure-fs.js';
import { getNotificationsPath, ensureAutomakerDir } from '@automaker/platform';
import type { Notification, NotificationsFile, NotificationType } from '@automaker/types';
import { DEFAULT_NOTIFICATIONS_FILE } from '@automaker/types';
import type { EventEmitter } from '../lib/events.js';
import { randomUUID } from 'crypto';
const logger = createLogger('NotificationService');
/**
* Atomic file write - write to temp file then rename
*/
async function atomicWriteJson(filePath: string, data: unknown): Promise<void> {
const tempPath = `${filePath}.tmp.${Date.now()}`;
const content = JSON.stringify(data, null, 2);
try {
await secureFs.writeFile(tempPath, content, 'utf-8');
await secureFs.rename(tempPath, filePath);
} catch (error) {
// Clean up temp file if it exists
try {
await secureFs.unlink(tempPath);
} catch {
// Ignore cleanup errors
}
throw error;
}
}
/**
* Safely read JSON file with fallback to default
*/
async function readJsonFile<T>(filePath: string, defaultValue: T): Promise<T> {
try {
const content = (await secureFs.readFile(filePath, 'utf-8')) as string;
return JSON.parse(content) as T;
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
return defaultValue;
}
logger.error(`Error reading ${filePath}:`, error);
return defaultValue;
}
}
/**
* Input for creating a new notification
*/
export interface CreateNotificationInput {
type: NotificationType;
title: string;
message: string;
featureId?: string;
projectPath: string;
}
/**
* NotificationService - Manages persistent storage of notifications
*
* Handles reading and writing notifications to JSON files with atomic operations
* for reliability. Each project has its own notifications.json file.
*/
export class NotificationService {
private events: EventEmitter | null = null;
/**
* Set the event emitter for broadcasting notification events
*/
setEventEmitter(events: EventEmitter): void {
this.events = events;
}
/**
* Get all notifications for a project
*
* @param projectPath - Absolute path to project directory
* @returns Promise resolving to array of notifications
*/
async getNotifications(projectPath: string): Promise<Notification[]> {
const notificationsPath = getNotificationsPath(projectPath);
const file = await readJsonFile<NotificationsFile>(
notificationsPath,
DEFAULT_NOTIFICATIONS_FILE
);
// Filter out dismissed notifications and sort by date (newest first)
return file.notifications
.filter((n) => !n.dismissed)
.sort((a, b) => new Date(b.createdAt).getTime() - new Date(a.createdAt).getTime());
}
/**
* Get unread notification count for a project
*
* @param projectPath - Absolute path to project directory
* @returns Promise resolving to unread count
*/
async getUnreadCount(projectPath: string): Promise<number> {
const notifications = await this.getNotifications(projectPath);
return notifications.filter((n) => !n.read).length;
}
/**
* Create a new notification
*
* @param input - Notification creation input
* @returns Promise resolving to the created notification
*/
async createNotification(input: CreateNotificationInput): Promise<Notification> {
const { projectPath, type, title, message, featureId } = input;
// Ensure automaker directory exists
await ensureAutomakerDir(projectPath);
const notificationsPath = getNotificationsPath(projectPath);
const file = await readJsonFile<NotificationsFile>(
notificationsPath,
DEFAULT_NOTIFICATIONS_FILE
);
const notification: Notification = {
id: randomUUID(),
type,
title,
message,
createdAt: new Date().toISOString(),
read: false,
dismissed: false,
featureId,
projectPath,
};
file.notifications.push(notification);
await atomicWriteJson(notificationsPath, file);
logger.info(`Created notification: ${title} for project ${projectPath}`);
// Emit event for real-time updates
if (this.events) {
this.events.emit('notification:created', notification);
}
return notification;
}
/**
* Mark a notification as read
*
* @param projectPath - Absolute path to project directory
* @param notificationId - ID of the notification to mark as read
* @returns Promise resolving to the updated notification or null if not found
*/
async markAsRead(projectPath: string, notificationId: string): Promise<Notification | null> {
const notificationsPath = getNotificationsPath(projectPath);
const file = await readJsonFile<NotificationsFile>(
notificationsPath,
DEFAULT_NOTIFICATIONS_FILE
);
const notification = file.notifications.find((n) => n.id === notificationId);
if (!notification) {
return null;
}
notification.read = true;
await atomicWriteJson(notificationsPath, file);
logger.info(`Marked notification ${notificationId} as read`);
return notification;
}
/**
* Mark all notifications as read for a project
*
* @param projectPath - Absolute path to project directory
* @returns Promise resolving to number of notifications marked as read
*/
async markAllAsRead(projectPath: string): Promise<number> {
const notificationsPath = getNotificationsPath(projectPath);
const file = await readJsonFile<NotificationsFile>(
notificationsPath,
DEFAULT_NOTIFICATIONS_FILE
);
let count = 0;
for (const notification of file.notifications) {
if (!notification.read && !notification.dismissed) {
notification.read = true;
count++;
}
}
if (count > 0) {
await atomicWriteJson(notificationsPath, file);
logger.info(`Marked ${count} notifications as read`);
}
return count;
}
/**
* Dismiss a notification
*
* @param projectPath - Absolute path to project directory
* @param notificationId - ID of the notification to dismiss
* @returns Promise resolving to true if notification was dismissed
*/
async dismissNotification(projectPath: string, notificationId: string): Promise<boolean> {
const notificationsPath = getNotificationsPath(projectPath);
const file = await readJsonFile<NotificationsFile>(
notificationsPath,
DEFAULT_NOTIFICATIONS_FILE
);
const notification = file.notifications.find((n) => n.id === notificationId);
if (!notification) {
return false;
}
notification.dismissed = true;
await atomicWriteJson(notificationsPath, file);
logger.info(`Dismissed notification ${notificationId}`);
return true;
}
/**
* Dismiss all notifications for a project
*
* @param projectPath - Absolute path to project directory
* @returns Promise resolving to number of notifications dismissed
*/
async dismissAll(projectPath: string): Promise<number> {
const notificationsPath = getNotificationsPath(projectPath);
const file = await readJsonFile<NotificationsFile>(
notificationsPath,
DEFAULT_NOTIFICATIONS_FILE
);
let count = 0;
for (const notification of file.notifications) {
if (!notification.dismissed) {
notification.dismissed = true;
count++;
}
}
if (count > 0) {
await atomicWriteJson(notificationsPath, file);
logger.info(`Dismissed ${count} notifications`);
}
return count;
}
}
// Singleton instance
let notificationServiceInstance: NotificationService | null = null;
/**
* Get the singleton notification service instance
*/
export function getNotificationService(): NotificationService {
if (!notificationServiceInstance) {
notificationServiceInstance = new NotificationService();
}
return notificationServiceInstance;
}
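The write-to-temp-then-rename pattern used by `atomicWriteJson` above can be sketched standalone with Node's `fs/promises` (no `secureFs` wrapper here; the helper name is illustrative):

```typescript
import { writeFile, rename, unlink } from 'node:fs/promises';

// Write JSON atomically: readers see either the old file or the new
// one, never a partially written file, because rename() is atomic for
// same-volume renames on POSIX filesystems.
async function atomicWriteJsonSketch(filePath: string, data: unknown): Promise<void> {
  const tempPath = `${filePath}.tmp.${Date.now()}`;
  try {
    await writeFile(tempPath, JSON.stringify(data, null, 2), 'utf-8');
    await rename(tempPath, filePath);
  } catch (error) {
    // Best-effort cleanup of the temp file; ignore cleanup errors
    await unlink(tempPath).catch(() => {});
    throw error;
  }
}
```

This is why a crash mid-write cannot corrupt `notifications.json`: the incomplete data only ever lives in the temp file.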

View File

@@ -7,7 +7,7 @@
  * - Per-project settings ({projectPath}/.automaker/settings.json)
  */

-import { createLogger } from '@automaker/utils';
+import { createLogger, atomicWriteJson, DEFAULT_BACKUP_COUNT } from '@automaker/utils';
 import * as secureFs from '../lib/secure-fs.js';
 import {

@@ -42,28 +42,8 @@ import {
 const logger = createLogger('SettingsService');
 /**
- * Atomic file write - write to temp file then rename
- */
-async function atomicWriteJson(filePath: string, data: unknown): Promise<void> {
-  const tempPath = `${filePath}.tmp.${Date.now()}`;
-  const content = JSON.stringify(data, null, 2);
-
-  try {
-    await secureFs.writeFile(tempPath, content, 'utf-8');
-    await secureFs.rename(tempPath, filePath);
-  } catch (error) {
-    // Clean up temp file if it exists
-    try {
-      await secureFs.unlink(tempPath);
-    } catch {
-      // Ignore cleanup errors
-    }
-    throw error;
-  }
-}
-
-/**
- * Safely read JSON file with fallback to default
+ * Wrapper for readJsonFile from utils that uses the local secureFs
+ * to maintain compatibility with the server's secure file system
  */
 async function readJsonFile<T>(filePath: string, defaultValue: T): Promise<T> {
   try {
@@ -90,6 +70,13 @@ async function fileExists(filePath: string): Promise<boolean> {
   }
 }

+/**
+ * Write settings atomically with backup support
+ */
+async function writeSettingsJson(filePath: string, data: unknown): Promise<void> {
+  await atomicWriteJson(filePath, data, { backupCount: DEFAULT_BACKUP_COUNT });
+}
+
 /**
  * SettingsService - Manages persistent storage of user settings and credentials
  *
@@ -180,7 +167,7 @@ export class SettingsService {
     if (needsSave) {
       try {
         await ensureDataDir(this.dataDir);
-        await atomicWriteJson(settingsPath, result);
+        await writeSettingsJson(settingsPath, result);
         logger.info('Settings migration complete');
       } catch (error) {
         logger.error('Failed to save migrated settings:', error);
@@ -340,7 +327,7 @@ export class SettingsService {
       };
     }

-    await atomicWriteJson(settingsPath, updated);
+    await writeSettingsJson(settingsPath, updated);
     logger.info('Global settings updated');

     return updated;

@@ -414,7 +401,7 @@ export class SettingsService {
       };
     }

-    await atomicWriteJson(credentialsPath, updated);
+    await writeSettingsJson(credentialsPath, updated);
     logger.info('Credentials updated');

     return updated;
@@ -433,6 +420,7 @@ export class SettingsService {
     anthropic: { configured: boolean; masked: string };
     google: { configured: boolean; masked: string };
     openai: { configured: boolean; masked: string };
+    coderabbit: { configured: boolean; masked: string };
   }> {
     const credentials = await this.getCredentials();

@@ -454,6 +442,10 @@ export class SettingsService {
       configured: !!credentials.apiKeys.openai,
       masked: maskKey(credentials.apiKeys.openai),
     },
+    coderabbit: {
+      configured: !!credentials.apiKeys.coderabbit,
+      masked: maskKey(credentials.apiKeys.coderabbit),
+    },
   };
 }
@@ -525,7 +517,7 @@ export class SettingsService {
       };
     }

-    await atomicWriteJson(settingsPath, updated);
+    await writeSettingsJson(settingsPath, updated);
     logger.info(`Project settings updated for ${projectPath}`);

     return updated;

@@ -671,12 +663,14 @@ export class SettingsService {
       anthropic?: string;
       google?: string;
       openai?: string;
+      coderabbit?: string;
     };

     await this.updateCredentials({
       apiKeys: {
         anthropic: apiKeys.anthropic || '',
         google: apiKeys.google || '',
         openai: apiKeys.openai || '',
+        coderabbit: apiKeys.coderabbit || '',
       },
     });

     migratedCredentials = true;

View File

@@ -70,6 +70,29 @@ export class TerminalService extends EventEmitter {
   private sessions: Map<string, TerminalSession> = new Map();
   private dataCallbacks: Set<DataCallback> = new Set();
   private exitCallbacks: Set<ExitCallback> = new Set();
+  private isWindows = os.platform() === 'win32';
+
+  // On Windows, ConPTY requires AttachConsole which fails in Electron/service mode
+  // Detect Electron by checking for electron-specific env vars or process properties
+  private isElectron =
+    !!(process.versions && (process.versions as Record<string, string>).electron) ||
+    !!process.env.ELECTRON_RUN_AS_NODE;
+
+  private useConptyFallback = false; // Track if we need to use winpty fallback on Windows
+
+  /**
+   * Kill a PTY process with platform-specific handling.
+   * Windows doesn't support Unix signals like SIGTERM/SIGKILL, so we call kill() without arguments.
+   * On Unix-like systems (macOS, Linux), we can specify the signal.
+   *
+   * @param ptyProcess - The PTY process to kill
+   * @param signal - The signal to send on Unix-like systems (default: 'SIGTERM')
+   */
+  private killPtyProcess(ptyProcess: pty.IPty, signal: string = 'SIGTERM'): void {
+    if (this.isWindows) {
+      ptyProcess.kill();
+    } else {
+      ptyProcess.kill(signal);
+    }
+  }

   /**
    * Detect the best shell for the current platform
@@ -322,13 +345,60 @@ export class TerminalService extends EventEmitter {
     logger.info(`Creating session ${id} with shell: ${shell} in ${cwd}`);

-    const ptyProcess = pty.spawn(shell, shellArgs, {
+    // Build PTY spawn options
+    const ptyOptions: pty.IPtyForkOptions = {
       name: 'xterm-256color',
       cols: options.cols || 80,
       rows: options.rows || 24,
       cwd,
       env,
-    });
+    };
+
+    // On Windows, always use winpty instead of ConPTY
+    // ConPTY requires AttachConsole which fails in many contexts:
+    // - Electron apps without a console
+    // - VS Code integrated terminal
+    // - Spawned from other applications
+    // The error happens in a subprocess so we can't catch it - must proactively disable
+    if (this.isWindows) {
+      (ptyOptions as pty.IWindowsPtyForkOptions).useConpty = false;
+      logger.info(
+        `[createSession] Using winpty for session ${id} (ConPTY disabled for compatibility)`
+      );
+    }
+
+    let ptyProcess: pty.IPty;
+    try {
+      ptyProcess = pty.spawn(shell, shellArgs, ptyOptions);
+    } catch (spawnError) {
+      const errorMessage = spawnError instanceof Error ? spawnError.message : String(spawnError);
+
+      // Check for Windows ConPTY-specific errors
+      if (this.isWindows && errorMessage.includes('AttachConsole failed')) {
+        // ConPTY failed - try winpty fallback
+        if (!this.useConptyFallback) {
+          logger.warn(`[createSession] ConPTY AttachConsole failed, retrying with winpty fallback`);
+          this.useConptyFallback = true;
+          try {
+            (ptyOptions as pty.IWindowsPtyForkOptions).useConpty = false;
+            ptyProcess = pty.spawn(shell, shellArgs, ptyOptions);
+            logger.info(`[createSession] Successfully spawned session ${id} with winpty fallback`);
+          } catch (fallbackError) {
+            const fallbackMessage =
+              fallbackError instanceof Error ? fallbackError.message : String(fallbackError);
+            logger.error(`[createSession] Winpty fallback also failed:`, fallbackMessage);
+            return null;
+          }
+        } else {
+          logger.error(`[createSession] PTY spawn failed (winpty):`, errorMessage);
+          return null;
+        }
+      } else {
+        logger.error(`[createSession] PTY spawn failed:`, errorMessage);
+        return null;
+      }
+    }

     const session: TerminalSession = {
       id,
@@ -392,7 +462,11 @@ export class TerminalService extends EventEmitter {
     // Handle exit
     ptyProcess.onExit(({ exitCode }) => {
-      logger.info(`Session ${id} exited with code ${exitCode}`);
+      const exitMessage =
+        exitCode === undefined || exitCode === null
+          ? 'Session terminated'
+          : `Session exited with code ${exitCode}`;
+      logger.info(`${exitMessage} (${id})`);
       this.sessions.delete(id);
       this.exitCallbacks.forEach((cb) => cb(id, exitCode));
       this.emit('exit', id, exitCode);
@@ -477,8 +551,9 @@ export class TerminalService extends EventEmitter {
     }

     // First try graceful SIGTERM to allow process cleanup
+    // On Windows, killPtyProcess calls kill() without signal since Windows doesn't support Unix signals
     logger.info(`Session ${sessionId} sending SIGTERM`);
-    session.pty.kill('SIGTERM');
+    this.killPtyProcess(session.pty, 'SIGTERM');

     // Schedule SIGKILL fallback if process doesn't exit gracefully
     // The onExit handler will remove session from map when it actually exits

@@ -486,7 +561,7 @@ export class TerminalService extends EventEmitter {
     if (this.sessions.has(sessionId)) {
       logger.info(`Session ${sessionId} still alive after SIGTERM, sending SIGKILL`);
       try {
-        session.pty.kill('SIGKILL');
+        this.killPtyProcess(session.pty, 'SIGKILL');
       } catch {
         // Process may have already exited
       }
@@ -588,7 +663,8 @@ export class TerminalService extends EventEmitter {
       if (session.flushTimeout) {
         clearTimeout(session.flushTimeout);
       }
-      session.pty.kill();
+      // Use platform-specific kill to ensure proper termination on Windows
+      this.killPtyProcess(session.pty);
     } catch {
       // Ignore errors during cleanup
     }
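The platform check in `killPtyProcess` reduces to one decision: pass no signal on Windows, forward the requested signal elsewhere. That decision can be isolated as a pure helper (hypothetical name `signalArgsFor`; node-pty itself is not involved here):

```typescript
// Windows has no Unix signals, so kill() must be invoked with no
// arguments there; on POSIX platforms the signal name is forwarded.
// Returns the argument list to spread into a kill() call.
function signalArgsFor(platform: string, signal: string = 'SIGTERM'): string[] {
  return platform === 'win32' ? [] : [signal];
}
```

Isolating the branch this way makes the Windows behavior unit-testable without spawning a real PTY.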

View File

@@ -121,7 +121,7 @@ describe('worktree-metadata.ts', () => {
       number: 123,
       url: 'https://github.com/owner/repo/pull/123',
       title: 'Test PR',
-      state: 'open',
+      state: 'OPEN',
       createdAt: new Date().toISOString(),
     },
   };

@@ -158,7 +158,7 @@ describe('worktree-metadata.ts', () => {
       number: 456,
       url: 'https://github.com/owner/repo/pull/456',
       title: 'Updated PR',
-      state: 'closed',
+      state: 'CLOSED',
       createdAt: new Date().toISOString(),
     },
   };

@@ -177,7 +177,7 @@ describe('worktree-metadata.ts', () => {
       number: 789,
       url: 'https://github.com/owner/repo/pull/789',
       title: 'New PR',
-      state: 'open',
+      state: 'OPEN',
       createdAt: new Date().toISOString(),
     };

@@ -201,7 +201,7 @@ describe('worktree-metadata.ts', () => {
       number: 999,
       url: 'https://github.com/owner/repo/pull/999',
       title: 'Updated PR',
-      state: 'merged',
+      state: 'MERGED',
       createdAt: new Date().toISOString(),
     };

@@ -224,7 +224,7 @@ describe('worktree-metadata.ts', () => {
       number: 111,
       url: 'https://github.com/owner/repo/pull/111',
       title: 'PR',
-      state: 'open',
+      state: 'OPEN',
       createdAt: new Date().toISOString(),
     };

@@ -259,7 +259,7 @@ describe('worktree-metadata.ts', () => {
       number: 222,
       url: 'https://github.com/owner/repo/pull/222',
       title: 'Has PR',
-      state: 'open',
+      state: 'OPEN',
       createdAt: new Date().toISOString(),
     };

@@ -297,7 +297,7 @@ describe('worktree-metadata.ts', () => {
       number: 333,
       url: 'https://github.com/owner/repo/pull/333',
       title: 'PR 3',
-      state: 'open',
+      state: 'OPEN',
       createdAt: new Date().toISOString(),
     },
   };
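The refactor standardizes PR state on uppercase `'OPEN' | 'MERGED' | 'CLOSED'`. A minimal normalizer sketch (a hypothetical helper, not part of this PR) that maps legacy lowercase values to the new representation:

```typescript
type PrState = 'OPEN' | 'MERGED' | 'CLOSED';

// Normalize any legacy casing ('open', 'Merged', ...) to the
// standardized uppercase form; reject unknown values loudly so state
// mismatches surface immediately instead of silently propagating.
function normalizePrState(raw: string): PrState {
  const upper = raw.toUpperCase();
  if (upper === 'OPEN' || upper === 'MERGED' || upper === 'CLOSED') {
    return upper;
  }
  throw new Error(`Unknown PR state: ${raw}`);
}
```

Funneling every read of persisted worktree metadata through a normalizer like this is one way to keep old lowercase values on disk from reintroducing the mismatches the refactor removes.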

File diff suppressed because it is too large

View File

@@ -286,6 +286,7 @@ describe('claude-provider.ts', () => {
     const generator = provider.executeQuery({
       prompt: 'Test',
+      model: 'claude-opus-4-5-20251101',
       cwd: '/test',
     });

@@ -312,6 +313,7 @@ describe('claude-provider.ts', () => {
     const generator = provider.executeQuery({
       prompt: 'Test',
+      model: 'claude-opus-4-5-20251101',
       cwd: '/test',
     });

@@ -339,6 +341,7 @@ describe('claude-provider.ts', () => {
     const generator = provider.executeQuery({
       prompt: 'Test',
+      model: 'claude-opus-4-5-20251101',
       cwd: '/test',
     });

View File

@@ -11,6 +11,11 @@ import {
   getCodexConfigDir,
   getCodexAuthIndicators,
 } from '@automaker/platform';
+import {
+  calculateReasoningTimeout,
+  REASONING_TIMEOUT_MULTIPLIERS,
+  DEFAULT_TIMEOUT_MS,
+} from '@automaker/types';

 const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
 const originalOpenAIKey = process.env[OPENAI_API_KEY_ENV];

@@ -289,5 +294,121 @@ describe('codex-provider.ts', () => {
   expect(codexRunMock).not.toHaveBeenCalled();
   expect(spawnJSONLProcess).toHaveBeenCalled();
 });
it('passes extended timeout for high reasoning effort', async () => {
vi.mocked(spawnJSONLProcess).mockReturnValue((async function* () {})());
await collectAsyncGenerator(
provider.executeQuery({
prompt: 'Complex reasoning task',
model: 'gpt-5.1-codex-max',
cwd: '/tmp',
reasoningEffort: 'high',
})
);
const call = vi.mocked(spawnJSONLProcess).mock.calls[0][0];
// High reasoning effort should have 3x the default timeout (90000ms)
expect(call.timeout).toBe(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.high);
});
it('passes extended timeout for xhigh reasoning effort', async () => {
vi.mocked(spawnJSONLProcess).mockReturnValue((async function* () {})());
await collectAsyncGenerator(
provider.executeQuery({
prompt: 'Very complex reasoning task',
model: 'gpt-5.1-codex-max',
cwd: '/tmp',
reasoningEffort: 'xhigh',
})
);
const call = vi.mocked(spawnJSONLProcess).mock.calls[0][0];
// xhigh reasoning effort should have 4x the default timeout (120000ms)
expect(call.timeout).toBe(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.xhigh);
});
it('uses default timeout when no reasoning effort is specified', async () => {
vi.mocked(spawnJSONLProcess).mockReturnValue((async function* () {})());
await collectAsyncGenerator(
provider.executeQuery({
prompt: 'Simple task',
model: 'gpt-5.2',
cwd: '/tmp',
})
);
const call = vi.mocked(spawnJSONLProcess).mock.calls[0][0];
// No reasoning effort should use the default timeout
expect(call.timeout).toBe(DEFAULT_TIMEOUT_MS);
});
});
describe('calculateReasoningTimeout', () => {
it('returns default timeout when no reasoning effort is specified', () => {
expect(calculateReasoningTimeout()).toBe(DEFAULT_TIMEOUT_MS);
expect(calculateReasoningTimeout(undefined)).toBe(DEFAULT_TIMEOUT_MS);
});
it('returns default timeout for none reasoning effort', () => {
expect(calculateReasoningTimeout('none')).toBe(DEFAULT_TIMEOUT_MS);
});
it('applies correct multiplier for minimal reasoning effort', () => {
const expected = Math.round(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.minimal);
expect(calculateReasoningTimeout('minimal')).toBe(expected);
});
it('applies correct multiplier for low reasoning effort', () => {
const expected = Math.round(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.low);
expect(calculateReasoningTimeout('low')).toBe(expected);
});
it('applies correct multiplier for medium reasoning effort', () => {
const expected = Math.round(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.medium);
expect(calculateReasoningTimeout('medium')).toBe(expected);
});
it('applies correct multiplier for high reasoning effort', () => {
const expected = Math.round(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.high);
expect(calculateReasoningTimeout('high')).toBe(expected);
});
it('applies correct multiplier for xhigh reasoning effort', () => {
const expected = Math.round(DEFAULT_TIMEOUT_MS * REASONING_TIMEOUT_MULTIPLIERS.xhigh);
expect(calculateReasoningTimeout('xhigh')).toBe(expected);
});
it('uses custom base timeout when provided', () => {
const customBase = 60000;
expect(calculateReasoningTimeout('high', customBase)).toBe(
Math.round(customBase * REASONING_TIMEOUT_MULTIPLIERS.high)
);
});
it('falls back to 1.0 multiplier for invalid reasoning effort', () => {
// Test that invalid values fallback gracefully to default multiplier
// This tests the defensive ?? 1.0 in calculateReasoningTimeout
const invalidEffort = 'invalid_effort' as never;
expect(calculateReasoningTimeout(invalidEffort)).toBe(DEFAULT_TIMEOUT_MS);
});
it('produces expected absolute timeout values', () => {
// Verify the actual timeout values that will be used:
// none: 30000ms (30s)
// minimal: 36000ms (36s)
// low: 45000ms (45s)
// medium: 60000ms (1m)
// high: 90000ms (1m 30s)
// xhigh: 120000ms (2m)
expect(calculateReasoningTimeout('none')).toBe(30000);
expect(calculateReasoningTimeout('minimal')).toBe(36000);
expect(calculateReasoningTimeout('low')).toBe(45000);
expect(calculateReasoningTimeout('medium')).toBe(60000);
expect(calculateReasoningTimeout('high')).toBe(90000);
expect(calculateReasoningTimeout('xhigh')).toBe(120000);
});
});
});
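The absolute values pinned down by the tests above fully determine the helper's behavior. As a minimal sketch (the real implementation lives in `@automaker/types`; the constants here are inferred from the expected values in the tests: a 30s base and 1.2x/1.5x/2x/3x/4x multipliers):

```typescript
// Sketch of calculateReasoningTimeout as implied by the tests above.
type ReasoningEffort = 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';

const DEFAULT_TIMEOUT_MS = 30_000;

const REASONING_TIMEOUT_MULTIPLIERS: Record<ReasoningEffort, number> = {
  none: 1.0,
  minimal: 1.2,
  low: 1.5,
  medium: 2.0,
  high: 3.0,
  xhigh: 4.0,
};

function calculateReasoningTimeout(
  effort?: ReasoningEffort,
  baseTimeoutMs: number = DEFAULT_TIMEOUT_MS
): number {
  // Defensive ?? 1.0 so an unknown effort value falls back to the base timeout
  const multiplier = effort ? (REASONING_TIMEOUT_MULTIPLIERS[effort] ?? 1.0) : 1.0;
  return Math.round(baseTimeoutMs * multiplier);
}

console.log(calculateReasoningTimeout('high')); // 90000
```

`Math.round` matters only for non-integer multipliers with a custom base; with the defaults every product is already integral.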

View File

@@ -0,0 +1,196 @@
/**
* Unit tests for code-review providers route handler
*
* Tests:
* - Returns provider status list
* - Returns recommended provider
* - Force refresh functionality
* - Error handling
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { Request, Response } from 'express';
import { createProvidersHandler } from '@/routes/code-review/routes/providers.js';
import type { CodeReviewService } from '@/services/code-review-service.js';
import { createMockExpressContext } from '../../../utils/mocks.js';
// Mock logger
vi.mock('@automaker/utils', async () => {
const actual = await vi.importActual<typeof import('@automaker/utils')>('@automaker/utils');
return {
...actual,
createLogger: vi.fn(() => ({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
})),
};
});
describe('code-review/providers route', () => {
let mockCodeReviewService: CodeReviewService;
let req: Request;
let res: Response;
const mockProviderStatuses = [
{
provider: 'claude' as const,
available: true,
authenticated: true,
version: '1.0.0',
issues: [],
},
{
provider: 'codex' as const,
available: true,
authenticated: false,
version: '0.5.0',
issues: ['Not authenticated'],
},
];
beforeEach(() => {
vi.clearAllMocks();
mockCodeReviewService = {
getProviderStatus: vi.fn().mockResolvedValue(mockProviderStatuses),
getBestProvider: vi.fn().mockResolvedValue('claude'),
executeReview: vi.fn(),
refreshProviderStatus: vi.fn(),
initialize: vi.fn(),
} as any;
const context = createMockExpressContext();
req = context.req;
res = context.res;
req.query = {};
});
describe('successful responses', () => {
it('should return provider status and recommended provider', async () => {
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
providers: mockProviderStatuses,
recommended: 'claude',
});
});
it('should use cached status by default', async () => {
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(mockCodeReviewService.getProviderStatus).toHaveBeenCalledWith(false);
});
it('should force refresh when refresh=true query param is set', async () => {
req.query = { refresh: 'true' };
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(mockCodeReviewService.getProviderStatus).toHaveBeenCalledWith(true);
});
it('should handle no recommended provider', async () => {
mockCodeReviewService.getBestProvider = vi.fn().mockResolvedValue(null);
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
providers: mockProviderStatuses,
recommended: null,
});
});
it('should handle empty provider list', async () => {
mockCodeReviewService.getProviderStatus = vi.fn().mockResolvedValue([]);
mockCodeReviewService.getBestProvider = vi.fn().mockResolvedValue(null);
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
providers: [],
recommended: null,
});
});
});
describe('error handling', () => {
it('should handle getProviderStatus errors', async () => {
mockCodeReviewService.getProviderStatus = vi
.fn()
.mockRejectedValue(new Error('Failed to detect CLIs'));
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(500);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Failed to detect CLIs',
});
});
it('should handle getBestProvider errors gracefully', async () => {
mockCodeReviewService.getBestProvider = vi
.fn()
.mockRejectedValue(new Error('Detection failed'));
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(500);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Detection failed',
});
});
});
describe('provider priority', () => {
it('should recommend claude when available and authenticated', async () => {
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({
recommended: 'claude',
})
);
});
it('should recommend codex when claude is not available', async () => {
mockCodeReviewService.getBestProvider = vi.fn().mockResolvedValue('codex');
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({
recommended: 'codex',
})
);
});
it('should recommend cursor as fallback', async () => {
mockCodeReviewService.getBestProvider = vi.fn().mockResolvedValue('cursor');
const handler = createProvidersHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({
recommended: 'cursor',
})
);
});
});
});
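The handler shape these tests imply can be sketched as follows. This is an assumption-laden reconstruction, not the actual route file: the response shape and the `refresh` query semantics come from the tests, while the reduced service interface and the loosely typed `res` are simplifications made here.

```typescript
// Hypothetical reconstruction of createProvidersHandler from its tests.
interface CodeReviewServiceLike {
  getProviderStatus(forceRefresh: boolean): Promise<unknown[]>;
  getBestProvider(): Promise<string | null>;
}

function createProvidersHandler(service: CodeReviewServiceLike) {
  return async (req: { query: Record<string, unknown> }, res: any) => {
    try {
      // refresh=true bypasses the cached CLI detection results
      const forceRefresh = req.query.refresh === 'true';
      const providers = await service.getProviderStatus(forceRefresh);
      const recommended = await service.getBestProvider();
      res.json({ success: true, providers, recommended });
    } catch (error) {
      // Any detection failure surfaces as a 500 with the error message
      res.status(500).json({
        success: false,
        error: error instanceof Error ? error.message : String(error),
      });
    }
  };
}
```

Note that both service calls sit inside one `try`, which is why the tests expect a `getBestProvider` rejection to also produce a 500.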

View File

@@ -0,0 +1,109 @@
/**
* Unit tests for code-review status route handler
*
* Tests:
* - Returns correct running status
* - Returns correct project path
* - Handles errors gracefully
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { Request, Response } from 'express';
import { createStatusHandler } from '@/routes/code-review/routes/status.js';
import { createMockExpressContext } from '../../../utils/mocks.js';
// Mock the common module to control running state
vi.mock('@/routes/code-review/common.js', () => {
return {
isRunning: vi.fn(),
getReviewStatus: vi.fn(),
getCurrentProjectPath: vi.fn(),
setRunningState: vi.fn(),
getAbortController: vi.fn(),
getErrorMessage: (e: unknown) => (e instanceof Error ? e.message : String(e)),
logError: vi.fn(),
};
});
// Mock logger
vi.mock('@automaker/utils', async () => {
const actual = await vi.importActual<typeof import('@automaker/utils')>('@automaker/utils');
return {
...actual,
createLogger: vi.fn(() => ({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
})),
};
});
describe('code-review/status route', () => {
let req: Request;
let res: Response;
beforeEach(() => {
vi.clearAllMocks();
const context = createMockExpressContext();
req = context.req;
res = context.res;
});
describe('when no review is running', () => {
it('should return isRunning: false with null projectPath', async () => {
const { getReviewStatus } = await import('@/routes/code-review/common.js');
vi.mocked(getReviewStatus).mockReturnValue({
isRunning: false,
projectPath: null,
});
const handler = createStatusHandler();
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
isRunning: false,
projectPath: null,
});
});
});
describe('when a review is running', () => {
it('should return isRunning: true with the current project path', async () => {
const { getReviewStatus } = await import('@/routes/code-review/common.js');
vi.mocked(getReviewStatus).mockReturnValue({
isRunning: true,
projectPath: '/test/project',
});
const handler = createStatusHandler();
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
isRunning: true,
projectPath: '/test/project',
});
});
});
describe('error handling', () => {
it('should handle errors gracefully', async () => {
const { getReviewStatus } = await import('@/routes/code-review/common.js');
vi.mocked(getReviewStatus).mockImplementation(() => {
throw new Error('Unexpected error');
});
const handler = createStatusHandler();
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(500);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Unexpected error',
});
});
});
});

View File

@@ -0,0 +1,129 @@
/**
* Unit tests for code-review stop route handler
*
* Tests:
* - Stopping when no review is running
* - Stopping a running review
* - Abort controller behavior
* - Error handling
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { Request, Response } from 'express';
import { createStopHandler } from '@/routes/code-review/routes/stop.js';
import { createMockExpressContext } from '../../../utils/mocks.js';
// Mock the common module
vi.mock('@/routes/code-review/common.js', () => {
return {
isRunning: vi.fn(),
getAbortController: vi.fn(),
setRunningState: vi.fn(),
getReviewStatus: vi.fn(),
getCurrentProjectPath: vi.fn(),
getErrorMessage: (e: unknown) => (e instanceof Error ? e.message : String(e)),
logError: vi.fn(),
};
});
// Mock logger
vi.mock('@automaker/utils', async () => {
const actual = await vi.importActual<typeof import('@automaker/utils')>('@automaker/utils');
return {
...actual,
createLogger: vi.fn(() => ({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
})),
};
});
describe('code-review/stop route', () => {
let req: Request;
let res: Response;
beforeEach(() => {
vi.clearAllMocks();
const context = createMockExpressContext();
req = context.req;
res = context.res;
});
describe('when no review is running', () => {
it('should return success with message that nothing is running', async () => {
const { isRunning } = await import('@/routes/code-review/common.js');
vi.mocked(isRunning).mockReturnValue(false);
const handler = createStopHandler();
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'No code review is currently running',
});
});
});
describe('when a review is running', () => {
it('should abort the review and reset running state', async () => {
const { isRunning, getAbortController, setRunningState } =
await import('@/routes/code-review/common.js');
const mockAbortController = {
abort: vi.fn(),
signal: { aborted: false },
};
vi.mocked(isRunning).mockReturnValue(true);
vi.mocked(getAbortController).mockReturnValue(mockAbortController as any);
const handler = createStopHandler();
await handler(req, res);
expect(mockAbortController.abort).toHaveBeenCalled();
expect(setRunningState).toHaveBeenCalledWith(false, null, null);
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'Code review stopped',
});
});
it('should handle case when abort controller is null', async () => {
const { isRunning, getAbortController, setRunningState } =
await import('@/routes/code-review/common.js');
vi.mocked(isRunning).mockReturnValue(true);
vi.mocked(getAbortController).mockReturnValue(null);
const handler = createStopHandler();
await handler(req, res);
expect(setRunningState).toHaveBeenCalledWith(false, null, null);
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'Code review stopped',
});
});
});
describe('error handling', () => {
it('should handle errors gracefully', async () => {
const { isRunning } = await import('@/routes/code-review/common.js');
vi.mocked(isRunning).mockImplementation(() => {
throw new Error('Unexpected error');
});
const handler = createStopHandler();
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(500);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Unexpected error',
});
});
});
});

View File

@@ -0,0 +1,384 @@
/**
* Unit tests for code-review trigger route handler
*
* Tests:
* - Parameter validation
* - Request body validation (security)
* - Concurrent review prevention
* - Review execution
* - Error handling
*/
import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { Request, Response } from 'express';
import { createTriggerHandler } from '@/routes/code-review/routes/trigger.js';
import type { CodeReviewService } from '@/services/code-review-service.js';
import { createMockExpressContext } from '../../../utils/mocks.js';
// Mock the common module to control running state
vi.mock('@/routes/code-review/common.js', () => {
let running = false;
return {
isRunning: vi.fn(() => running),
setRunningState: vi.fn((state: boolean) => {
running = state;
}),
getErrorMessage: (e: unknown) => (e instanceof Error ? e.message : String(e)),
logError: vi.fn(),
getAbortController: vi.fn(() => null),
getCurrentProjectPath: vi.fn(() => null),
};
});
// Mock logger
vi.mock('@automaker/utils', async () => {
const actual = await vi.importActual<typeof import('@automaker/utils')>('@automaker/utils');
return {
...actual,
createLogger: vi.fn(() => ({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
})),
};
});
describe('code-review/trigger route', () => {
let mockCodeReviewService: CodeReviewService;
let req: Request;
let res: Response;
beforeEach(async () => {
vi.clearAllMocks();
// Reset running state
const { setRunningState, isRunning } = await import('@/routes/code-review/common.js');
vi.mocked(setRunningState)(false);
vi.mocked(isRunning).mockReturnValue(false);
mockCodeReviewService = {
executeReview: vi.fn().mockResolvedValue({
id: 'review-123',
verdict: 'approved',
summary: 'No issues found',
comments: [],
stats: {
totalComments: 0,
bySeverity: { critical: 0, high: 0, medium: 0, low: 0, info: 0 },
byCategory: {},
autoFixedCount: 0,
},
filesReviewed: ['src/index.ts'],
model: 'claude-sonnet-4-20250514',
reviewedAt: new Date().toISOString(),
durationMs: 1000,
}),
getProviderStatus: vi.fn(),
getBestProvider: vi.fn(),
refreshProviderStatus: vi.fn(),
initialize: vi.fn(),
} as any;
const context = createMockExpressContext();
req = context.req;
res = context.res;
});
describe('parameter validation', () => {
it('should return 400 if projectPath is missing', async () => {
req.body = {};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'projectPath is required',
});
expect(mockCodeReviewService.executeReview).not.toHaveBeenCalled();
});
it('should return 400 if files is not an array', async () => {
req.body = {
projectPath: '/test/project',
files: 'not-an-array',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'files must be an array',
});
});
it('should return 400 if too many files', async () => {
req.body = {
projectPath: '/test/project',
files: Array.from({ length: 150 }, (_, i) => `file${i}.ts`),
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Maximum 100 files allowed per request',
});
});
it('should return 400 if file path is too long', async () => {
req.body = {
projectPath: '/test/project',
files: ['a'.repeat(600)],
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'File path too long',
});
});
it('should return 400 if baseRef is not a string', async () => {
req.body = {
projectPath: '/test/project',
baseRef: 123,
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'baseRef must be a string',
});
});
it('should return 400 if baseRef is too long', async () => {
req.body = {
projectPath: '/test/project',
baseRef: 'a'.repeat(300),
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'baseRef is too long',
});
});
it('should return 400 if categories is not an array', async () => {
req.body = {
projectPath: '/test/project',
categories: 'security',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'categories must be an array',
});
});
it('should return 400 if category is invalid', async () => {
req.body = {
projectPath: '/test/project',
categories: ['security', 'invalid_category'],
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Invalid category: invalid_category',
});
});
it('should return 400 if autoFix is not a boolean', async () => {
req.body = {
projectPath: '/test/project',
autoFix: 'true',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'autoFix must be a boolean',
});
});
it('should return 400 if thinkingLevel is invalid', async () => {
req.body = {
projectPath: '/test/project',
thinkingLevel: 'invalid',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(400);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'Invalid thinkingLevel: invalid',
});
});
});
describe('concurrent review prevention', () => {
it('should return 409 if a review is already in progress', async () => {
const { isRunning } = await import('@/routes/code-review/common.js');
vi.mocked(isRunning).mockReturnValue(true);
req.body = { projectPath: '/test/project' };
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.status).toHaveBeenCalledWith(409);
expect(res.json).toHaveBeenCalledWith({
success: false,
error: 'A code review is already in progress',
});
expect(mockCodeReviewService.executeReview).not.toHaveBeenCalled();
});
});
describe('successful review execution', () => {
it('should trigger review and return success immediately', async () => {
req.body = {
projectPath: '/test/project',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'Code review started',
});
});
it('should pass all options to executeReview', async () => {
req.body = {
projectPath: '/test/project',
files: ['src/index.ts', 'src/utils.ts'],
baseRef: 'main',
categories: ['security', 'performance'],
autoFix: true,
model: 'claude-opus-4-5-20251101',
thinkingLevel: 'high',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
// Wait for async execution
await new Promise((resolve) => setTimeout(resolve, 10));
expect(mockCodeReviewService.executeReview).toHaveBeenCalledWith(
expect.objectContaining({
projectPath: '/test/project',
files: ['src/index.ts', 'src/utils.ts'],
baseRef: 'main',
categories: ['security', 'performance'],
autoFix: true,
model: 'claude-opus-4-5-20251101',
thinkingLevel: 'high',
abortController: expect.any(AbortController),
})
);
});
it('should accept valid categories', async () => {
const validCategories = [
'tech_stack',
'security',
'code_quality',
'implementation',
'architecture',
'performance',
'testing',
'documentation',
];
req.body = {
projectPath: '/test/project',
categories: validCategories,
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'Code review started',
});
});
it('should accept valid thinking levels', async () => {
for (const level of ['low', 'medium', 'high']) {
req.body = {
projectPath: '/test/project',
thinkingLevel: level,
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'Code review started',
});
vi.clearAllMocks();
}
});
});
describe('error handling', () => {
it('should handle service errors gracefully', async () => {
mockCodeReviewService.executeReview = vi.fn().mockRejectedValue(new Error('Service error'));
req.body = {
projectPath: '/test/project',
};
const handler = createTriggerHandler(mockCodeReviewService);
await handler(req, res);
// Response is sent immediately (async execution)
expect(res.json).toHaveBeenCalledWith({
success: true,
message: 'Code review started',
});
// Wait for async error handling
await new Promise((resolve) => setTimeout(resolve, 50));
// Running state should be reset
const { setRunningState } = await import('@/routes/code-review/common.js');
expect(setRunningState).toHaveBeenCalledWith(false);
});
});
});
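The validation rules the trigger tests pin down can be collected into one sketch. The category and thinking-level lists and the error strings come from the tests; the exact numeric limits are assumptions (the tests only show that 150 files, a 600-char path, and a 300-char ref are rejected).

```typescript
// Sketch of the trigger request validation implied by the tests above.
const VALID_CATEGORIES = [
  'tech_stack', 'security', 'code_quality', 'implementation',
  'architecture', 'performance', 'testing', 'documentation',
];
const VALID_THINKING_LEVELS = ['low', 'medium', 'high'];

function validateTriggerBody(body: any): string | null {
  if (!body.projectPath) return 'projectPath is required';
  if (body.files !== undefined) {
    if (!Array.isArray(body.files)) return 'files must be an array';
    if (body.files.length > 100) return 'Maximum 100 files allowed per request';
    // Assumed limit: the tests only show that a 600-char path is rejected
    if (body.files.some((f: string) => f.length > 500)) return 'File path too long';
  }
  if (body.baseRef !== undefined) {
    if (typeof body.baseRef !== 'string') return 'baseRef must be a string';
    // Assumed limit: the tests only show that a 300-char ref is rejected
    if (body.baseRef.length > 255) return 'baseRef is too long';
  }
  if (body.categories !== undefined) {
    if (!Array.isArray(body.categories)) return 'categories must be an array';
    for (const c of body.categories) {
      if (!VALID_CATEGORIES.includes(c)) return `Invalid category: ${c}`;
    }
  }
  if (body.autoFix !== undefined && typeof body.autoFix !== 'boolean')
    return 'autoFix must be a boolean';
  if (body.thinkingLevel !== undefined && !VALID_THINKING_LEVELS.includes(body.thinkingLevel))
    return `Invalid thinkingLevel: ${body.thinkingLevel}`;
  return null; // valid: the handler responds immediately and runs the review async
}
```

On a `null` result the route replies `{ success: true, message: 'Code review started' }` right away and lets `executeReview` run in the background, which is why the error-handling test only sees the running state reset, never a 500.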

View File

@@ -202,8 +202,17 @@ describe('auto-mode-service.ts - Planning Mode', () => {
});
describe('buildFeaturePrompt', () => {
-const buildFeaturePrompt = (svc: any, feature: any) => {
-return svc.buildFeaturePrompt(feature);
+const defaultTaskExecutionPrompts = {
+implementationInstructions: 'Test implementation instructions',
+playwrightVerificationInstructions: 'Test playwright instructions',
+};
+const buildFeaturePrompt = (
+svc: any,
+feature: any,
+taskExecutionPrompts = defaultTaskExecutionPrompts
+) => {
+return svc.buildFeaturePrompt(feature, taskExecutionPrompts);
};
it('should include feature ID and description', () => {
@@ -242,14 +251,15 @@ describe('auto-mode-service.ts - Planning Mode', () => {
expect(result).toContain('/tmp/image2.jpg');
});
-it('should include summary tags instruction', () => {
+it('should include implementation instructions', () => {
const feature = {
id: 'feat-123',
description: 'Test feature',
};
const result = buildFeaturePrompt(service, feature);
-expect(result).toContain('<summary>');
-expect(result).toContain('</summary>');
+// The prompt should include the implementation instructions passed to it
+expect(result).toContain('Test implementation instructions');
+expect(result).toContain('Test playwright instructions');
});
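The refactor tested in the hunk above moves the instruction text into a `taskExecutionPrompts` parameter instead of hard-coded `<summary>` tags. A minimal sketch of the new shape; only the parameter types come from the tests, the prompt wording is hypothetical:

```typescript
// Hypothetical sketch of the refactored buildFeaturePrompt signature.
interface TaskExecutionPrompts {
  implementationInstructions: string;
  playwrightVerificationInstructions: string;
}

interface Feature {
  id: string;
  description: string;
}

function buildFeaturePrompt(feature: Feature, prompts: TaskExecutionPrompts): string {
  // Template wording here is illustrative, not the real prompt text
  return [
    `Feature ${feature.id}: ${feature.description}`,
    prompts.implementationInstructions,
    prompts.playwrightVerificationInstructions,
  ].join('\n\n');
}
```

Passing the prompts in makes them configurable per task and lets the tests assert on injected marker strings rather than on fixed tag names.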
}); });

View File

@@ -91,7 +91,7 @@ describe('claude-usage-service.ts', () => {
it("should use 'where' command on Windows", async () => {
vi.mocked(os.platform).mockReturnValue('win32');
-const windowsService = new ClaudeUsageService(); // Create new service after platform mock
+const ptyService = new ClaudeUsageService(); // Create new service after platform mock
mockSpawnProcess.on.mockImplementation((event: string, callback: Function) => {
if (event === 'close') {
@@ -100,7 +100,7 @@ describe('claude-usage-service.ts', () => {
return mockSpawnProcess;
});
-await windowsService.isAvailable();
+await ptyService.isAvailable();
expect(spawn).toHaveBeenCalledWith('where', ['claude']);
});
@@ -403,120 +403,22 @@ Resets Jan 15, 3pm
});
});
-describe('executeClaudeUsageCommandMac', () => {
-beforeEach(() => {
-vi.mocked(os.platform).mockReturnValue('darwin');
-vi.spyOn(process, 'env', 'get').mockReturnValue({ HOME: '/Users/testuser' });
-});
+// Note: executeClaudeUsageCommandMac tests removed - the service now uses PTY for all platforms
+// The executeClaudeUsageCommandMac method exists but is dead code (never called)
+describe.skip('executeClaudeUsageCommandMac (deprecated - uses PTY now)', () => {
+it('should be skipped - service now uses PTY for all platforms', () => {
+expect(true).toBe(true);
it('should execute expect script and return output', async () => {
const mockOutput = `
Current session
65% left
Resets in 2h
`;
let stdoutCallback: Function;
let closeCallback: Function;
mockSpawnProcess.stdout = {
on: vi.fn((event: string, callback: Function) => {
if (event === 'data') {
stdoutCallback = callback;
}
}),
};
mockSpawnProcess.stderr = {
on: vi.fn(),
};
mockSpawnProcess.on = vi.fn((event: string, callback: Function) => {
if (event === 'close') {
closeCallback = callback;
}
return mockSpawnProcess;
});
const promise = service.fetchUsageData();
// Simulate stdout data
stdoutCallback!(Buffer.from(mockOutput));
// Simulate successful close
closeCallback!(0);
const result = await promise;
expect(result.sessionPercentage).toBe(35); // 100 - 65
expect(spawn).toHaveBeenCalledWith(
'expect',
expect.arrayContaining(['-c']),
expect.any(Object)
);
});
it('should handle authentication errors', async () => {
const mockOutput = 'token_expired';
let stdoutCallback: Function;
let closeCallback: Function;
mockSpawnProcess.stdout = {
on: vi.fn((event: string, callback: Function) => {
if (event === 'data') {
stdoutCallback = callback;
}
}),
};
mockSpawnProcess.stderr = {
on: vi.fn(),
};
mockSpawnProcess.on = vi.fn((event: string, callback: Function) => {
if (event === 'close') {
closeCallback = callback;
}
return mockSpawnProcess;
});
const promise = service.fetchUsageData();
stdoutCallback!(Buffer.from(mockOutput));
closeCallback!(1);
await expect(promise).rejects.toThrow('Authentication required');
});
it('should handle timeout with no data', async () => {
vi.useFakeTimers();
mockSpawnProcess.stdout = {
on: vi.fn(),
};
mockSpawnProcess.stderr = {
on: vi.fn(),
};
mockSpawnProcess.on = vi.fn(() => mockSpawnProcess);
mockSpawnProcess.kill = vi.fn();
const promise = service.fetchUsageData();
// Advance time past timeout (30 seconds)
vi.advanceTimersByTime(31000);
await expect(promise).rejects.toThrow('Command timed out');
vi.useRealTimers();
});
});
-describe('executeClaudeUsageCommandWindows', () => {
+describe('executeClaudeUsageCommandPty', () => {
// Note: The service now uses PTY for all platforms, using process.cwd() as the working directory
beforeEach(() => {
vi.mocked(os.platform).mockReturnValue('win32');
vi.mocked(os.homedir).mockReturnValue('C:\\Users\\testuser');
vi.spyOn(process, 'env', 'get').mockReturnValue({ USERPROFILE: 'C:\\Users\\testuser' });
});
-it('should use node-pty on Windows and return output', async () => {
-const windowsService = new ClaudeUsageService(); // Create new service for Windows platform
+it('should use node-pty and return output', async () => {
+const ptyService = new ClaudeUsageService();
const mockOutput = `
Current session
65% left
@@ -538,7 +440,7 @@ Resets in 2h
};
vi.mocked(pty.spawn).mockReturnValue(mockPty as any);
-const promise = windowsService.fetchUsageData();
+const promise = ptyService.fetchUsageData();
// Simulate data
dataCallback!(mockOutput);
@@ -549,16 +451,19 @@ Resets in 2h
const result = await promise;
expect(result.sessionPercentage).toBe(35);
// Service uses process.cwd() for --add-dir
expect(pty.spawn).toHaveBeenCalledWith(
'cmd.exe',
-['/c', 'claude', '--add-dir', 'C:\\Users\\testuser'],
-expect.any(Object)
+['/c', 'claude', '--add-dir', process.cwd()],
+expect.objectContaining({
+cwd: process.cwd(),
+})
);
});
it('should send escape key after seeing usage data', async () => {
vi.useFakeTimers();
-const windowsService = new ClaudeUsageService();
+const ptyService = new ClaudeUsageService();
const mockOutput = 'Current session\n65% left';
@@ -577,7 +482,7 @@ Resets in 2h
};
vi.mocked(pty.spawn).mockReturnValue(mockPty as any);
-const promise = windowsService.fetchUsageData();
+const promise = ptyService.fetchUsageData();
// Simulate seeing usage data
dataCallback!(mockOutput);
@@ -594,8 +499,8 @@ Resets in 2h
vi.useRealTimers();
});
-it('should handle authentication errors on Windows', async () => {
-const windowsService = new ClaudeUsageService();
+it('should handle authentication errors', async () => {
+const ptyService = new ClaudeUsageService();
let dataCallback: Function | undefined;
let exitCallback: Function | undefined;
@@ -611,18 +516,22 @@ Resets in 2h
};
vi.mocked(pty.spawn).mockReturnValue(mockPty as any);
-const promise = windowsService.fetchUsageData();
-dataCallback!('authentication_error');
+const promise = ptyService.fetchUsageData();
+// Send data containing the authentication error pattern the service looks for
+dataCallback!('"type":"authentication_error"');
+// Trigger the exit handler which checks for auth errors
+exitCallback!({ exitCode: 1 });
await expect(promise).rejects.toThrow(
"Claude CLI authentication issue. Please run 'claude logout' and then 'claude login' in your terminal to refresh permissions."
);
});
-it('should handle timeout with no data on Windows', async () => {
+it('should handle timeout with no data', async () => {
vi.useFakeTimers();
-const windowsService = new ClaudeUsageService();
+const ptyService = new ClaudeUsageService();
const mockPty = {
onData: vi.fn(),
@@ -633,7 +542,7 @@ Resets in 2h
};
vi.mocked(pty.spawn).mockReturnValue(mockPty as any);
-const promise = windowsService.fetchUsageData();
+const promise = ptyService.fetchUsageData();
// Advance time past timeout (45 seconds)
vi.advanceTimersByTime(46000);
@@ -648,7 +557,7 @@ Resets in 2h
it('should return data on timeout if data was captured', async () => {
vi.useFakeTimers();
-const windowsService = new ClaudeUsageService();
+const ptyService = new ClaudeUsageService();
let dataCallback: Function | undefined;
@@ -663,7 +572,7 @@ Resets in 2h
};
vi.mocked(pty.spawn).mockReturnValue(mockPty as any);
-const promise = windowsService.fetchUsageData();
+const promise = ptyService.fetchUsageData();
// Simulate receiving usage data
dataCallback!('Current session\n65% left\nResets in 2h');
@@ -681,7 +590,9 @@ Resets in 2h
it('should send SIGTERM after ESC if process does not exit', async () => {
vi.useFakeTimers();
-const windowsService = new ClaudeUsageService();
+// Mock Unix platform to test SIGTERM behavior (Windows calls kill() without signal)
+vi.mocked(os.platform).mockReturnValue('darwin');
+const ptyService = new ClaudeUsageService();
let dataCallback: Function | undefined;
@@ -696,7 +607,7 @@ Resets in 2h
}; };
vi.mocked(pty.spawn).mockReturnValue(mockPty as any); vi.mocked(pty.spawn).mockReturnValue(mockPty as any);
windowsService.fetchUsageData(); ptyService.fetchUsageData();
// Simulate seeing usage data // Simulate seeing usage data
dataCallback!('Current session\n65% left'); dataCallback!('Current session\n65% left');
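The mock output these tests push through the pty data callback ('Current session\n65% left\nResets in 2h') has a small, regular shape. As a hedged illustration only — `parseUsageOutput` and `UsageSnapshot` are hypothetical names, not the actual ClaudeUsageService API — extracting the two fields might look like:

```typescript
// Hypothetical parser for the mock output used in these tests.
// The real service may parse the CLI output differently.
interface UsageSnapshot {
  percentLeft: number; // e.g. 65 from "65% left"
  resetsIn: string;    // e.g. "2h" from "Resets in 2h"
}

function parseUsageOutput(raw: string): UsageSnapshot | null {
  // Match "NN% left" anywhere in the captured terminal output
  const percentMatch = raw.match(/(\d+)%\s*left/);
  // Match the token after "Resets in"
  const resetMatch = raw.match(/Resets in\s*(\S+)/);
  if (!percentMatch) return null; // no usage data seen yet
  return {
    percentLeft: Number(percentMatch[1]),
    resetsIn: resetMatch ? resetMatch[1] : 'unknown',
  };
}
```

A parser like this is why the tests can treat "data was captured" as a boolean: any output matching the percent pattern counts as usable data at timeout.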

File diff suppressed because it is too large


@@ -190,9 +190,10 @@ describe('feature-loader.ts', () => {
   const result = await loader.getAll(testProjectPath);
   expect(result).toEqual([]);
+  // With recovery-enabled reads, warnings come from AtomicWriter and FeatureLoader
   expect(consoleSpy).toHaveBeenCalledWith(
-    expect.stringMatching(/WARN.*\[FeatureLoader\]/),
-    expect.stringContaining('Failed to parse feature.json')
+    expect.stringMatching(/WARN.*\[AtomicWriter\]/),
+    expect.stringContaining('unavailable')
   );
   consoleSpy.mockRestore();
@@ -260,10 +261,13 @@ describe('feature-loader.ts', () => {
   expect(result).toBeNull();
 });
-it('should throw on other errors', async () => {
+it('should return null on other errors (with recovery attempt)', async () => {
+  // With recovery-enabled reads, get() returns null instead of throwing
+  // because it attempts to recover from backups before giving up
   vi.mocked(fs.readFile).mockRejectedValue(new Error('Permission denied'));
-  await expect(loader.get(testProjectPath, 'feature-123')).rejects.toThrow('Permission denied');
+  const result = await loader.get(testProjectPath, 'feature-123');
+  expect(result).toBeNull();
 });
});
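The renamed test above pins down a behavioral change: with recovery-enabled reads, `get()` returns `null` instead of throwing, because backups are consulted before giving up. A minimal synchronous sketch of that recover-then-null pattern (hypothetical names, not the FeatureLoader implementation):

```typescript
// Hypothetical sketch of a recovery-enabled read: try the primary source,
// fall back to a backup, and surface null instead of an exception.
function readWithRecovery(
  readPrimary: () => string,
  readBackup: () => string
): string | null {
  try {
    return readPrimary();
  } catch {
    try {
      return readBackup();
    } catch {
      return null; // callers treat null as "not found" rather than a crash
    }
  }
}
```

Under this contract, even a `Permission denied` on the primary file only propagates as a `null` result, which is exactly what the updated test asserts.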
@@ -442,4 +446,471 @@ describe('feature-loader.ts', () => {
  );
});
});
describe('findByTitle', () => {
it('should find feature by exact title match (case-insensitive)', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
{ name: 'feature-2', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
title: 'Login Feature',
category: 'auth',
description: 'Login implementation',
})
)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-2000-def',
title: 'Logout Feature',
category: 'auth',
description: 'Logout implementation',
})
);
const result = await loader.findByTitle(testProjectPath, 'LOGIN FEATURE');
expect(result).not.toBeNull();
expect(result?.id).toBe('feature-1000-abc');
expect(result?.title).toBe('Login Feature');
});
it('should return null when title is not found', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile).mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
title: 'Login Feature',
category: 'auth',
description: 'Login implementation',
})
);
const result = await loader.findByTitle(testProjectPath, 'Nonexistent Feature');
expect(result).toBeNull();
});
it('should return null for empty or whitespace title', async () => {
const result1 = await loader.findByTitle(testProjectPath, '');
const result2 = await loader.findByTitle(testProjectPath, ' ');
expect(result1).toBeNull();
expect(result2).toBeNull();
});
it('should skip features without titles', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
{ name: 'feature-2', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
// no title
category: 'auth',
description: 'Login implementation',
})
)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-2000-def',
title: 'Login Feature',
category: 'auth',
description: 'Another login',
})
);
const result = await loader.findByTitle(testProjectPath, 'Login Feature');
expect(result).not.toBeNull();
expect(result?.id).toBe('feature-2000-def');
});
});
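The `findByTitle` tests above imply a lookup that trims whitespace, compares titles case-insensitively, skips features without titles, and rejects empty queries. A self-contained sketch of that contract (the `FeatureLike` shape and function name are assumptions for illustration, not the loader's real types):

```typescript
// Assumed minimal shape; the real feature records carry more fields.
interface FeatureLike {
  id: string;
  title?: string;
}

function findByTitleSketch(features: FeatureLike[], title: string): FeatureLike | null {
  const needle = title.trim().toLowerCase();
  if (!needle) return null; // empty or whitespace-only queries never match
  return (
    // Features without a title are skipped by the truthiness check
    features.find((f) => f.title && f.title.trim().toLowerCase() === needle) ?? null
  );
}
```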
describe('findDuplicateTitle', () => {
it('should find duplicate title', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile).mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
title: 'My Feature',
category: 'ui',
description: 'Feature description',
})
);
const result = await loader.findDuplicateTitle(testProjectPath, 'my feature');
expect(result).not.toBeNull();
expect(result?.id).toBe('feature-1000-abc');
});
it('should exclude specified feature ID from duplicate check', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
{ name: 'feature-2', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
title: 'My Feature',
category: 'ui',
description: 'Feature 1',
})
)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-2000-def',
title: 'Other Feature',
category: 'ui',
description: 'Feature 2',
})
);
// Should not find duplicate when excluding the feature that has the title
const result = await loader.findDuplicateTitle(
testProjectPath,
'My Feature',
'feature-1000-abc'
);
expect(result).toBeNull();
});
it('should find duplicate when title exists on different feature', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
{ name: 'feature-2', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
title: 'My Feature',
category: 'ui',
description: 'Feature 1',
})
)
.mockResolvedValueOnce(
JSON.stringify({
id: 'feature-2000-def',
title: 'Other Feature',
category: 'ui',
description: 'Feature 2',
})
);
// Should find duplicate because feature-1000-abc has the title and we're excluding feature-2000-def
const result = await loader.findDuplicateTitle(
testProjectPath,
'My Feature',
'feature-2000-def'
);
expect(result).not.toBeNull();
expect(result?.id).toBe('feature-1000-abc');
});
it('should return null for empty or whitespace title', async () => {
const result1 = await loader.findDuplicateTitle(testProjectPath, '');
const result2 = await loader.findDuplicateTitle(testProjectPath, ' ');
expect(result1).toBeNull();
expect(result2).toBeNull();
});
it('should handle titles with leading/trailing whitespace', async () => {
vi.mocked(fs.access).mockResolvedValue(undefined);
vi.mocked(fs.readdir).mockResolvedValue([
{ name: 'feature-1', isDirectory: () => true } as any,
]);
vi.mocked(fs.readFile).mockResolvedValueOnce(
JSON.stringify({
id: 'feature-1000-abc',
title: 'My Feature',
category: 'ui',
description: 'Feature description',
})
);
const result = await loader.findDuplicateTitle(testProjectPath, ' My Feature ');
expect(result).not.toBeNull();
expect(result?.id).toBe('feature-1000-abc');
});
});
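`findDuplicateTitle` adds one twist to the same matching rule: an optional ID to exclude, so a feature being updated does not collide with its own title. A hedged sketch consistent with the tests above (names and shapes are illustrative):

```typescript
// Assumed minimal shape for illustration.
interface FeatureRef {
  id: string;
  title?: string;
}

function findDuplicateTitleSketch(
  features: FeatureRef[],
  title: string,
  excludeId?: string
): FeatureRef | null {
  const needle = title.trim().toLowerCase();
  if (!needle) return null; // empty or whitespace-only titles are never duplicates
  return (
    features.find(
      // Skip the excluded feature so an update does not flag itself
      (f) => f.id !== excludeId && f.title?.trim().toLowerCase() === needle
    ) ?? null
  );
}
```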
describe('syncFeatureToAppSpec', () => {
const sampleAppSpec = `<?xml version="1.0" encoding="UTF-8"?>
<project_specification>
<project_name>Test Project</project_name>
<core_capabilities>
<capability>Testing</capability>
</core_capabilities>
<implemented_features>
<feature>
<name>Existing Feature</name>
<description>Already implemented</description>
</feature>
</implemented_features>
</project_specification>`;
const appSpecWithoutFeatures = `<?xml version="1.0" encoding="UTF-8"?>
<project_specification>
<project_name>Test Project</project_name>
<core_capabilities>
<capability>Testing</capability>
</core_capabilities>
</project_specification>`;
it('should add feature to app_spec.txt', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
title: 'New Feature',
category: 'ui',
description: 'A new feature description',
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature);
expect(result).toBe(true);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.stringContaining('app_spec.txt'),
expect.stringContaining('New Feature'),
'utf-8'
);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('A new feature description'),
'utf-8'
);
});
it('should add feature with file locations', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
title: 'Feature With Locations',
category: 'backend',
description: 'Feature with file locations',
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature, [
'src/feature.ts',
'src/utils/helper.ts',
]);
expect(result).toBe(true);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('src/feature.ts'),
'utf-8'
);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('src/utils/helper.ts'),
'utf-8'
);
});
it('should return false when app_spec.txt does not exist', async () => {
const error: any = new Error('File not found');
error.code = 'ENOENT';
vi.mocked(fs.readFile).mockRejectedValueOnce(error);
const feature = {
id: 'feature-1234-abc',
title: 'New Feature',
category: 'ui',
description: 'A new feature description',
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature);
expect(result).toBe(false);
expect(fs.writeFile).not.toHaveBeenCalled();
});
it('should return false when feature already exists (duplicate)', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
const feature = {
id: 'feature-5678-xyz',
title: 'Existing Feature', // Same name as existing feature
category: 'ui',
description: 'Different description',
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature);
expect(result).toBe(false);
expect(fs.writeFile).not.toHaveBeenCalled();
});
it('should use feature ID as fallback name when title is missing', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
category: 'ui',
description: 'Feature without title',
// No title property
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature);
expect(result).toBe(true);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('Feature: feature-1234-abc'),
'utf-8'
);
});
it('should handle app_spec without implemented_features section', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(appSpecWithoutFeatures);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
title: 'First Feature',
category: 'ui',
description: 'First implemented feature',
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature);
expect(result).toBe(true);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('<implemented_features>'),
'utf-8'
);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('First Feature'),
'utf-8'
);
});
it('should throw on non-ENOENT file read errors', async () => {
const error = new Error('Permission denied');
vi.mocked(fs.readFile).mockRejectedValueOnce(error);
const feature = {
id: 'feature-1234-abc',
title: 'New Feature',
category: 'ui',
description: 'A new feature description',
};
await expect(loader.syncFeatureToAppSpec(testProjectPath, feature)).rejects.toThrow(
'Permission denied'
);
});
it('should preserve existing features when adding a new one', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
title: 'New Feature',
category: 'ui',
description: 'A new feature',
};
await loader.syncFeatureToAppSpec(testProjectPath, feature);
// Verify both old and new features are in the output
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('Existing Feature'),
'utf-8'
);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('New Feature'),
'utf-8'
);
});
it('should escape special characters in feature name and description', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
title: 'Feature with <special> & "chars"',
category: 'ui',
description: 'Description with <tags> & "quotes"',
};
const result = await loader.syncFeatureToAppSpec(testProjectPath, feature);
expect(result).toBe(true);
// The XML should have escaped characters
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('&lt;special&gt;'),
'utf-8'
);
expect(fs.writeFile).toHaveBeenCalledWith(
expect.any(String),
expect.stringContaining('&amp;'),
'utf-8'
);
});
it('should not add empty file_locations array', async () => {
vi.mocked(fs.readFile).mockResolvedValueOnce(sampleAppSpec);
vi.mocked(fs.writeFile).mockResolvedValue(undefined);
const feature = {
id: 'feature-1234-abc',
title: 'Feature Without Locations',
category: 'ui',
description: 'No file locations',
};
await loader.syncFeatureToAppSpec(testProjectPath, feature, []);
// File locations should not be included when array is empty
const writeCall = vi.mocked(fs.writeFile).mock.calls[0];
const writtenContent = writeCall[1] as string;
// Count occurrences of file_locations - should only have the one from Existing Feature if any
// The new feature should not add file_locations
expect(writtenContent).toContain('Feature Without Locations');
});
});
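The escaping test above expects `&lt;`, `&gt;`, and `&amp;` entities in the written XML. A minimal escaper consistent with those assertions — not necessarily the loader's actual implementation — must replace `&` first to avoid double-escaping the entities it emits:

```typescript
// Minimal XML text escaper; order matters.
function escapeXml(value: string): string {
  return value
    .replace(/&/g, '&amp;') // must run first so later entities are not double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```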
});


@@ -1,6 +1,6 @@
 {
   "name": "@automaker/ui",
-  "version": "0.11.0",
+  "version": "0.12.0",
   "description": "An autonomous AI development studio that helps you build software faster using AI-powered agents",
   "homepage": "https://github.com/AutoMaker-Org/automaker",
   "repository": {
@@ -48,6 +48,22 @@
   "@dnd-kit/core": "6.3.1",
   "@dnd-kit/sortable": "10.0.0",
   "@dnd-kit/utilities": "3.2.2",
+  "@fontsource/cascadia-code": "^5.2.3",
+  "@fontsource/fira-code": "^5.2.7",
+  "@fontsource/ibm-plex-mono": "^5.2.7",
+  "@fontsource/inconsolata": "^5.2.8",
+  "@fontsource/inter": "^5.2.8",
+  "@fontsource/iosevka": "^5.2.5",
+  "@fontsource/jetbrains-mono": "^5.2.8",
+  "@fontsource/lato": "^5.2.7",
+  "@fontsource/montserrat": "^5.2.8",
+  "@fontsource/open-sans": "^5.2.7",
+  "@fontsource/poppins": "^5.2.7",
+  "@fontsource/raleway": "^5.2.8",
+  "@fontsource/roboto": "^5.2.9",
+  "@fontsource/source-code-pro": "^5.2.7",
+  "@fontsource/source-sans-3": "^5.2.9",
+  "@fontsource/work-sans": "^5.2.8",
   "@lezer/highlight": "1.2.3",
   "@radix-ui/react-checkbox": "1.3.3",
   "@radix-ui/react-collapsible": "1.1.12",
@@ -204,12 +220,34 @@
   "arch": [
     "x64"
   ]
+},
+{
+  "target": "rpm",
+  "arch": [
+    "x64"
+  ]
 }
 ],
 "category": "Development",
 "icon": "public/logo_larger.png",
 "maintainer": "webdevcody@gmail.com",
-"executableName": "automaker"
+"executableName": "automaker",
+"description": "An autonomous AI development studio that helps you build software faster using AI-powered agents",
+"synopsis": "AI-powered autonomous development studio"
+},
+"rpm": {
+  "depends": [
+    "gtk3",
+    "libnotify",
+    "nss",
+    "libXScrnSaver",
+    "libXtst",
+    "xdg-utils",
+    "at-spi2-core",
+    "libuuid"
+  ],
+  "compression": "xz",
+  "vendor": "AutoMaker Team"
 },
 "nsis": {
   "oneClick": false,


@@ -8,6 +8,7 @@ import { useCursorStatusInit } from './hooks/use-cursor-status-init';
 import { useProviderAuthInit } from './hooks/use-provider-auth-init';
 import './styles/global.css';
 import './styles/theme-imports';
+import './styles/font-imports';
 const logger = createLogger('App');


@@ -0,0 +1,67 @@
/* Zed Fonts - https://github.com/zed-industries/zed-fonts */
/* Zed Sans - UI Font */
@font-face {
font-family: 'Zed Sans';
font-style: normal;
font-weight: 400;
font-display: swap;
src: url('./zed-sans-extended.ttf') format('truetype');
}
@font-face {
font-family: 'Zed Sans';
font-style: italic;
font-weight: 400;
font-display: swap;
src: url('./zed-sans-extendeditalic.ttf') format('truetype');
}
@font-face {
font-family: 'Zed Sans';
font-style: normal;
font-weight: 700;
font-display: swap;
src: url('./zed-sans-extendedbold.ttf') format('truetype');
}
@font-face {
font-family: 'Zed Sans';
font-style: italic;
font-weight: 700;
font-display: swap;
src: url('./zed-sans-extendedbolditalic.ttf') format('truetype');
}
/* Zed Mono - Code Font */
@font-face {
font-family: 'Zed Mono';
font-style: normal;
font-weight: 400;
font-display: swap;
src: url('./zed-mono-extended.ttf') format('truetype');
}
@font-face {
font-family: 'Zed Mono';
font-style: italic;
font-weight: 400;
font-display: swap;
src: url('./zed-mono-extendeditalic.ttf') format('truetype');
}
@font-face {
font-family: 'Zed Mono';
font-style: normal;
font-weight: 700;
font-display: swap;
src: url('./zed-mono-extendedbold.ttf') format('truetype');
}
@font-face {
font-family: 'Zed Mono';
font-style: italic;
font-weight: 700;
font-display: swap;
src: url('./zed-mono-extendedbolditalic.ttf') format('truetype');
}

Some files were not shown because too many files have changed in this diff