Compare commits

...

576 Commits

Author SHA1 Message Date
Kacper
d4ce1f331b feat: add onboarding wizard components and logic
- Introduced a new onboarding wizard for guiding users through the board features.
- Added shared onboarding components including OnboardingWizard, useOnboardingWizard, and related types and constants.
- Implemented onboarding logic in the BoardView, integrating sample feature generation for a quick start.
- Updated UI components to support onboarding interactions and improve user experience.
- Enhanced onboarding state management with localStorage persistence for user progress tracking.
2026-01-11 16:05:04 +01:00
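A minimal sketch of how a hook like the `useOnboardingWizard` mentioned in the commit above might persist progress to localStorage. The storage key, the `OnboardingProgress` shape, and the `totalSteps` parameter are assumptions for illustration, not the project's actual implementation.

```typescript
import { useCallback, useState } from "react";

// Hypothetical storage key and progress shape; the real constants live in the shared onboarding module.
const STORAGE_KEY = "automaker-onboarding-progress";

interface OnboardingProgress {
  currentStep: number;
  completed: boolean;
}

function loadProgress(): OnboardingProgress {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    return raw ? (JSON.parse(raw) as OnboardingProgress) : { currentStep: 0, completed: false };
  } catch {
    return { currentStep: 0, completed: false };
  }
}

export function useOnboardingWizard(totalSteps: number) {
  const [progress, setProgress] = useState<OnboardingProgress>(loadProgress);

  // Persist every transition so progress survives reloads.
  const update = useCallback((next: OnboardingProgress) => {
    setProgress(next);
    localStorage.setItem(STORAGE_KEY, JSON.stringify(next));
  }, []);

  const nextStep = useCallback(() => {
    const currentStep = Math.min(progress.currentStep + 1, totalSteps - 1);
    update({ currentStep, completed: currentStep === totalSteps - 1 });
  }, [progress, totalSteps, update]);

  return { progress, nextStep };
}
```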
Shirone
66b68ea4eb feat: implement onboarding wizard for board view
- Added a new onboarding wizard to guide users through the board features.
- Integrated sample feature generation for quick start onboarding.
- Enhanced BoardView component to manage onboarding state and actions.
- Updated BoardControls and BoardHeader to include tour functionality.
- Introduced utility functions for sample feature management in constants.
- Improved user experience with toast notifications for onboarding actions.
2026-01-11 12:49:29 +01:00
Shirone
6e4b611662 refactor: optimize bulk delete handler and UI feedback
- Refactored the bulk delete handler to utilize Promise.all for concurrent deletion of features, improving performance and error handling.
- Updated the BoardView component to handle deletion results more effectively, providing user feedback for both successful and failed deletions.
- Enhanced local state management to avoid redundant API calls during feature deletion.
2026-01-11 11:17:28 +01:00
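A sketch of the concurrent bulk delete described above, assuming a per-feature delete call is injected. Each deletion catches its own error so `Promise.all` still resolves and both successes and failures can be reported to the user; the `DeleteResult` shape is an assumption.

```typescript
// Hypothetical result shape; the real client exposes its own feature-delete call.
interface DeleteResult {
  id: string;
  ok: boolean;
  error?: string;
}

async function bulkDeleteFeatures(
  ids: string[],
  deleteFeature: (id: string) => Promise<void>,
): Promise<{ deleted: string[]; failed: DeleteResult[] }> {
  // Fire all deletions concurrently; catch per feature so one failure
  // doesn't reject the whole batch.
  const results = await Promise.all(
    ids.map(async (id): Promise<DeleteResult> => {
      try {
        await deleteFeature(id);
        return { id, ok: true };
      } catch (err) {
        return { id, ok: false, error: err instanceof Error ? err.message : String(err) };
      }
    }),
  );

  return {
    deleted: results.filter((r) => r.ok).map((r) => r.id),
    failed: results.filter((r) => !r.ok),
  };
}
```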
DhanushSantosh
7522e58fee Merge remote-tracking branch 'upstream/v0.10.0rc' into feature/codex-cli 2026-01-11 14:51:06 +05:30
Shirone
317c21ffc0 Merge pull request #413 from AutoMaker-Org/feat/bulk-delete-features-in-backlog
feat: bulk delete in backlog mass select
2026-01-11 09:19:15 +00:00
Shirone
9c5fe44617 feat: add bulk delete functionality for features
- Introduced a new endpoint `/bulk-delete` to allow deletion of multiple features at once.
- Implemented `createBulkDeleteHandler` to process bulk delete requests and handle success/failure responses.
- Updated the UI to include a bulk delete option in the BoardView component, with confirmation dialog for user actions.
- Enhanced the HTTP API client to support bulk delete requests.
- Improved the selection action bar to trigger bulk delete functionality and provide user feedback.
2026-01-11 10:17:35 +01:00
DhanushSantosh
7f79d9692c feat: Add official icons for MiniMax, GLM (Z.ai), and BigPickle models
- Add official MiniMax logo SVG from LobeHub icons library
- Add official Z.ai logo SVG for GLM models from LobeHub icons library
- Add BigPickle icon with custom green color (#4ADE80)
- Fix icon detection logic to properly handle amazon-bedrock/ and opencode/ prefixes
- Update phase-model-selector and opencode-model-configuration to use
  getProviderIconForModel() for consistent icon display across the app

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 14:44:32 +05:30
Shirone
2d4ffc7514 feat: add accept all functionality in ideation view
- Introduced a new "Accept All" button in the IdeationHeader component, allowing users to accept all filtered suggestions at once.
- Implemented state management for accept all readiness and count in the IdeationView component.
- Enhanced the IdeationDashboard to notify the parent component about the readiness of the accept all feature.
- Added logic to handle the acceptance of all suggestions, including success and failure notifications.
- Updated UI components to reflect the new functionality and improve user experience.
2026-01-11 10:01:01 +01:00
webdevcody
5f3db1f25e feat: enhance spec regeneration management by project
- Refactored spec regeneration status tracking to support multiple projects using a Map for running states and abort controllers.
- Updated `getSpecRegenerationStatus` to accept a project path, allowing retrieval of status specific to a project.
- Modified `setRunningState` to manage running states and abort controllers per project.
- Adjusted related route handlers to utilize project-specific status checks and updates.
- Introduced a new Graph View page and integrated it into the routing structure.
- Enhanced UI components to reflect the current project’s spec generation state.
2026-01-11 01:37:26 -05:00
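A rough sketch of the per-project tracking described above: a Map keyed by project path holding running state and an AbortController. The function names mirror the commit; the state shape is an assumption.

```typescript
// Hypothetical module-level state keyed by project path.
interface RegenerationState {
  running: boolean;
  controller: AbortController;
}

const regenerationStates = new Map<string, RegenerationState>();

export function setRunningState(projectPath: string, running: boolean): void {
  if (running) {
    regenerationStates.set(projectPath, { running: true, controller: new AbortController() });
  } else {
    // Abort any in-flight work for this project before clearing its entry.
    regenerationStates.get(projectPath)?.controller.abort();
    regenerationStates.delete(projectPath);
  }
}

export function getSpecRegenerationStatus(projectPath: string): { running: boolean } {
  return { running: regenerationStates.get(projectPath)?.running ?? false };
}
```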
webdevcody
7115460804 feat: add resume interrupted features endpoint and handler
- Introduced a new endpoint `/resume-interrupted` to handle resuming features that were interrupted during server restarts.
- Implemented the `createResumeInterruptedHandler` to check for and resume interrupted features based on the project path.
- Enhanced the `AutoModeService` to track and manage the execution state of features, ensuring they can be resumed correctly.
- Updated relevant types and prompts to include the new 'ux-reviewer' enhancement mode for better user experience handling.
- Added new templates for UX review and other enhancement modes to improve task descriptions from a user experience perspective.
2026-01-11 01:37:13 -05:00
webdevcody
0db8808b2a Merge branch 'memory-ui' into v0.10.0rc 2026-01-10 20:13:07 -05:00
webdevcody
cf3ed1dd8f Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 20:08:02 -05:00
webdevcody
da682e3993 feat: add memory management feature with UI components
- Introduced a new MemoryView component for viewing and editing AI memory files.
- Updated navigation hooks and keyboard shortcuts to include memory functionality.
- Added memory file creation, deletion, and renaming capabilities.
- Enhanced the sidebar navigation to support memory as a new section.
- Implemented loading and saving of memory files with a markdown editor.
- Integrated dialogs for creating, deleting, and renaming memory files.
2026-01-10 20:07:50 -05:00
DhanushSantosh
a92457b871 fix: Handle Claude CLI unavailability gracefully in CI
- Add try-catch around pty.spawn() to prevent crashes when PTY unavailable
- Add unhandledRejection/uncaughtException handlers for graceful degradation
- Add checkBackendHealth/waitForBackendHealth utilities for tests
- Add data/.api-key and data/credentials.json to .gitignore
2026-01-11 03:22:43 +05:30
DhanushSantosh
89a960629a fix: Improve E2E test workflow for better backend debugging
Enhanced backend server startup in CI:
- Track server PID and process status
- Save logs to backend.log for debugging
- Better error detection with process monitoring
- Added cleanup step to kill server process
- Print backend logs on test failure

Improves reliability of E2E tests by providing better diagnostics when backend fails to start
2026-01-11 02:58:56 +05:30
DhanushSantosh
41144ff1fa Merge recovered upstream commits including worktree enhancement
This brings back commits that were accidentally overwritten during a force push:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Plus any other changes from upstream/v0.10.0rc at that time

The merge ensures all changes are preserved while keeping the history intact.
2026-01-11 02:27:21 +05:30
DhanushSantosh
360cddcb91 Merge upstream commits including worktree enhancement
This recovers the commits that were accidentally overwritten during force push.

Included:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Any other commits from upstream/v0.10.0rc at that point
2026-01-11 02:27:08 +05:30
DhanushSantosh
427832e72e fix: Display correct provider icons for all OpenCode/Bedrock models
The issue was that ALL OpenCode models were showing the OpenCode icon, regardless
of their actual underlying provider. This fix ensures each model shows its
authentic brand icon.

Changes:

1. **model-constants.ts** - Fixed provider field assignment
   - Changed provider from hardcoded 'opencode' to actual config.provider
   - Now correctly maps: opencode/big-pickle, amazon-bedrock/anthropic.*, etc.

2. **phase-model-selector.tsx** - Added provider-specific icon logic
   - Added imports for DeepSeekIcon, NovaIcon, QwenIcon, MistralIcon, MetaIcon
   - Added ProviderIcon selector based on model.provider field
   - Each model type now displays its correct provider icon

3. **provider-icon.tsx** - Updated icon detection and mapping
   - Enhanced getUnderlyingModelIcon() to detect specific Bedrock providers:
     * amazon-bedrock/anthropic.* → anthropic icon
     * amazon-bedrock/deepseek.* → deepseek icon
     * amazon-bedrock/nova.* → nova icon
     * amazon-bedrock/meta.* or llama → meta icon
     * amazon-bedrock/mistral.* → mistral icon
     * amazon-bedrock/qwen.* → qwen icon
     * opencode/* → opencode icon
   - Added meta and mistral to PROVIDER_ICON_KEYS
   - Added placeholder definitions for meta/mistral in PROVIDER_ICON_DEFINITIONS
   - Updated iconMap to include all provider icons
   - Set OpenCode icon to official brand color (#6366F1 indigo)

Result: All model selectors and kanban cards now show correct brand icons
for each OpenCode model (DeepSeek whale, Amazon Nova sparkle, Qwen star, etc.)
2026-01-11 02:18:55 +05:30
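The prefix mappings listed in the commit above suggest detection logic along these lines. `getUnderlyingModelIcon` is named in the commit; the regex form and return type are assumptions.

```typescript
type ProviderIconKey =
  | "anthropic" | "deepseek" | "nova" | "meta" | "mistral" | "qwen" | "opencode";

// Mappings taken from the commit description; expressed here as regex pairs.
const BEDROCK_PREFIX_ICONS: Array<[RegExp, ProviderIconKey]> = [
  [/^amazon-bedrock\/anthropic\./, "anthropic"],
  [/^amazon-bedrock\/deepseek\./, "deepseek"],
  [/^amazon-bedrock\/nova\./, "nova"],
  [/^amazon-bedrock\/(meta\.|.*llama)/, "meta"],
  [/^amazon-bedrock\/mistral\./, "mistral"],
  [/^amazon-bedrock\/qwen\./, "qwen"],
];

export function getUnderlyingModelIcon(modelId: string): ProviderIconKey | undefined {
  for (const [pattern, icon] of BEDROCK_PREFIX_ICONS) {
    if (pattern.test(modelId)) return icon;
  }
  if (modelId.startsWith("opencode/")) return "opencode";
  return undefined;
}
```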
webdevcody
27c60658f7 Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 15:41:50 -05:00
webdevcody
fa8ae149d3 feat: enhance worktree listing by scanning external directories
- Implemented a new function to scan the .worktrees directory for worktrees that may exist outside of git's management, allowing for better detection of externally created or corrupted worktrees.
- Updated the /list endpoint to include discovered worktrees in the response, improving the accuracy of the worktree listing.
- Added logging for discovered worktrees to aid in debugging and tracking.
- Cleaned up and organized imports in the list.ts file for better maintainability.
2026-01-10 15:41:35 -05:00
DhanushSantosh
0c19beb11c fix: Set OpenCode icon to official brand color (#6366F1 indigo)
The OpenCode icon now uses the official indigo brand color (#6366F1)
from opencode.ai instead of white, making it visible in both light
and dark themes.
2026-01-11 01:54:17 +05:30
DhanushSantosh
e34e4a59e9 fix: Resolve TypeScript error assigning part.result to string field
Fix TS2322 error where finishEvent.part?.result (typed as {}) was being
assigned to result.result (typed as string).

Solution: Safely handle arbitrary result payloads by:
1. Reading raw value as unknown from Record<string, unknown>
2. Checking if it's a string, otherwise JSON.stringify()

This ensures type safety while supporting both string and object results
from the OpenCode CLI.
2026-01-11 01:35:32 +05:30
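A minimal sketch of the fix described above: read the raw value as `unknown`, keep it if it is already a string, otherwise `JSON.stringify()` it before assigning to the string field. The empty-string default for a missing value is an assumption.

```typescript
function coerceResult(part: Record<string, unknown> | undefined): string {
  const raw: unknown = part?.result;
  if (typeof raw === "string") return raw;
  // Non-string payloads (objects from the OpenCode CLI) get serialized instead
  // of being assigned directly, which is what triggered TS2322.
  return raw === undefined ? "" : JSON.stringify(raw);
}
```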
DhanushSantosh
7cc092cd59 test: Fix remaining OpenCode provider test failures
Fix all 8 remaining test failures:

1. Update executeQuery integration tests to use new OpenCode event format:
   - text events use type='text' with part.text
   - tool_call events use type='tool_call' with part containing call_id, name, args
   - tool_result events use type='tool_result' with part
   - step_finish events use type='step_finish' with part
   - Use sessionID field instead of session_id

2. Fix step_finish event handling:
   - Include result field in successful completion response
   - Check for reason === 'error' to detect failed steps
   - Provide default error message when error field is missing

3. Update model test expectations:
   - Model 'opencode/big-pickle' stays as-is (not stripped to 'big-pickle')
   - PROVIDER_PREFIXES only strips 'opencode-' prefix, not 'opencode/'

All 84 tests now pass successfully!
2026-01-11 01:23:42 +05:30
DhanushSantosh
51cd7156d2 test: Update OpenCode provider tests to use new event format
Update normalizeEvent tests to match new OpenCode API:
- text events use type='text' with part.text instead of text-delta
- tool_call events use type='tool_call' with part containing call_id, name, args
- tool_result events use type='tool_result' with part
- tool_error events use type='tool_error' with part
- step_finish events use type='step_finish' with part

Update buildCliArgs tests:
- Remove expectations for -q flag (no longer used)
- Remove expectations for -c flag (cwd set at subprocess level)
- Remove expectations for - final arg (prompt via stdin)
- Update format to 'json' instead of 'stream-json'

Remaining 8 test failures are in integration tests that use executeQuery
and require more extensive mock data updates.
2026-01-11 01:07:55 +05:30
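The event format spelled out in these test updates suggests a normalizer roughly like the sketch below. Only the `type`, `part.text`, `call_id`/`name`/`args`, and `sessionID` fields come from the commits; the output type is an assumption.

```typescript
interface OpencodeEvent {
  type: string;
  sessionID?: string;
  part?: Record<string, unknown>;
}

type NormalizedEvent =
  | { kind: "text"; text: string }
  | { kind: "tool_call"; callId: string; name: string; args: unknown }
  | { kind: "tool_result"; result: unknown }
  | { kind: "step_finish"; part: Record<string, unknown> }
  | { kind: "unknown" };

function normalizeEvent(event: OpencodeEvent): NormalizedEvent {
  const part = event.part ?? {};
  switch (event.type) {
    case "text":
      return { kind: "text", text: String(part.text ?? "") };
    case "tool_call":
      return {
        kind: "tool_call",
        callId: String(part.call_id ?? ""),
        name: String(part.name ?? ""),
        args: part.args,
      };
    case "tool_result":
      return { kind: "tool_result", result: part };
    case "step_finish":
      return { kind: "step_finish", part };
    default:
      return { kind: "unknown" };
  }
}
```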
DhanushSantosh
1dc843d2d0 Merge upstream/v0.10.0rc into feature/codex-cli
Sync with latest upstream changes:
- feat: enhance feature dialogs with planning mode tooltips
- refactor: remove kanbanCardDetailLevel from settings and UI components
- refactor: streamline feature addition in BoardView and KanbanBoard
- feat: implement dashboard view and enhance sidebar navigation
2026-01-11 00:59:36 +05:30
DhanushSantosh
4040bef4b8 feat: Add OpenCode provider integration with official brand icons
This commit integrates OpenCode as a new AI provider and updates all provider
icons with their official brand colors for better visual recognition.

**OpenCode Provider Integration:**
- Add OpencodeProvider class with CLI-based execution
- Support for OpenCode native models (opencode/) and Bedrock models
- Proper event normalization for OpenCode streaming format
- Correct CLI arguments: --format json (not stream-json)
- Event structure: type, part.text, sessionID fields

**Provider Icons:**
- Add official OpenCode icon (white square frame from opencode.ai)
- Add DeepSeek icon (blue whale #4D6BFE)
- Add Qwen icon (purple gradient #6336E7 → #6F69F7)
- Add Amazon Nova icon (AWS orange #FF9900)
- Add Mistral icon (rainbow gradient gold→red)
- Add Meta icon (blue #1877F2)
- Update existing icons with brand colors:
  * Claude: #d97757 (terra cotta)
  * OpenAI/Codex: #74aa9c (teal-green)
  * Cursor: #5E9EFF (bright blue)

**Settings UI Updates:**
- Update settings navigation to show OpenCode icon
- Update model configuration to use provider-specific icons
- Differentiate between OpenCode free models and Bedrock-hosted models
- All AI models now display their official brand logos

**Model Resolution:**
- Add isOpencodeModel() function to detect OpenCode models
- Support patterns: opencode/, opencode-*, amazon-bedrock/*
- Update getProviderFromModel to recognize opencode provider

Note: Some unit tests in opencode-provider.test.ts need updating to match
the new event structure and CLI argument format.
2026-01-11 00:56:25 +05:30
Kacper
e64a850f57 feat: enhance feature dialogs with planning mode tooltips
- Integrated Tooltip components into AddFeatureDialog, EditFeatureDialog, and MassEditDialog to provide user guidance on planning mode availability.
- Updated the rendering logic for planning mode selection to conditionally display tooltips when planning modes are not supported.
- Improved user experience by clarifying the conditions under which planning modes can be utilized.
2026-01-10 20:08:17 +01:00
webdevcody
555523df38 refactor: remove kanbanCardDetailLevel from settings and UI components
- Eliminated kanbanCardDetailLevel from the SettingsService, app state, and various UI components including BoardView and BoardControls.
- Updated related hooks and API client to reflect the removal of kanbanCardDetailLevel.
- Cleaned up imports and props associated with kanbanCardDetailLevel across the codebase for improved clarity and maintainability.
2026-01-10 13:39:45 -05:00
webdevcody
dd882139f3 refactor: streamline feature addition in BoardView and KanbanBoard
- Removed the add feature shortcut from BoardHeader and integrated the add feature functionality directly into the KanbanBoard and BoardView components.
- Added a floating action button for adding features in the KanbanBoard's backlog column.
- Updated KanbanColumn to support a footer action for enhanced UI consistency.
- Cleaned up unused imports and props related to feature addition.
2026-01-10 13:19:21 -05:00
webdevcody
a67b8c6109 feat: implement dashboard view and enhance sidebar navigation
- Added a new DashboardView component for improved project management.
- Updated sidebar navigation to redirect to the dashboard instead of the home page.
- Removed ProjectActions from the sidebar for a cleaner interface.
- Enhanced BoardView to conditionally render the WorktreePanel based on visibility settings.
- Introduced worktree panel visibility management per project in the app store.
- Updated project settings to include worktree panel visibility and favorite status.
- Adjusted navigation logic to ensure users are directed to the appropriate view based on project state.
2026-01-10 13:08:59 -05:00
webdevcody
134208dab6 Merge branch 'v0.10.0rc' of github.com:AutoMaker-Org/automaker into v0.10.0rc 2026-01-10 12:27:33 -05:00
webdevcody
887343d232 Merge branch 'main' into v0.10.0rc 2026-01-10 12:27:29 -05:00
Web Dev Cody
299b838400 Merge pull request #404 from AutoMaker-Org/fix/security-vulnerability
chore: update @modelcontextprotocol/sdk in server dep
2026-01-10 12:25:29 -05:00
Kacper
c5d0a8be7d chore: update @modelcontextprotocol/sdk in server dep 2026-01-10 17:18:08 +01:00
Shirone
fe433a84c9 Merge pull request #402 from AutoMaker-Org/fix/security-vulnerability-in-dep
fix: security vulnerability in server
2026-01-10 15:52:00 +00:00
Kacper
543aa7a27b chore: update @modelcontextprotocol/sdk in server dep 2026-01-10 16:42:26 +01:00
Shirone
36ddf0513b Merge pull request #400 from AutoMaker-Org/feat/codex-usage
feat: improve codex plan and usage detection
2026-01-10 15:29:33 +00:00
Shirone
c99883e634 fix: address PR review comments
- Add error logging to CodexProvider auth check instead of silent failure
- Fix cachedAt timestamp to return actual cache time instead of request time
- Replace misleading hardcoded rate limit values (100) with sentinel value (-1)
- Fix unused parameter warning in codex routes

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-10 16:26:12 +01:00
Shirone
604f98b08f chore: ignore .codex/config.toml to prevent API key exposure
Move .codex/config.toml to .gitignore to prevent accidental commits of
API keys. The file will remain local to each user's setup.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-10 16:18:19 +01:00
Shirone
c5009a0333 refactor: remove Codex credits handling from services and UI components
- Eliminated CodexCreditsSnapshot interface and related logic from CodexUsageService and UI components.
- Updated CodexUsageSection to display only plan type, removing credits information for a cleaner interface.
- Streamlined Codex usage formatting functions by removing unused credit formatting logic.

These changes simplify Codex usage management by focusing on plan types, enhancing clarity and maintainability.

2026-01-10 16:18:19 +01:00
Shirone
99b05d35a2 feat: update Codex services and UI components for enhanced model management
- Bumped version numbers for @automaker/server and @automaker/ui to 0.9.0 in package-lock.json.
- Introduced CodexAppServerService and CodexModelCacheService to manage communication with the Codex CLI's app-server and cache model data.
- Updated CodexUsageService to utilize app-server for fetching usage data.
- Enhanced Codex routes to support fetching available models and integrated model caching.
- Improved UI components to dynamically load and display Codex models, including error handling and loading states.
- Added new API methods for fetching Codex models and integrated them into the app store for state management.

These changes improve the overall functionality and user experience of the Codex integration, ensuring efficient model management and data retrieval.
2026-01-10 16:18:08 +01:00
Web Dev Cody
a3ecc6fe02 Merge pull request #394 from AutoMaker-Org/remove-profiles
refactor: remove AI profile functionality and related components
2026-01-09 19:25:15 -05:00
webdevcody
fc20dd5ad4 refactor: remove AI profile functionality and related components
- Deleted the AI profile management feature, including all associated views, hooks, and types.
- Updated settings and navigation components to remove references to AI profiles.
- Adjusted local storage and settings synchronization logic to reflect the removal of AI profiles.
- Cleaned up tests and utility functions that were dependent on the AI profile feature.

These changes streamline the application by eliminating unused functionality, improving maintainability and reducing complexity.
2026-01-09 19:21:30 -05:00
webdevcody
3f2707404c chore: release v0.9.0
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 18:40:15 -05:00
Web Dev Cody
fdd3a28be7 Merge pull request #385 from AutoMaker-Org/v0.9.0rc
V0.9.0rc
2026-01-09 18:37:32 -05:00
Kacper
eb94e4de72 feat: enhance CodexUsageService to fetch usage data from app-server JSON-RPC API
- Implemented a new method to retrieve usage data from the Codex app-server, providing real-time data and improving reliability.
- Updated the fetchUsageData method to prioritize app-server data over fallback methods.
- Added detailed logging for better traceability and debugging.
- Removed unused methods related to OpenAI API usage and Codex CLI requests, streamlining the service.

These changes enhance the functionality and robustness of the CodexUsageService, ensuring accurate usage statistics retrieval.
2026-01-10 00:11:42 +01:00
webdevcody
21d275c984 increase panel width 2026-01-09 17:10:49 -05:00
webdevcody
cadb19d7ed feat: reorganize global settings navigation into groups
- Refactored the global navigation structure to group settings items into distinct categories for improved organization and usability.
- Updated the settings navigation component to render these groups dynamically, enhancing the user experience.
- Changed the default initial view in the settings hook to 'model-defaults' for better alignment with the new navigation structure.

These changes streamline navigation and make it easier for users to find relevant settings.
2026-01-09 17:09:08 -05:00
webdevcody
a01241fabe Merge branch 'v0.9.0rc' of github.com:AutoMaker-Org/automaker into v0.9.0rc 2026-01-09 16:40:17 -05:00
webdevcody
89a0877bcc feat: enhance settings navigation with collapsible sub-items and local storage state
- Implemented a collapsible dropdown for navigation items in the settings view, allowing users to expand or collapse sub-items.
- Added local storage functionality to remember the open/closed state of the dropdown across sessions.
- Updated the styling and interaction logic for improved user experience and accessibility.

These changes improve the organization of navigation items and enhance user interaction within the settings view.
2026-01-09 16:40:07 -05:00
Shirone
950a97d72b Merge pull request #392 from AutoMaker-Org/fix/codex-usage-plan-detection
fix: correct Codex plan type detection from JWT auth
2026-01-09 22:34:12 +01:00
Kacper
639c1de2e8 merge: sync with latest v0.9.0rc changes 2026-01-09 22:27:06 +01:00
Kacper
254e4f630c fix: address PR review comments for type safety
- Add typeof checks for fallback claim values to prevent runtime errors
- Make openaiAuth parsing more robust with proper type validation
- Add isNaN check for date parsing to handle invalid dates
- Refactor fetchFromAuthFile to reuse getPlanTypeFromAuthFile (DRY)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 22:25:06 +01:00
Kacper
5d0fb08651 fix: correct Codex plan type detection from JWT auth
- Fix hardcoded 'plus' planType that was returned as default
- Read plan type from correct JWT path: https://api.openai.com/auth.chatgpt_plan_type
- Add subscription expiry check - override to 'free' if expired
- Use getCodexAuthPath() from @automaker/platform instead of manual path
- Remove unused imports (os, fs, path) and class properties
- Clean up code and add minimal essential logging

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 22:18:41 +01:00
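A sketch of the plan-type lookup this fix describes. The JWT claim path `https://api.openai.com/auth.chatgpt_plan_type` comes from the commit; the expiry-claim name and the decoding helper are assumptions.

```typescript
// Decode the JWT payload segment (base64url, no signature verification needed here).
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payload = token.split(".")[1] ?? "";
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

export function getPlanTypeFromToken(idToken: string): string {
  const claims = decodeJwtPayload(idToken);
  const auth = claims["https://api.openai.com/auth"] as
    | { chatgpt_plan_type?: unknown; chatgpt_subscription_active_until?: unknown } // field names partly assumed
    | undefined;

  const plan = typeof auth?.chatgpt_plan_type === "string" ? auth.chatgpt_plan_type : "free";

  // Expiry override: if the subscription end date parses and is in the past,
  // treat the account as free.
  const until = auth?.chatgpt_subscription_active_until;
  if (typeof until === "string") {
    const expiry = new Date(until);
    if (!isNaN(expiry.getTime()) && expiry.getTime() < Date.now()) return "free";
  }
  return plan;
}
```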
webdevcody
7ea64b32f3 feat: implement work mode selection in feature dialogs
- Added WorkModeSelector component to allow users to choose between 'current', 'auto', and 'custom' work modes for feature management.
- Updated AddFeatureDialog and EditFeatureDialog to utilize the new work mode functionality, replacing the previous branch selector logic.
- Enhanced useBoardActions hook to handle branch name generation based on the selected work mode.
- Adjusted settings to default to using worktrees, improving the overall feature creation experience.

These changes streamline the feature management process by providing clearer options for branch handling and worktree isolation.
2026-01-09 16:15:09 -05:00
webdevcody
93807c22c1 feat: add VITE_APP_MODE environment variable support
- Introduced VITE_APP_MODE variable in multiple files to manage application modes.
- Updated dev.mjs and docker-compose.dev.yml to set different modes for development.
- Enhanced type definitions in vite-env.d.ts to include VITE_APP_MODE options.
- Modified AutomakerLogo component to display version suffix based on the current app mode.
- Improved OS detection logic in use-os-detection.ts to utilize Electron's platform information.
- Updated ElectronAPI interface to expose platform information.

These changes provide better control over application behavior based on the mode, enhancing the development experience.
2026-01-09 15:30:49 -05:00
SuperComboGamer
b2cf17b53b feat: add project-scoped agent memory system (#351)
* memory

* feat: add smart memory selection with task context

- Add taskContext parameter to loadContextFiles for intelligent file selection
- Memory files are scored based on tag matching with task keywords
- Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
- Usage statistics influence scoring (files that helped before rank higher)
- Limit to top 5 files + always include gotchas.md
- Auto-mode passes feature title/description as context
- Chat sessions pass user message as context

This prevents loading 40+ memory files and exhausting the context window.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: enhance auto-mode service and context loader

- Improved context loading by adding task context for better memory selection.
- Updated JSON parsing logic to handle various formats and ensure robust error handling.
- Introduced file locking mechanisms to prevent race conditions during memory file updates.
- Enhanced metadata handling in memory files, including validation and sanitization.
- Refactored scoring logic for context files to improve selection accuracy based on task relevance.

These changes optimize memory file management and enhance the overall performance of the auto-mode service.

* refactor: enhance learning extraction and formatting in auto-mode service

- Improved the learning extraction process by refining the user prompt to focus on meaningful insights and structured JSON output.
- Updated the LearningEntry interface to include additional context fields for better documentation of decisions and patterns.
- Enhanced the formatLearning function to adopt an Architecture Decision Record (ADR) style, providing richer context for recorded learnings.
- Added detailed logging for better traceability during the learning extraction and appending processes.

These changes aim to improve the quality and clarity of learnings captured during the auto-mode service's operation.

* feat: integrate stripProviderPrefix utility for model ID handling

- Added stripProviderPrefix utility to various routes to ensure providers receive bare model IDs.
- Updated model references in executeQuery calls across multiple files, enhancing consistency in model ID handling.
- Introduced memoryExtractionModel in settings for improved learning extraction tasks.

These changes streamline the model ID processing and enhance the overall functionality of the provider interactions.

* feat: enhance error handling and server offline management in board actions

- Improved error handling in the handleRunFeature and handleStartImplementation functions to throw errors for better caller management.
- Integrated connection error detection and server offline handling, redirecting users to the login page when the server is unreachable.
- Updated follow-up feature logic to include rollback mechanisms and improved user feedback for error scenarios.

These changes enhance the robustness of the board actions by ensuring proper error management and user experience during server connectivity issues.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
2026-01-09 15:11:59 -05:00
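An illustrative scoring and selection pass based on the memory-selection bullets in the commit above (tag matching, 4x weight for category-name matches, usage statistics, top 5 plus gotchas.md). The `MemoryFile` shape and the exact weights other than the 4x category bonus are assumptions.

```typescript
interface MemoryFile {
  name: string;          // e.g. "terminals.md"
  tags: string[];
  timesHelpful: number;  // usage statistic: how often this file helped before
  content: string;
}

function scoreMemoryFile(file: MemoryFile, taskKeywords: string[]): number {
  let score = 0;
  const category = file.name.replace(/\.md$/, "").toLowerCase();
  for (const keyword of taskKeywords) {
    const kw = keyword.toLowerCase();
    if (file.tags.some((t) => t.toLowerCase() === kw)) score += 1;
    if (category.includes(kw)) score += 4; // category-name match weighs 4x
  }
  return score + file.timesHelpful * 0.5; // assumed usage weight
}

export function selectMemoryFiles(files: MemoryFile[], taskContext: string): MemoryFile[] {
  const keywords = taskContext.split(/\W+/).filter((w) => w.length > 2);
  const ranked = files
    .filter((f) => f.name !== "gotchas.md")
    .sort((a, b) => scoreMemoryFile(b, keywords) - scoreMemoryFile(a, keywords))
    .slice(0, 5); // top 5 plus gotchas.md, never the whole directory
  const gotchas = files.find((f) => f.name === "gotchas.md");
  return gotchas ? [...ranked, gotchas] : ranked;
}
```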
DhanushSantosh
7e768b6290 fix: restore complete E2E workflow structure
- Restore missing workflow metadata (name, on, jobs)
- Fix YAML structure that got corrupted during edits
- Ensure E2E tests will run on PRs and pushes to main/master
2026-01-09 22:44:57 +05:30
DhanushSantosh
7bdf5e4261 fix: resolve CI E2E test failures
- Fix disposed response object in Playwright route handler
- Add git user config to prevent 'empty ident' errors
- Increase server startup timeout and improve debugging
- Fix YAML indentation in E2E workflow

Resolves:
- 'Response has been disposed' error in open-existing-project test
- Git identity configuration issues in CI
- Backend server startup timing issues
2026-01-09 22:40:33 +05:30
DhanushSantosh
f6fed612df merge: resolve conflicts with upstream OpenCode support
- Combined CLI disconnection markers with OpenCode support
- Added OpenCode auth/deauth routes and API methods
- Resolved merge conflicts between feature branch and upstream v0.9.0rc
2026-01-09 22:25:46 +05:30
DhanushSantosh
3b3e282df7 merge: sync with upstream v0.9.0rc branch 2026-01-09 22:10:51 +05:30
Web Dev Cody
7f69f652fb Merge pull request #390 from AutoMaker-Org/opencode-support
Opencode support
2026-01-09 11:11:07 -05:00
DhanushSantosh
1452232409 feat: fix CLI authentication detection to prevent unnecessary browser prompts
- Fix Claude, Codex, and Cursor auth handlers to check if CLI is already authenticated
- Use same detection logic as each provider's internal checkAuth/codexAuthIndicators()
- For Codex: Check for API keys and auth files before requiring manual login
- For Cursor: Check for env var and credentials files before requiring manual auth
- For Claude: Check for cached auth tokens, settings, and credentials files
- If CLI is already authenticated: Just reconnect by removing disconnected marker
- If CLI needs auth: Tell user to manually run login command
- This prevents timeout errors when login commands can't run in non-interactive mode
2026-01-09 21:34:14 +05:30
webdevcody
4f0f56a7ba feat: enhance provider setup and authentication flow
- Refactored ProvidersSetupStep component to improve the UI and streamline provider status checks for Claude, Cursor, Codex, and OpenCode.
- Introduced auto-verification for CLI authentication and improved error handling for authentication states.
- Added loading indicators for provider status checks and enhanced user feedback for installation and authentication processes.
- Updated setup store to manage verification states and ensure accurate representation of provider statuses.

These changes enhance the user experience by providing clearer feedback and a more efficient setup process for AI providers.
2026-01-09 11:03:01 -05:00
webdevcody
a695d0db7b feat: enhance OpenCode provider tests and UI setup
- Updated unit tests for OpenCode provider to include new authentication indicators.
- Refactored ProvidersSetupStep component by removing unnecessary UI elements for better clarity.
- Improved board background persistence tests by utilizing a setup function for initializing app state.
- Enhanced settings synchronization tests to ensure proper handling of login and app state.

These changes improve the testing framework and user interface for OpenCode integration, ensuring a smoother setup and authentication process.
2026-01-09 10:08:38 -05:00
webdevcody
87c3d766c9 Merge branch 'v0.9.0rc' into opencode-support 2026-01-09 09:40:16 -05:00
webdevcody
89248001e4 feat: implement OpenCode authentication and provider setup
- Added OpenCode authentication status check to the OpencodeProvider class.
- Introduced OpenCodeAuthStatus interface to manage authentication states.
- Updated detectInstallation method to include authentication status in the response.
- Created ProvidersSetupStep component to consolidate provider setup UI, including Claude, Cursor, Codex, and OpenCode.
- Refactored setup view to streamline navigation and improve user experience.
- Enhanced OpenCode CLI integration with updated installation paths and authentication checks.

This commit enhances the setup process by allowing users to configure and authenticate multiple AI providers, improving overall functionality and user experience.
2026-01-09 09:39:46 -05:00
webdevcody
41b4869068 feat: enhance feature dialogs with OpenCode model support
- Added OpenCode model selection to AddFeatureDialog and EditFeatureDialog.
- Introduced ProfileTypeahead component for improved profile selection.
- Updated model constants to include OpenCode models and integrated them into the PhaseModelSelector.
- Enhanced planning mode options with new UI elements for OpenCode.
- Refactored existing components to streamline model handling and improve user experience.

This commit expands the functionality of the feature dialogs, allowing users to select and manage OpenCode models effectively.
2026-01-09 09:02:30 -05:00
webdevcody
be88a07329 feat: add OpenCode CLI support with status endpoint
- Implemented OpenCode CLI installation and authentication status check.
- Added new route for OpenCode status in setup routes.
- Updated HttpApiClient to include method for fetching OpenCode status.
- Enhanced system paths to include OpenCode's default installation directories.

This commit introduces functionality to check the installation and authentication status of the OpenCode CLI, improving integration with the overall system.
2026-01-08 23:15:35 -05:00
Kacper
f6738ff26c fix: update getDefaultDocumentsPath to use window.electronAPI for Electron mode
- Removed dependency on getElectronAPI and directly accessed window.electronAPI for retrieving the documents path in Electron mode.
- Added handling for web mode to return null when the documents path cannot be accessed.
- Included logging for the resolved documents path to aid in debugging.
2026-01-09 00:04:28 +01:00
Shirone
ae22f781d8 Merge pull request #370 from AutoMaker-Org/feat/subagents-skills
feat: add skills and subagents configuration support
2026-01-08 23:33:52 +01:00
Kacper
e649c4ced5 refactor: reduce code duplication in agent-discovery.ts
Addresses PR feedback to reduce duplicated code in scanAgentsDirectory
by introducing an FsAdapter interface that abstracts the differences
between systemPaths (user directory) and secureFs (project directory).

Changes:
- Extract parseAgentContent helper for parsing agent file content
- Add FsAdapter interface with exists, readdir, and readFile methods
- Create createSystemPathAdapter for user-level paths
- Create createSecureFsAdapter for project-level paths
- Refactor scanAgentsDirectory to use a single loop with the adapter

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 23:02:41 +01:00
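The adapter described in this refactor could look roughly like the sketch below. The `FsAdapter` method names come from the commit; using `node:fs` directly is a placeholder for the real `systemPaths` and `secureFs` backends, and the scan loop skips the `parseAgentContent` step.

```typescript
import { promises as fs } from "node:fs";
import path from "node:path";

interface FsAdapter {
  exists(p: string): Promise<boolean>;
  readdir(p: string): Promise<string[]>;
  readFile(p: string): Promise<string>;
}

// Placeholder adapter; createSystemPathAdapter / createSecureFsAdapter would
// wrap their respective modules behind the same interface.
function createNodeFsAdapter(): FsAdapter {
  return {
    exists: (p) => fs.access(p).then(() => true, () => false),
    readdir: (p) => fs.readdir(p),
    readFile: (p) => fs.readFile(p, "utf8"),
  };
}

// Single scan loop shared by user-level and project-level directories.
async function scanAgentsDirectory(dir: string, adapter: FsAdapter): Promise<string[]> {
  if (!(await adapter.exists(dir))) return [];
  const entries = await adapter.readdir(dir);
  const agents: string[] = [];
  for (const entry of entries.filter((e) => e.endsWith(".md"))) {
    agents.push(await adapter.readFile(path.join(dir, entry)));
  }
  return agents;
}
```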
Kacper
50da1b401c Merge branch 'v0.9.0rc' into feat/subagents-skills
Resolved conflict in agent-service.ts by keeping both:
- agents parameter for custom subagents (from our branch)
- thinkingLevel and reasoningEffort parameters (from v0.9.0rc)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 22:57:09 +01:00
DhanushSantosh
33d02d1df8 Merge branch 'v0.9.0rc' of https://github.com/AutoMaker-Org/automaker into feature/codex-cli 2026-01-09 03:14:06 +05:30
Shirone
f3041190fa Merge pull request #383 from AutoMaker-Org/fix/electron-initial-auth
fix: add API key header to verifySession for Electron auth
2026-01-08 22:39:19 +01:00
webdevcody
55c2530d5a Merge branch 'v0.9.0rc' into opencode-support 2026-01-08 15:40:06 -05:00
webdevcody
5fbc7dd13e opencode support 2026-01-08 15:30:20 -05:00
DhanushSantosh
b2e5ff1460 fix: centralize model prefix handling to prevent provider errors
Moves prefix stripping from individual providers to AgentService/IdeationService
and adds validation to ensure providers receive bare model IDs. This prevents
bugs like the Codex CLI receiving "codex-gpt-5.1-codex-max" instead of the
expected "gpt-5.1-codex-max".

- Add validateBareModelId() helper with fail-fast validation
- Add originalModel field to ExecuteOptions for logging
- Update all providers to validate model has no prefix
- Centralize prefix stripping in service layer
- Remove redundant prefix stripping from individual providers

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-09 01:40:29 +05:30
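A sketch of the centralized prefix handling and fail-fast validation described above. The prefix list is assembled from prefixes mentioned elsewhere in this history (`codex-`, `cursor-`, `opencode-`) and may not match the project's actual constant.

```typescript
const PROVIDER_PREFIXES = ["codex-", "cursor-", "opencode-"];

// Called in the service layer before handing the model ID to a provider.
export function stripProviderPrefix(modelId: string): string {
  for (const prefix of PROVIDER_PREFIXES) {
    if (modelId.startsWith(prefix)) return modelId.slice(prefix.length);
  }
  return modelId;
}

// Fail fast if a prefixed ID leaks through to a provider,
// e.g. "codex-gpt-5.1-codex-max" instead of "gpt-5.1-codex-max".
export function validateBareModelId(modelId: string): string {
  const offending = PROVIDER_PREFIXES.find((p) => modelId.startsWith(p));
  if (offending) {
    throw new Error(
      `Provider received prefixed model ID "${modelId}"; expected bare ID like "${modelId.slice(offending.length)}"`,
    );
  }
  return modelId;
}
```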
Shirone
0f9232ea33 fix: add API key header to verifySession for Electron auth
The verifySession() function was not including the X-API-Key header
when making requests to /api/settings/status, causing Electron mode
to fail authentication on app startup despite having a valid API key.

This resulted in users seeing "You've been logged out" screen
immediately after launching the Electron app.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 21:00:22 +01:00
DhanushSantosh
a815be6a20 fix: update model resolver for codex- prefix and fix thinking/reasoning separation
- Add codex- prefix support in model resolver
- Fix modelSupportsThinking() to properly detect provider types
- Update CODEX_MODEL_PREFIXES to include codex- prefix
2026-01-09 00:45:31 +05:30
DhanushSantosh
7b4667eba9 fix: add provider prefixes to CLI models for clear separation
- Add 'codex-' prefix to all Codex CLI model IDs
- Add 'cursor-' prefix to Cursor CLI GPT model IDs
- Update provider-utils.ts to use prefix-based matching
- Update UI components to use prefixed model IDs
- Fix model routing to prevent Cursor picking up Codex models
2026-01-09 00:03:49 +05:30
DhanushSantosh
08ccf2632a fix: expand Codex model routing to cover ALL Codex models
- Update isCursorModel to exclude ALL Codex models from Cursor routing
- Check against CODEX_MODEL_CONFIG_MAP for comprehensive exclusion
- Includes: gpt-5.2-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2, gpt-5.1
- Also excludes overlapping models like gpt-5.2 and gpt-5.1 that exist in both maps
- Update test to expect CodexProvider for gpt-5.2 (correct behavior)

This ensures ALL Codex CLI models route to Codex provider, not Cursor.
Previously only gpt-5.1-codex-* and gpt-5.2-codex-* were excluded.
2026-01-08 23:17:35 +05:30
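An illustrative version of the routing guard this fix describes: anything present in the Codex map is excluded from Cursor routing, including IDs that appear in both maps. The map contents here are small stand-ins, not the project's real tables.

```typescript
// Stand-in maps with a couple of entries for the sake of the example.
const CURSOR_MODEL_MAP: Record<string, string> = { "gpt-5.2": "gpt-5.2", "gpt-5.1": "gpt-5.1" };
const CODEX_MODEL_CONFIG_MAP: Record<string, unknown> = {
  "gpt-5.2-codex": {},
  "gpt-5.1-codex-max": {},
  "gpt-5.1-codex-mini": {},
  "gpt-5.2": {},
  "gpt-5.1": {},
};

export function isCursorModel(modelId: string): boolean {
  // Anything Codex knows about routes to the Codex provider, even if the
  // same ID also appears in the Cursor map.
  if (modelId in CODEX_MODEL_CONFIG_MAP) return false;
  return modelId in CURSOR_MODEL_MAP;
}
```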
DhanushSantosh
7583598a05 fix: differentiate Codex CLI models from Cursor CLI models
- Fix isCursorModel to exclude Codex-specific models (gpt-5.1-codex-*, gpt-5.2-codex-*)
- These models should route to Codex provider, not Cursor provider
- Add CODEX_YOLO_FLAG constant for --dangerously-bypass-approvals-and-sandbox
- Always use YOLO flag in codex-provider for full permissions
- Simplify codex CLI args to minimal set with YOLO flag
- Update tests to reflect new behavior with YOLO flag

This fixes the bug where selecting a Codex model (e.g., gpt-5.1-codex-max)
was incorrectly spawning cursor-agent instead of codex exec.

The root cause was:
1. Cursor provider had higher priority (10) than Codex (5)
2. isCursorModel() returned true for Codex models in CURSOR_MODEL_MAP
3. Models like gpt-5.1-codex-max routed to Cursor instead of Codex

The fix:
1. isCursorModel now excludes Codex-specific model IDs
2. Codex always uses --dangerously-bypass-approvals-and-sandbox flag
2026-01-08 23:03:03 +05:30
DhanushSantosh
7e68691e92 fix: remove sandbox and approval policy dropdowns from Codex settings UI
- Remove SANDBOX_OPTIONS and APPROVAL_OPTIONS from codex-settings.tsx
- Remove Select components for sandbox mode and approval policy
- Remove codexSandboxMode and codexApprovalPolicy from CodexSettingsProps interface
- Update codex-settings-tab.tsx to pass only simplified props
- Codex now always runs with full permissions (--dangerously-bypass-approvals-and-sandbox)

The UI no longer shows sandbox or approval settings since Codex always uses full permissions.
2026-01-08 22:27:18 +05:30
DhanushSantosh
4b2034b834 fix: Codex CLI always runs with full permissions (--dangerously-bypass-approvals-and-sandbox)
- Always use CODEX_YOLO_FLAG (--dangerously-bypass-approvals-and-sandbox) for Codex
- Remove all conditional logic - no sandbox/approval config, no config overrides
- Simplify codex-provider.ts to always run Codex in full-permissions mode
- Codex always gets: full access, no approvals, web search enabled, images enabled
- Update services to apply full-permission settings automatically for Codex models
- Remove sandbox and approval controls from UI settings page
- Update tests to reflect new behavior (some pre-existing tests disabled/updated)

Note: 3 pre-existing tests disabled/skipped due to old behavior expectations (require separate PR to update)
2026-01-08 21:56:32 +05:30
DhanushSantosh
4dcf54146c feat: add reasoning effort support for Codex models
- Add ReasoningEffortSelector component for UI selection
- Integrate reasoning effort in feature creation/editing dialogs
- Add reasoning effort support to phase model selector
- Update agent service and board actions to handle reasoning effort
- Add reasoning effort fields to feature and settings types
- Update model selector and agent info panel with reasoning effort display
- Enhance agent context parser for reasoning effort processing

Reasoning effort allows fine-tuned control over Codex model reasoning
capabilities, providing options from 'none' to 'xhigh' for different
task complexity requirements.
2026-01-08 20:43:36 +05:30
DhanushSantosh
8a9715adef fix: AI profile display issues for Codex models
- Fix Codex profiles showing as 'Claude Sonnet' in UI
- Add proper Codex model display in profile cards and selectors
- Add useEffect to sync profile form data when editing profiles
- Update provider badges to show 'Codex' with OpenAI icon
- Enhance profile selection validation across feature dialogs
- Add getCodexModelLabel support to display functions

The issue was that profile display functions only handled Claude and Cursor
providers, causing Codex profiles to fallback to 'sonnet' display.
2026-01-08 17:13:45 +05:30
DhanushSantosh
d253d494ba feat: enhance Codex usage tracking with multiple data sources
- Try OpenAI API if API key is available
- Parse rate limit info from Codex CLI responses
- Extract plan type from Codex auth file
- Provide helpful configuration message when usage unavailable
2026-01-08 15:13:05 +05:30
DhanushSantosh
d1f7794afa Merge branch 'main' of https://github.com/AutoMaker-Org/automaker into feature/codex-cli 2026-01-08 14:38:23 +05:30
DhanushSantosh
4d80a93710 refactor: update Codex models to latest OpenAI models
- Replace gpt-5-codex, gpt-5-codex-mini, codex-1, codex-mini-latest, gpt-5
- Add gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2, gpt-5.1
- Update thinking support for all Codex models
2026-01-08 14:22:35 +05:30
webdevcody
d70faf3b28 Merge branch 'v0.9.0rc' into feat/subagents-skills 2026-01-08 00:33:30 -05:00
Web Dev Cody
271749a5a4 Merge pull request #375 from yumesha/main
fix background image and create test in case background image fails again
2026-01-08 00:26:31 -05:00
Web Dev Cody
d1bd131cab Merge pull request #378 from AutoMaker-Org/remove-sandbox-as-it-is-broken
completely remove sandbox related code as the downstream libraries do …
2026-01-08 00:25:30 -05:00
webdevcody
96fe90ca65 chore: remove worktree integration E2E test file
- Deleted the worktree integration test file to streamline the test suite and remove obsolete tests. This change helps maintain focus on relevant test cases and improves overall test management.
2026-01-08 00:23:00 -05:00
webdevcody
fd5f7b873a fix: improve worktree branch handling in list route
- Updated the logic in the createListHandler to ensure that the branch name is correctly assigned, especially for the main worktree when it may be missing.
- Added checks to handle cases where the worktree directory might not exist, ensuring that removed worktrees are accurately tracked.
- Enhanced the final worktree entry handling to account for scenarios where the output does not end with a blank line, improving robustness.
2026-01-08 00:13:12 -05:00
webdevcody
959467de90 feat: add UI test command and clean up integration test
- Introduced a new npm script "test:ui" for running UI tests in the apps/ui workspace.
- Removed unnecessary login screen handling from the worktree integration test to streamline the test flow.
2026-01-08 00:07:23 -05:00
webdevcody
69434fe356 feat: enhance login view with retry mechanism for server checks
- Added useRef to manage AbortController for retry requests in the LoginView component.
- Implemented logic to abort any ongoing retry requests before initiating a new server check, improving error handling and user experience during login attempts.
2026-01-07 23:18:52 -05:00
webdevcody
dc264bd164 feat: update E2E fixture settings and improve test repository initialization
- Changed the E2E settings to enable the use of worktrees for better test isolation.
- Modified the test repository initialization to explicitly set the initial branch to 'main', ensuring compatibility across different git versions and avoiding CI environment discrepancies.
2026-01-07 23:10:02 -05:00
webdevcody
8992f667c7 refactor: clean up settings service and improve E2E fixture descriptions
- Removed the redundant call to ignore empty array overwrite for 'enabledCodexModels' in the SettingsService.
- Reformatted the description of the 'Heavy Task' profile in the E2E fixture setup script for better readability.
2026-01-07 23:04:27 -05:00
webdevcody
eb627ef323 feat: enhance E2E test setup and error handling
- Updated Playwright configuration to explicitly unset ALLOWED_ROOT_DIRECTORY for unrestricted testing paths.
- Improved E2E fixture setup script to reset server settings to a known state, ensuring test isolation.
- Enhanced error handling in ContextView and WelcomeView components to reset state and provide user feedback on failures.
- Updated tests to ensure proper navigation and visibility checks during logout processes, improving reliability.
2026-01-07 23:01:57 -05:00
webdevcody
d8cdb0bf7a feat: enhance global settings update with data loss prevention
- Added safeguards to prevent overwriting non-empty arrays with empty arrays during global settings updates, specifically for the 'projects' field.
- Implemented logging for updates to assist in diagnosing accidental wipes of critical settings.
- Updated tests to verify that projects are preserved during logout transitions and that theme changes are ignored if a project wipe is attempted.
- Enhanced the settings synchronization logic to ensure safe handling during authentication state changes.
2026-01-07 21:38:46 -05:00
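A minimal sketch of the safeguard described above: an incoming empty array never replaces a non-empty stored one for protected fields. The protected-field list and settings shape are assumptions; only `projects` is named in the commit.

```typescript
const PROTECTED_ARRAY_FIELDS = ["projects"];

export function mergeGlobalSettings(
  current: Record<string, unknown>,
  incoming: Record<string, unknown>,
): Record<string, unknown> {
  const next = { ...current, ...incoming };
  for (const field of PROTECTED_ARRAY_FIELDS) {
    const prev = current[field];
    const proposed = incoming[field];
    if (Array.isArray(prev) && prev.length > 0 && Array.isArray(proposed) && proposed.length === 0) {
      // Log so accidental wipes can be diagnosed later.
      console.warn(`Ignoring empty-array overwrite of "${field}" during settings update`);
      next[field] = prev; // keep the existing data instead of wiping it
    }
  }
  return next;
}
```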
webdevcody
8c68c24716 feat: implement Codex CLI authentication check and integrate with provider
- Added a new utility for checking Codex CLI authentication status using the 'codex login status' command.
- Integrated the authentication check into the CodexProvider's installation detection and authentication methods.
- Updated Codex CLI status display in the UI to reflect authentication status and method.
- Enhanced error handling and logging for better debugging during authentication checks.
- Refactored related components to ensure consistent handling of authentication across the application.
2026-01-07 21:06:39 -05:00
antdev
9b302583c4 fix background image part 2 2026-01-08 09:23:57 +08:00
webdevcody
47c2d795e0 chore: update e2e test results upload configuration
- Renamed the upload step to clarify that it includes screenshots, traces, and videos.
- Changed the condition for uploading test results to always run, ensuring artifacts are uploaded regardless of test outcome.
- Added a new option to ignore if no files are found during the upload process.
2026-01-07 20:00:52 -05:00
yumesha
d608d8c2d4 Merge branch 'AutoMaker-Org:main' into main 2026-01-08 08:52:59 +08:00
webdevcody
f737b1f30a merge in v0.9.0 2026-01-07 18:22:32 -05:00
webdevcody
8b36fce7d7 refactor: improve test stability and clarity in various test cases
- Updated the 'Add Context Image' test to simplify file verification by relying on UI visibility instead of disk checks.
- Enhanced the 'Feature Manual Review Flow' test with better project setup and API interception to ensure consistent test conditions.
- Improved the 'AI Profiles' test by replacing arbitrary timeouts with dynamic checks for profile count.
- Refined the 'Project Creation' and 'Open Existing Project' tests to ensure proper project visibility and settings management during tests.
- Added mechanisms to prevent settings hydration from restoring previous project states, ensuring tests run in isolation.
- Removed unused test image from fixtures to clean up the repository.
2026-01-07 18:07:27 -05:00
DhanushSantosh
30a2a1c921 feat: add unified usage popover with Claude and Codex tabs
- Created combined UsagePopover component with tab switching between providers
- Added Codex usage API endpoint and service (returns not available message)
- Updated BoardHeader to show single usage button for both providers
- Enhanced type definitions for Codex usage with primary/secondary rate limits
- Wired up Codex usage API in HTTP client

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-08 03:37:37 +05:30
webdevcody
763f9832c3 feat: enhance test setup with splash screen handling and sandbox warnings
- Added `skipSandboxWarning` option to project setup functions to streamline testing.
- Implemented logic to disable the splash screen during tests by setting `automaker-splash-shown` in sessionStorage.
- Introduced a new package.json for a test project and added a test image to the fixtures for improved testing capabilities.
2026-01-07 16:31:48 -05:00
webdevcody
11b1bbc143 feat: implement splash screen handling in navigation and interactions
- Added a new function `waitForSplashScreenToDisappear` to manage splash screen visibility, ensuring it does not block user interactions.
- Integrated splash screen checks in various navigation functions and interaction methods to enhance user experience by waiting for the splash screen to disappear before proceeding.
- Updated test setup to disable the splash screen during tests for consistent testing behavior.
2026-01-07 16:10:17 -05:00
webdevcody
7176d3e513 fix: enhance sandbox compatibility checks in sdk-options and improve login view effect handling
- Added additional cloud storage path patterns for macOS and Linux to the checkSandboxCompatibility function, ensuring better compatibility with sandbox environments.
- Revised the login view to simplify the initial server/session check logic, removing unnecessary ref guard and improving responsiveness during component unmounting.
2026-01-07 15:54:17 -05:00
DhanushSantosh
9c3ba34b51 Merge remote-tracking branch 'upstream/v0.9.0rc' into feature/codex-cli 2026-01-08 01:57:27 +05:30
DhanushSantosh
ff3af937da fix: update event type in CodexProvider from threadCompleted to turnCompleted
- Changed the event type from 'thread.completed' to 'turn.completed' in the CODEX_EVENT_TYPES constant and its usage within the CodexProvider class.
- This update aligns the event handling with the intended functionality, ensuring correct event processing.
2026-01-08 01:54:02 +05:30
webdevcody
b9fcb916a6 fix: add missing checkSandboxCompatibility function to sdk-options
The codex-provider.ts imports this function but it was missing from
sdk-options.ts. This adds the implementation that checks if sandbox
mode is compatible with the working directory (disables sandbox for
cloud storage paths).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 15:13:52 -05:00
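A sketch of what a compatibility check like the one described here might do: disable sandbox mode when the working directory looks like a cloud-synced path. The specific path patterns are illustrative guesses, not the project's actual list.

```typescript
const CLOUD_STORAGE_PATTERNS = [
  /Dropbox/i,
  /Google Drive/i,
  /OneDrive/i,
  /iCloud/i,
  /Library\/CloudStorage/i, // macOS cloud-provider mounts
];

export function checkSandboxCompatibility(
  workingDirectory: string,
): { sandboxEnabled: boolean; reason?: string } {
  const match = CLOUD_STORAGE_PATTERNS.find((p) => p.test(workingDirectory));
  if (match) {
    return {
      sandboxEnabled: false,
      reason: `Sandbox disabled: working directory matches cloud-storage pattern ${match}`,
    };
  }
  return { sandboxEnabled: true };
}
```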
webdevcody
cfa1f114fd Merge branch 'v0.9.0rc' into remove-sandbox-as-it-is-broken 2026-01-07 15:01:31 -05:00
Web Dev Cody
761929ea8e Merge pull request #367 from DhanushSantosh/feature/codex-cli
Codex CLI Implementation
2026-01-07 14:44:48 -05:00
webdevcody
4d36e66deb refactor: update session cookie options and improve login view authentication flow
- Revised SameSite attribute for session cookies to clarify its behavior in documentation.
- Streamlined cookie clearing logic in the authentication route by utilizing `getSessionCookieOptions()`.
- Enhanced the login view to support aborting server checks, improving responsiveness during component unmounting.
- Ensured proper handling of server check retries with abort signal integration for better user experience.
2026-01-07 14:33:55 -05:00
webdevcody
e58e389658 feat: implement settings migration from localStorage to server
- Added logic to perform settings migration, merging localStorage data with server settings if necessary.
- Introduced `localStorageMigrated` flag to prevent re-migration on subsequent app loads.
- Updated `useSettingsMigration` hook to handle migration and hydration of settings.
- Ensured localStorage values are preserved post-migration for user flexibility.
- Enhanced documentation within the migration logic for clarity.
2026-01-07 14:29:32 -05:00
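A rough shape of the one-time migration described above, gated by the `localStorageMigrated` flag and leaving localStorage values in place afterwards. The storage key, the API surface, and the server-wins merge precedence are assumptions.

```typescript
interface SettingsApi {
  getSettings(): Promise<Record<string, unknown> & { localStorageMigrated?: boolean }>;
  updateSettings(patch: Record<string, unknown>): Promise<void>;
}

export async function migrateLocalSettings(api: SettingsApi, storageKey = "automaker-settings") {
  const server = await api.getSettings();
  if (server.localStorageMigrated) return server; // already migrated on a previous load

  const raw = localStorage.getItem(storageKey);
  const local = raw ? (JSON.parse(raw) as Record<string, unknown>) : {};

  // Assumed precedence: server values win on conflict, local fills the gaps.
  // localStorage itself is intentionally left untouched after migration.
  const merged = { ...local, ...server, localStorageMigrated: true };
  await api.updateSettings(merged);
  return merged;
}
```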
DhanushSantosh
821827f850 refactor: simplify config value formatting in CodexProvider
- Removed unnecessary JSON.stringify conversion for string values in formatConfigValue function, streamlining the value formatting process.
- This change enhances code clarity and reduces complexity in the configuration handling of the CodexProvider.
2026-01-08 00:27:11 +05:30
DhanushSantosh
9d8464cceb feat: enhance CodexProvider argument handling and configuration
- Added approval policy and web search features to the CodexProvider's argument construction, improving flexibility in command execution.
- Updated unit tests to validate the new configuration handling for approval and search features, ensuring accurate argument parsing.

These changes enhance the functionality of the CodexProvider, allowing for more dynamic command configurations and improving test coverage.
2026-01-08 00:16:57 +05:30
DhanushSantosh
48a4fa5c6c refactor: streamline argument handling in CodexProvider
- Reorganized argument construction in CodexProvider to separate pre-execution arguments from global flags, improving clarity and maintainability.
- Updated unit tests to reflect changes in argument order, ensuring correct validation of approval and search indices.

These changes enhance the structure of the CodexProvider's command execution process and improve test reliability.
2026-01-08 00:04:52 +05:30
webdevcody
70c04b5a3f feat: update session cookie options and enhance authentication flow
- Changed SameSite attribute for session cookies from 'strict' to 'lax' to allow cross-origin fetches, improving compatibility with various client requests.
- Updated cookie clearing logic in the authentication route to use `res.cookie()` for better reliability in cross-origin environments.
- Refactored the login view to implement a state machine for managing authentication phases, enhancing clarity and maintainability.
- Introduced a new logged-out view to inform users of session expiration and provide options to log in or retry.
- Added account and security sections to the settings view, allowing users to manage their account and security preferences more effectively.
2026-01-07 12:55:23 -05:00
DhanushSantosh
24ea10e818 feat: enhance Codex authentication and API key management
- Introduced a new method to check Codex authentication status, allowing for better handling of API keys and OAuth tokens.
- Updated API key management to include OpenAI, enabling users to manage their keys more effectively.
- Enhanced the CodexProvider to support session ID tracking and deduplication of text blocks in assistant messages.
- Improved error handling and logging in authentication routes, providing clearer feedback to users.

These changes improve the overall user experience and security of the Codex integration, ensuring smoother authentication processes and better management of API keys.
2026-01-07 22:49:30 +05:30
webdevcody
927451013c feat: add sandbox risk confirmation and rejection screens
- Introduced `SandboxRiskDialog` to prompt users about risks when running outside a containerized environment.
- Added `SandboxRejectionScreen` for users who deny the sandbox risk confirmation, providing options to reload or restart the app.
- Updated settings view and danger zone section to manage sandbox warning preferences.
- Implemented a new API endpoint to check if the application is running in a containerized environment.
- Enhanced state management to handle sandbox warning settings across the application.
2026-01-07 10:41:43 -05:00
webdevcody
0d206fe75f feat: enhance login view with session verification and loading state
- Implemented session verification on component mount using exponential backoff to handle server live reload scenarios.
- Added loading state to the login view while checking for an existing session, improving user experience.
- Removed unused setup wizard navigation from the API keys section for cleaner code.
2026-01-07 10:18:06 -05:00
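A minimal sketch of the exponential-backoff session check on mount, assuming a hypothetical GET /api/auth/session endpoint; the real hook and endpoint names may differ.

```ts
import { useEffect, useState } from "react";

export function useSessionCheck(maxAttempts = 5) {
  const [checking, setChecking] = useState(true);
  const [hasSession, setHasSession] = useState(false);

  useEffect(() => {
    let cancelled = false;

    const check = async (attempt: number): Promise<void> => {
      try {
        const res = await fetch("/api/auth/session", { credentials: "include" });
        if (!cancelled && res.ok) {
          setHasSession(true);
          setChecking(false);
          return;
        }
      } catch {
        // The server may still be restarting during live reload; fall through to retry.
      }
      if (cancelled) return;
      if (attempt >= maxAttempts) {
        setChecking(false);
        return;
      }
      // Exponential backoff: 500ms, 1s, 2s, 4s, ...
      await new Promise((r) => setTimeout(r, 500 * 2 ** attempt));
      return check(attempt + 1);
    };

    void check(0);
    return () => {
      cancelled = true;
    };
  }, [maxAttempts]);

  return { checking, hasSession };
}
```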
webdevcody
11accac5ae feat: implement API-first settings management and description history tracking
- Migrated settings persistence from localStorage to an API-first approach, ensuring consistency between Electron and web modes.
- Introduced `useSettingsSync` hook for automatic synchronization of settings to the server with debouncing.
- Enhanced feature update logic to track description changes with a history, allowing for better management of feature descriptions.
- Updated various components and services to utilize the new settings structure and description history functionality.
- Removed persist middleware from Zustand store, streamlining state management and improving performance.
2026-01-07 10:05:54 -05:00
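A minimal sketch of the debounced sync idea behind `useSettingsSync`, assuming a hypothetical PUT /api/settings endpoint; the actual hook likely handles errors and auth headers as well.

```ts
import { useEffect, useRef } from "react";

export function useSettingsSync<T>(settings: T, delayMs = 750) {
  const timer = useRef<ReturnType<typeof setTimeout> | null>(null);

  useEffect(() => {
    if (timer.current) clearTimeout(timer.current);
    // Debounce: only push to the server once changes settle for `delayMs`.
    timer.current = setTimeout(() => {
      void fetch("/api/settings", {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(settings),
      });
    }, delayMs);

    return () => {
      if (timer.current) clearTimeout(timer.current);
    };
  }, [settings, delayMs]);
}
```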
DhanushSantosh
2250367ddc chore: update npm audit level in CI workflow
- Changed the npm audit command in the security audit workflow to check for critical vulnerabilities instead of moderate ones.
- This adjustment enhances the security posture of the application by ensuring that critical issues are identified and addressed promptly.
2026-01-07 20:24:49 +05:30
DhanushSantosh
fe305bbc81 feat: add vision support validation for image processing
- Introduced a new method in ProviderFactory to check if a model supports vision/image input.
- Updated AgentService and AutoModeService to validate vision support before processing images, throwing an error if the model does not support it.
- Enhanced error messages to guide users on switching models or removing images if necessary.

These changes improve the robustness of image processing by ensuring compatibility with the selected models.
2026-01-07 20:12:39 +05:30
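A minimal sketch of the vision-support check described above; the model table and helper name are illustrative, not the ProviderFactory's actual API.

```ts
interface ModelConfig {
  id: string;
  supportsVision: boolean;
}

// Hypothetical lookup table for illustration only.
const MODEL_CONFIGS: Record<string, ModelConfig> = {
  "claude-sonnet": { id: "claude-sonnet", supportsVision: true },
  "cursor-cli": { id: "cursor-cli", supportsVision: false },
};

export function assertVisionSupport(modelId: string, imageCount: number): void {
  if (imageCount === 0) return;
  const config = MODEL_CONFIGS[modelId];
  if (!config?.supportsVision) {
    throw new Error(
      `Model "${modelId}" does not support image input. ` +
        `Switch to a vision-capable model or remove the attached images.`
    );
  }
}
```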
DhanushSantosh
92195340c6 feat: enhance authentication handling and API key validation
- Added optional API keys for OpenAI and Cursor to the .env.example file.
- Implemented API key validation in CursorProvider to ensure valid keys are used.
- Introduced rate limiting in Claude and Codex authentication routes to prevent abuse.
- Created secure environment handling for authentication without modifying process.env.
- Improved error handling and logging for authentication processes, enhancing user feedback.

These changes improve the security and reliability of the authentication mechanisms across the application.
2026-01-07 19:26:42 +05:30
webdevcody
1316ead8c8 completely remove sandbox-related code, as the downstream libraries do not work with it on various operating systems 2026-01-07 08:54:14 -05:00

DhanushSantosh
03b33106e0 fix: replace git+ssh URLs with https in package-lock.json
- Configure git to use HTTPS for GitHub URLs globally
- Run npm run fix:lockfile to rewrite package-lock.json
- Resolves lint-lockfile failure in CI/CD environments
2026-01-07 19:09:26 +05:30
DhanushSantosh
251f0fd88e chore: update CI configuration and enhance test stability
- Added deterministic API key and environment variables in e2e-tests.yml to ensure consistent test behavior.
- Refactored CodexProvider tests to improve type safety and mock handling, ensuring reliable test execution.
- Updated provider-factory tests to mock installation detection for CodexProvider, enhancing test isolation.
- Adjusted Playwright configuration to conditionally use external backend, improving flexibility in test environments.
- Enhanced kill-test-servers script to handle external server scenarios, ensuring proper cleanup of test processes.

These changes improve the reliability and maintainability of the testing framework, leading to a more stable development experience.
2026-01-07 19:09:26 +05:30
DhanushSantosh
96f154d440 refactor: enhance type safety and error handling in navigation and project creation
- Updated navigation functions to cast route paths correctly, improving type safety.
- Added error handling for the templates API in project creation hooks to ensure robustness.
- Refactored task progress panel to improve type handling for feature data.
- Introduced type checks and default values in various components to enhance overall stability.

These changes improve the reliability and maintainability of the application, ensuring better user experience and code quality.
2026-01-07 19:09:25 +05:30
DhanushSantosh
27c6d5a3bb refactor: improve error handling and CLI integration
- Updated CodexProvider to read prompts from stdin to prevent shell escaping issues.
- Enhanced AgentService to handle streamed error messages from providers, ensuring a consistent user experience.
- Modified UI components to display error messages clearly, including visual indicators for errors in chat bubbles.
- Updated CLI status handling to support both Claude and Codex APIs, improving compatibility and user feedback.

These changes enhance the robustness of the application and improve the user experience during error scenarios.
2026-01-07 19:09:25 +05:30
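A minimal sketch of passing the prompt via stdin instead of command-line arguments; the CLI name and flags are placeholders, the point is only that no shell escaping is involved.

```ts
import { spawn } from "node:child_process";

export function runWithPromptOnStdin(command: string, args: string[], prompt: string) {
  // No shell is involved and the prompt never appears on the command line, so
  // quotes, backticks, and newlines in the prompt cannot break the invocation.
  const child = spawn(command, args, { stdio: ["pipe", "pipe", "inherit"] });
  child.stdin?.write(prompt);
  child.stdin?.end();
  return child;
}
```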
DhanushSantosh
a57dcc170d feature/codex-cli 2026-01-07 19:09:24 +05:30
Shirone
5c601ff200 feat: implement subagents configuration management
- Added a new function to retrieve subagents configuration from settings, allowing users to enable/disable subagents and select sources for loading them.
- Updated the AgentService to incorporate subagents configuration, dynamically adding tools based on the settings.
- Enhanced the UI components to manage subagents, including a settings section for enabling/disabling and selecting sources.
- Introduced a new hook for managing subagents settings state and interactions.

These changes improve the flexibility and usability of subagents within the application, enhancing user experience and configuration options.
2026-01-07 10:32:42 +01:00
webdevcody
4d4025ca06 fix: improve WebSocket connection handling in Electron mode
- Updated the logic for establishing WebSocket connections in Electron mode to handle cases where the API key is unavailable.
- Added fallback to wsToken/cookie authentication for real-time event updates, enhancing reliability in external server scenarios.
- Improved logging for better debugging of WebSocket connection issues.
2026-01-06 22:29:08 -05:00
Web Dev Cody
77f253c7bd Merge pull request #376 from AutoMaker-Org/electron-with-docker-api
feat: add support for external server mode with Docker integration
2026-01-06 21:12:49 -05:00
Shirone
fe13d47b24 refactor: improve agent file model validation and settings source deduplication
- Enhanced model parsing in agent discovery to validate against allowed values and log warnings for invalid models.
- Refactored settingSources construction in AgentService to utilize Set for automatic deduplication, simplifying the merging of user and project settings with skills sources.
- Updated tests to reflect changes in allowedTools for improved functionality.

These changes enhance the robustness of agent configuration and streamline settings management.
2026-01-07 00:05:33 +01:00
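A minimal sketch of the Set-based deduplication mentioned above; the source names are assumed, and the real merge in AgentService may combine more inputs.

```ts
type SettingSource = "user" | "project" | string;

export function mergeSettingSources(
  userSources: SettingSource[],
  projectSources: SettingSource[],
  skillsSources: SettingSource[]
): SettingSource[] {
  // A Set drops duplicates automatically, so the merge keeps the first
  // occurrence of each source and needs no manual "already added" bookkeeping.
  return [...new Set([...userSources, ...projectSources, ...skillsSources])];
}
```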
Shirone
33acf502ed refactor: enhance skills and subagents settings management
- Updated useSkillsSettings and useSubagents hooks to improve state management and error handling.
- Added new settings API methods for skills configuration and agent discovery.
- Refactored app-store to include enableSkills and skillsSources state management.
- Enhanced settings migration to sync skills configuration with the server.

These changes streamline the management of skills and subagents, ensuring better integration and user experience.
2026-01-06 23:43:31 +01:00
webdevcody
66557b2093 feat: add support for external server mode with Docker integration
- Introduced a new `docker-compose.dev-server.yml` for running the backend API in a container, enabling local Electron to connect to it.
- Updated `dev.mjs` to include a new option for launching the Docker server container.
- Enhanced the UI application to support external server mode, allowing session-based authentication and adjusting routing logic accordingly.
- Added utility functions to check and cache the external server mode status for improved performance.
- Updated various components to handle authentication and routing based on the server mode.
2026-01-06 17:26:25 -05:00
Web Dev Cody
c713cef484 Merge pull request #360 from AutoMaker-Org/feat/mass-edit-backlog-features
feat: add mass edit feature for backlog kanban cards
2026-01-06 16:17:14 -05:00
webdevcody
5f7cbd3435 refactor: improve logging format in launchDockerContainers function
- Updated the logging format in the launchDockerContainers function to enhance readability by breaking long lines into multiple lines. This change improves the clarity of log messages when starting Docker containers.
2026-01-06 16:14:10 -05:00
webdevcody
a4290b5863 feat: enhance development environment with Docker support and UI improvements
- Introduced a new `docker-compose.dev.yml` for development mode, enabling live reload and improved container management.
- Updated `dev.mjs` to utilize `launchDockerDevContainers` for starting development containers with live reload capabilities.
- Refactored `printModeMenu` to differentiate between development and production Docker options.
- Enhanced the `BoardView` and `KanbanBoard` components by streamlining props and improving UI interactions.
- Removed the `start.mjs` script, consolidating production launch logic into `dev.mjs` for a more unified approach.
2026-01-06 16:11:29 -05:00
webdevcody
f9b0a38642 Merge branch 'main' into feat/mass-edit-backlog-features 2026-01-06 14:38:59 -05:00
antdev
46636cf385 fix background image and create a test in case the background image fails again 2026-01-07 02:04:22 +08:00
antdev
e24a6894a5 Merge remote-tracking branch 'refs/remotes/origin/main'
# Conflicts:
#	apps/ui/src/routes/__root.tsx
2026-01-07 01:48:34 +08:00
antdev
cf9289e21a fix background image and create a test in case the background image fails again 2026-01-07 01:21:46 +08:00
webdevcody
fe7bc954ba chore: add OpenSSH client to Dockerfile for enhanced SSH capabilities
- Updated the Dockerfile to include the OpenSSH client, improving the container's ability to handle SSH connections and operations.
2026-01-06 00:36:45 -05:00
Shirone
236989bf6e feat: add skills and subagents configuration support
- Updated .gitignore to include skills directory.
- Introduced agent discovery functionality to scan for AGENT.md files in user and project directories.
- Added new API endpoint for discovering filesystem agents.
- Implemented UI components for managing skills and viewing custom subagents.
- Enhanced settings helpers to retrieve skills configuration and custom subagents.
- Updated agent service to incorporate skills and subagents in task delegation.

These changes enhance the capabilities of the system by allowing users to define and manage skills and custom subagents effectively.
2026-01-06 04:31:57 +01:00
Web Dev Cody
8a6a83bf52 Merge pull request #366 from AutoMaker-Org/cursor-docker-oauth
feat: add Cursor CLI installation attempts documentation and enhance …
2026-01-05 21:53:31 -05:00
webdevcody
84b582ffa7 refactor: streamline Docker container management and enhance utility functions
- Removed redundant Docker image rebuilding logic from `dev.mjs` and `start.mjs`, centralizing it in the new `launchDockerContainers` function within `launcher-utils.mjs`.
- Introduced `sanitizeProjectName` and `shouldRebuildDockerImages` functions to improve project name handling and Docker image management.
- Updated the Docker launch process to provide clearer logging and ensure proper handling of environment variables, enhancing the overall development experience.
2026-01-05 21:50:12 -05:00
webdevcody
bd5176165d refactor: remove duplicate logger initialization in useCliStatus hook
- Eliminated redundant logger declaration within the useCliStatus hook to improve code clarity and prevent potential performance issues.
- This change enhances the maintainability of the code by ensuring the logger is created only once outside the hook.
2026-01-05 21:38:18 -05:00
webdevcody
49f32c4d59 Merge branch 'main' of github.com:AutoMaker-Org/automaker into cursor-docker-oauth 2026-01-05 21:29:20 -05:00
webdevcody
0af5bc86f4 Merge branch 'cursor-docker-oauth' of github.com:AutoMaker-Org/automaker into cursor-docker-oauth 2026-01-05 21:29:01 -05:00
webdevcody
bc5a36c5f4 feat: enhance project name sanitization and improve Docker image naming
- Added a `sanitizeProjectName` function to ensure project names are safe for shell commands and Docker image names by converting them to lowercase and removing non-alphanumeric characters.
- Updated `dev.mjs` and `start.mjs` to utilize the new sanitization function when determining Docker image names, enhancing security and consistency.
- Refactored the Docker entrypoint script to ensure proper permissions for the Cursor CLI config directory, improving setup reliability.
- Clarified documentation regarding the storage location of OAuth tokens for the Cursor CLI on Linux.

These changes improve the robustness of the Docker setup and enhance the overall development workflow.
2026-01-05 21:28:42 -05:00
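A minimal sketch of the sanitization described above, inferred from the commit message (lowercase, strip non-alphanumerics); the real helper in launcher-utils.mjs may allow separators or handle empty names differently.

```ts
export function sanitizeProjectName(name: string): string {
  const sanitized = name.toLowerCase().replace(/[^a-z0-9]/g, "");
  // Docker image names must not be empty, so fall back to a default.
  return sanitized.length > 0 ? sanitized : "automaker";
}

// Example: "My Project (v2)!" -> "myprojectv2"
```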
Web Dev Cody
2934d73db2 Merge pull request #368 from AutoMaker-Org/fix/small-bugs
fix: small bugs
2026-01-05 20:23:42 -05:00
Shirone
a4968f7235 fix: show success toast only during project creation flow
- Updated the useSpecRegeneration hook to conditionally display the success toast message only when the user is in the active project creation flow, preventing unnecessary notifications during regular spec regeneration.
2026-01-06 02:04:08 +01:00
Shirone
b8e0c18c53 fix: theme switch bug
- When a user had set a theme at the project level and then went through the setup wizard again to change the theme, the change was not applied: only the global theme was updated, and the app reverted to showing the current project theme.
2026-01-06 02:00:41 +01:00
Shirone
d0b3e0d9bb refactor: move logger initialization outside of useCliStatus function
- Moved the logger initialization to the top of the file for better readability and to avoid re-initialization on each function call.
- This change enhances the performance and clarity of the code in the useCliStatus hook.
- Fixed an infinite call loop caused by re-renders triggered by the logger.
2026-01-06 01:53:08 +01:00
Kacper
2a0719e00c refactor: move logger initialization outside of useCliStatus hook
- Moved the logger creation outside the hook to prevent infinite re-renders.
- Updated dependencies in the checkStatus function to remove logger from the dependency array.

These changes enhance performance and maintainability of the useCliStatus hook.
2026-01-06 00:58:31 +01:00
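A minimal sketch of the fix described above; `createLogger` from @automaker/utils is referenced elsewhere in this history, but its signature and the hook's API shape here are assumptions.

```ts
import { useCallback } from "react";
import { createLogger } from "@automaker/utils"; // signature assumed for illustration

// Created once at module scope: a new logger per render would change the
// identity of anything depending on it and could retrigger effects endlessly.
const logger = createLogger("useCliStatus");

export function useCliStatus(api: { cliStatus(): Promise<{ installed: boolean }> }) {
  const checkStatus = useCallback(async () => {
    try {
      return await api.cliStatus();
    } catch (error) {
      logger.error("Failed to check CLI status", error);
      return { installed: false };
    }
    // `logger` is stable module state, so it no longer belongs in the deps array.
  }, [api]);

  return { checkStatus };
}
```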
webdevcody
af394183e6 feat: add Cursor CLI installation attempts documentation and enhance Docker setup
- Introduced a new markdown file summarizing various attempts to install the Cursor CLI in Docker, detailing approaches, results, and key learnings.
- Updated Dockerfile to ensure proper installation of Cursor CLI for the non-root user, including necessary PATH adjustments for interactive shells.
- Enhanced entrypoint script to manage OAuth tokens for both Claude and Cursor CLIs, ensuring correct permissions and directory setups.
- Added scripts for extracting OAuth tokens from macOS Keychain and Linux JSON files for seamless integration with Docker.
- Updated docker-compose files to support persistent storage for CLI configurations and authentication tokens.

These changes improve the development workflow and provide clear guidance on CLI installation and authentication processes.
2026-01-05 18:13:14 -05:00
webdevcody
5d675561ba chore: release v0.8.0
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 16:31:11 -05:00
Web Dev Cody
27fb3f2777 Merge pull request #364 from AutoMaker-Org/v0.8.0rc
V0.8.0rc
2026-01-05 16:28:25 -05:00
webdevcody
aca84fe16a chore: update Docker configuration and entrypoint script
- Enhanced .dockerignore to exclude additional build outputs and dependencies.
- Modified dev.mjs and start.mjs to change Docker container startup behavior, removing the --build flag to preserve volumes.
- Updated docker-compose.yml to add a new volume for persisting Claude CLI OAuth session keys.
- Introduced docker-entrypoint.sh to fix permissions on the Claude CLI config directory.
- Adjusted Dockerfile to include the entrypoint script and ensure proper user permissions.

These changes improve the Docker setup and streamline the development workflow.
2026-01-05 10:44:47 -05:00
Shirone
abab7be367 Merge pull request #363 from AutoMaker-Org/fix/app-spec-ui-bug
fix: prevent 'No App Specification Found' during spec generation
2026-01-05 16:04:26 +01:00
Kacper
73d0edb873 refactor: move setIsGenerationRunning(false) outside if block
Address PR review feedback - ensure isGenerationRunning is always
reset to false when generation is not running, even if
api.specRegeneration is not available.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 16:01:41 +01:00
Kacper
84d93c2901 fix: prevent "No App Specification Found" during spec generation
Check generation status before trying to load the spec file.
This prevents 500 errors and confusing UI during spec generation.

Changes:
- useSpecLoading now checks specRegeneration.status() first
- If generation is running, skip the file read and set isGenerationRunning
- SpecView uses isGenerationRunning to show generating UI properly

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 15:57:17 +01:00
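A minimal sketch of the ordering described above, assuming an API client that exposes `specRegeneration.status()` and a spec read call; the shapes are illustrative.

```ts
interface SpecApi {
  specRegeneration?: { status(): Promise<{ running: boolean }> };
  spec: { read(): Promise<string | null> };
}

export async function loadSpec(api: SpecApi): Promise<{
  isGenerationRunning: boolean;
  content: string | null;
}> {
  // Ask whether generation is in flight before touching the spec file, so the
  // UI can show a "generating" state instead of "No App Specification Found".
  const status = await api.specRegeneration?.status().catch(() => null);
  if (status?.running) {
    return { isGenerationRunning: true, content: null };
  }
  const content = await api.spec.read();
  return { isGenerationRunning: false, content };
}
```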
Shirone
d558050dfa Merge pull request #362 from AutoMaker-Org/refactor/adjust-buttons-in-board-header
style: update BoardHeader component for improved layout
2026-01-05 15:29:28 +01:00
Kacper
5991e99853 refactor: extract shared className and add data-testid
Address PR review feedback:
- Extract duplicated className to controlContainerClass constant
- Add data-testid="auto-mode-toggle-container" for testability

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 15:26:19 +01:00
Kacper
9661aa1dad style: update BoardHeader component for improved layout
- Adjusted the spacing and height of the concurrency slider and auto mode toggle containers for better visual consistency.
- Changed class names to enhance the overall design and maintainability of the UI.
2026-01-05 15:22:04 +01:00
Shirone
d4649ec456 Merge pull request #361 from AutoMaker-Org/refactor/improve-vitest-configuration
refactor: use Vitest projects config instead of deprecated workspace
2026-01-05 15:02:12 +01:00
Kacper
fde9eea2d6 style: use explicit config path for server project
Address PR review feedback for consistency - use full path
'apps/server/vitest.config.ts' instead of just 'apps/server'.

Note: libs/types has no tests (type definitions only), so it
doesn't need a vitest.config.ts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 14:59:46 +01:00
Kacper
d1e3251c29 refactor: use glob patterns for vitest projects configuration
Address PR review feedback:
- Use 'libs/*/vitest.config.ts' glob to auto-discover lib projects
- Simplify test:packages script to use --project='!server' exclusion
- New libs with vitest.config.ts will be automatically included

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 14:50:47 +01:00
Kacper
2f51991558 refactor: use Vitest projects config instead of deprecated workspace
- Add root vitest.config.ts with projects array (replaces deprecated workspace)
- Add name property to each project's vitest.config.ts for filtering
- Update package.json test scripts to use vitest projects
- Add vitest to root devDependencies

This addresses the Vitest warning about multiple configs impacting
performance by running all projects in a single Vitest process.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 14:45:33 +01:00
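A minimal sketch of a root vitest.config.ts using the projects array, with the paths taken from the commit messages above; exact options in the repo (and the Vitest version required for `test.projects`) may differ.

```ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    projects: [
      // Glob auto-discovers any lib that ships its own vitest.config.ts.
      "libs/*/vitest.config.ts",
      // Server project referenced by its explicit config path for consistency.
      "apps/server/vitest.config.ts",
    ],
  },
});
```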
Kacper
7963525246 refactor: replace loading messages with LoadingState component
- Updated RootLayoutContent to utilize the LoadingState component for displaying loading messages during various application states, enhancing consistency and maintainability of the UI.
2026-01-05 14:36:10 +01:00
Kacper
feae1d7686 refactor: enhance pre-commit hook for nvm compatibility
- Improved the pre-commit script to better handle loading Node.js versions from .nvmrc for both Unix and Windows environments.
- Added checks to ensure nvm is sourced correctly and to handle potential errors gracefully, enhancing the reliability of the development setup.
2026-01-05 14:36:01 +01:00
Web Dev Cody
bbdfaf6463 Merge pull request #358 from AutoMaker-Org/ideation-fix
refactor: update ideation dashboard and prompt list to use project-sp…
2026-01-04 18:07:53 -05:00
Shirone
1117afc37a refactor: update mass edit dialog and introduce new select components
- Removed advanced options toggle and related state from the mass edit dialog for a cleaner UI.
- Replaced ProfileQuickSelect with ProfileSelect for better profile management.
- Introduced new PlanningModeSelect and PrioritySelect components for streamlined selection of planning modes and priorities.
- Updated imports in shared index to include new select components.
- Enhanced the mass edit dialog to utilize the new components, improving user experience during bulk edits.
2026-01-04 23:24:24 +01:00
Kacper
06c02de1cb feat: add mass edit feature for backlog kanban cards
Add ability to select multiple backlog features and edit their configuration
in bulk. Selection is limited to backlog column features in the current
branch/worktree only.

Changes:
- Add selection mode toggle in board controls
- Add checkbox selection on kanban cards (backlog only)
- Disable drag and drop during selection mode
- Hide action buttons during selection mode
- Add floating selection action bar with Edit/Clear/Select All
- Add mass edit dialog with all configuration options in single scroll view
- Add server endpoint for bulk feature updates
2026-01-04 22:25:19 +01:00
webdevcody
e4d86aa654 refactor: optimize ideation components and store for project-specific job handling
- Updated IdeationDashboard and PromptList components to utilize memoization for improved performance when retrieving generation jobs specific to the current project.
- Removed the getJobsForProject function from the ideation store, streamlining job management by directly filtering jobs in the components.
- Enhanced the addGenerationJob function to ensure consistent job ID generation format.
- Implemented migration logic in the ideation store to clean up legacy jobs without project paths, improving data integrity.
2026-01-04 16:22:25 -05:00
webdevcody
4ac1edf314 refactor: update ideation dashboard and prompt list to use project-specific job retrieval
- Modified IdeationDashboard and PromptList components to fetch generation jobs specific to the current project.
- Updated addGenerationJob function to include projectPath as a parameter for better job management.
- Introduced getJobsForProject function in the ideation store to streamline job filtering by project.
2026-01-04 14:16:39 -05:00
Web Dev Cody
4f3ac27534 Merge pull request #288 from AutoMaker-Org/feat/cursor-cli
feat: Cursor CLI Integration
2026-01-04 13:31:29 -05:00
Kacper
4a41dbb665 style: fix formatting in auto-mode-service.ts 2026-01-04 13:28:37 +01:00
Kacper
f90cd61048 fix: remove MCP permission settings references removed in v0.8.0rc
v0.8.0rc removed getMCPPermissionSettings and related properties.
Removed all references from auto-mode-service.ts to fix build.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 13:18:35 +01:00
Kacper
078f107f66 Merge v0.8.0rc into feat/cursor-cli
Resolved conflicts:
- sdk-options.ts: kept HEAD (MCP & thinking level features)
- auto-mode-service.ts: kept HEAD (MCP features + fallback code)
- agent-output-modal.tsx: used v0.8.0rc (effectiveViewMode + pr-8 spacing)
- feature-suggestions-dialog.tsx: accepted deletion
- electron.ts: used v0.8.0rc (Ideation types)
- package-lock.json: regenerated

Fixed sdk-options.test.ts to expect 'default' permissionMode for read-only operations.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 13:12:45 +01:00
Web Dev Cody
64642916ab Merge pull request #355 from AutoMaker-Org/summaries-and-todos
Summaries and todos
2026-01-04 01:59:49 -05:00
webdevcody
e2206d7a96 feat: add thorough verification process and enhance agent output modal
- Introduced a new markdown file outlining a mandatory 3-pass verification process for code completion, focusing on correctness, edge cases, and maintainability.
- Updated the AgentInfoPanel to display a todo list for non-backlog features, ensuring users can see the agent's current tasks.
- Enhanced the AgentOutputModal to support a summary view, extracting and displaying summary content from raw log output.
- Improved the log parser to extract summaries from various formats, enhancing the overall user experience and information accessibility.
2026-01-04 01:56:45 -05:00
Web Dev Cody
32f859b927 Merge pull request #354 from AutoMaker-Org/ideation
feat: implement ideation feature for brainstorming and idea management
2026-01-04 01:21:44 -05:00
webdevcody
ac92725a6c feat: enhance ideation routes with event handling and new suggestion feature
- Updated the ideation routes to include an EventEmitter for better event management.
- Added a new endpoint to handle adding suggestions to the board, ensuring consistent category mapping.
- Modified existing routes to emit events for idea creation, update, and deletion, improving frontend notifications.
- Refactored the convert and create idea handlers to utilize the new event system.
- Removed static guided prompts data in favor of dynamic fetching from the backend API.
2026-01-04 00:38:01 -05:00
webdevcody
5c95d6d58e fix: update category mapping and improve ID generation format in IdeationService
- Changed the category mapping for 'feature' from 'feature' to 'ui'.
- Updated ID generation format to use hyphens instead of underscores for better readability.
- Enhanced unit tests to reflect the updated category and ensure proper functionality.
2026-01-04 00:22:06 -05:00
claude[bot]
abddfad063 test: add comprehensive unit tests for IdeationService
- Add 28 unit tests covering all major IdeationService functionality
- Test session management (start, get, stop, running state)
- Test idea CRUD operations (create, read, update, delete, archive)
- Test idea to feature conversion with user stories and notes
- Test project analysis and caching
- Test prompt management and filtering
- Test AI-powered suggestion generation
- Mock all external dependencies (fs, platform, utils, providers)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Web Dev Cody <webdevcody@users.noreply.github.com>
2026-01-04 05:16:06 +00:00
webdevcody
3512749e3c feat: refactor development and production launch scripts
- Introduced `dev.mjs` for development mode with hot reloading using Vite.
- Added `start.mjs` for production mode, serving pre-built static files without hot reloading.
- Created a new utility module `launcher-utils.mjs` for shared functions across scripts.
- Updated package.json scripts to reflect new launch commands.
- Removed deprecated `init.mjs` and associated MCP permission settings from the codebase.
- Added `.dockerignore` and updated `.gitignore` for better environment management.
- Enhanced README with updated usage instructions for starting the application.
2026-01-04 00:06:25 -05:00
webdevcody
2c70835769 Merge branch 'main' into ideation 2026-01-03 23:58:56 -05:00
Web Dev Cody
b1f7139bb6 Merge pull request #353 from AutoMaker-Org/fix-memory-leak
Fix memory leak
2026-01-03 23:57:11 -05:00
webdevcody
22aa24ae04 feat: add Docker container launch option and update process handling
- Introduced a new option to launch the application in a Docker container (Isolated Mode) from the main menu.
- Added checks for the ANTHROPIC_API_KEY environment variable to ensure proper API functionality.
- Updated process management to include Docker, allowing for better cleanup and handling of spawned processes.
- Enhanced user prompts and logging for improved clarity during the launch process.
2026-01-03 23:53:44 -05:00
webdevcody
586aabe11f chore: update .gitignore and improve cleanup handling in scripts
- Added .claude/hans/ to .gitignore to prevent tracking of specific directory.
- Updated cleanup calls in dev.mjs and start.mjs to use await for proper asynchronous handling.
- Enhanced error handling during cleanup in case of failures.
- Improved server failure handling in startServerAndWait function to ensure proper termination of failed processes.
2026-01-03 23:36:22 -05:00
webdevcody
afb0937cb3 refactor: update permissionMode to bypassPermissions in SDK options and tests
- Changed permissionMode from 'default' to 'bypassPermissions' in sdk-options and claude-provider unit tests.
- Added allowDangerouslySkipPermissions flag in claude-provider test to enhance permission handling.
2026-01-03 23:26:26 -05:00
webdevcody
d677910f40 refactor: update permission handling and optimize performance measurement
- Changed permissionMode settings in enhance and generate title routes to improve edit acceptance and default behavior.
- Refactored performance measurement cleanup in the App component to only execute in development mode, preventing unnecessary operations in production.
- Simplified the startServerAndWait function signature for better readability.
2026-01-03 23:23:43 -05:00
webdevcody
6d41c7d0bc docs: update README for authentication setup and production launch
- Revised instructions for starting Automaker, changing from `npm run dev` to `npm run start` for production mode.
- Added a setup wizard for authentication on first run, with options for using Claude Code CLI or entering an API key.
- Clarified development mode instructions, emphasizing the use of `npm run dev` for live reload and hot module replacement.
2026-01-03 23:13:53 -05:00
webdevcody
9552670d3d feat: introduce development mode launch script
- Added a new script (dev.mjs) to start the application in development mode with hot reloading using Vite.
- The script includes functionality for installing Playwright browsers, resolving port configurations, and launching either a web or desktop application.
- Removed the old init.mjs script, which was previously responsible for launching the application.
- Updated package.json to reference the new dev.mjs script for the development command.
- Introduced a shared utilities module (launcher-utils.mjs) for common functionalities used in both development and production scripts.
2026-01-03 23:11:18 -05:00
webdevcody
e32a82cca5 refactor: remove MCP permission settings and streamline SDK options for autonomous mode
- Removed MCP permission settings from the application, including related functions and UI components.
- Updated SDK options to always bypass permissions and allow unrestricted tool access in autonomous mode.
- Adjusted related components and services to reflect the removal of MCP permission configurations, ensuring a cleaner and more efficient codebase.
2026-01-03 23:00:20 -05:00
webdevcody
019d6dd7bd fix memory leak 2026-01-03 22:50:42 -05:00
Shirone
c6d94d4bf4 fix: improve abort handling in spawnJSONLProcess
- Added immediate invocation of abort handler if the abort signal is already triggered, ensuring proper cleanup.
- Updated test to use setImmediate for aborting, allowing the generator to start processing before the abort is called, enhancing reliability.
2026-01-04 03:50:17 +01:00
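A minimal sketch of the already-aborted check described above; only the AbortSignal handling is shown, and the subprocess wiring around it is assumed.

```ts
import type { ChildProcess } from "node:child_process";

export function wireAbort(child: ChildProcess, signal?: AbortSignal): void {
  if (!signal) return;

  const onAbort = () => {
    // Ensure the subprocess is cleaned up when the caller aborts.
    child.kill("SIGTERM");
  };

  if (signal.aborted) {
    // The signal may already be aborted before listeners are attached, in
    // which case 'abort' will never fire, so invoke the handler immediately.
    onAbort();
    return;
  }
  signal.addEventListener("abort", onAbort, { once: true });
}
```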
Shirone
ef06c13c1a feat: implement timeout for plan approval and enhance error handling
- Added a 30-minute timeout for user plan approval to prevent indefinite waiting and memory leaks.
- Wrapped resolve/reject functions in the waitForPlanApproval method to ensure timeout is cleared upon resolution.
- Enhanced error handling in the stream processing loop to ensure proper cleanup and logging of errors.
- Improved the handling of task execution and phase completion events for better tracking and user feedback.
2026-01-04 03:45:21 +01:00
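A minimal sketch of the timeout wrapping described above; the 30-minute limit follows the commit message, while the registration callback and surrounding service wiring are illustrative.

```ts
const PLAN_APPROVAL_TIMEOUT_MS = 30 * 60 * 1000;

export function waitForPlanApproval(
  register: (resolve: (approved: boolean) => void, reject: (err: Error) => void) => void
): Promise<boolean> {
  return new Promise<boolean>((resolve, reject) => {
    const timer = setTimeout(() => {
      reject(new Error("Plan approval timed out after 30 minutes"));
    }, PLAN_APPROVAL_TIMEOUT_MS);

    // Wrap both callbacks so the timer is always cleared once the promise
    // settles; otherwise the pending timeout keeps the resolver alive (a leak).
    register(
      (approved) => {
        clearTimeout(timer);
        resolve(approved);
      },
      (err) => {
        clearTimeout(timer);
        reject(err);
      }
    );
  });
}
```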
Kacper
3ed3a90bf6 refactor: rename phase models to model defaults and reorganize components
- Updated imports and references from 'phase-models' to 'model-defaults' across various components.
- Removed obsolete phase models index file to streamline the codebase.
2026-01-03 23:05:34 +01:00
webdevcody
ff281e23d0 feat: implement ideation feature for brainstorming and idea management
- Introduced a new IdeationService to manage brainstorming sessions, including idea creation, analysis, and conversion to features.
- Added RESTful API routes for ideation, including session management, idea CRUD operations, and suggestion generation.
- Created UI components for the ideation dashboard, prompt selection, and category grid to enhance user experience.
- Integrated keyboard shortcuts and navigation for the ideation feature, improving accessibility and workflow.
- Updated state management with Zustand to handle ideation-specific data and actions.
- Added necessary types and paths for ideation functionality, ensuring type safety and clarity in the codebase.
2026-01-03 02:58:43 -05:00
Web Dev Cody
f34fd955ac Merge pull request #342 from yumesha/main
fixed background image not showing in the desktop application (Electron)
2026-01-03 02:05:24 -05:00
antdev
46cb6fa425 fixed 'Buffer' is not defined. 2026-01-03 13:52:57 +08:00
antdev
818d8af998 E2E Test Fix - Ready for Manual Application 2026-01-03 13:47:23 +08:00
Shirone
88aba360e3 fix: improve Cursor CLI implementation with type safety and security fixes
- Add getCliPath() public method to CursorProvider to avoid private field access
- Add path validation to cursor-config routes to prevent traversal attacks
- Add supportsVision field to CursorModelConfig (all false - CLI limitation)
- Consolidate duplicate types in providers/types.ts (re-export from @automaker/types)
- Add MCP servers warning log instead of error (not yet supported by Cursor CLI)
- Fix debug log type safety (replace 'as any' with proper type narrowing)
- Update docs to remove non-existent tier field, add supportsVision field
- Remove outdated TODO comment in sdk-options.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 03:35:33 +01:00
Shirone
ec6d36bda5 chore: format 2026-01-03 03:08:46 +01:00
Shirone
816bf8f6f6 Merge branch 'main' into feat/cursor-cli 2026-01-03 03:05:19 +01:00
Shirone
a6d665c4fa fix: update logger tests to use console.log instead of console.warn
- Modified logger test cases to reflect that the warn method uses console.log in Node.js implementation.
- Updated expectations in feature-loader tests to align with the new logging behavior.
- Ensured consistent logging behavior across tests for improved clarity and accuracy.
2026-01-03 03:03:50 +01:00
Shirone
d13a16111c feat: enhance suggestion generation with model and thinking level overrides
- Updated the generateSuggestions function to accept model and thinking level overrides, allowing for more flexible suggestion generation.
- Modified the API client and UI components to support passing these new parameters, improving user control over the suggestion process.
- Introduced a new phase model for AI Suggestions in settings, enhancing the overall functionality and user experience.
2026-01-03 02:56:08 +01:00
antdev
8d5e7b068c fix failing format check 2026-01-03 09:55:54 +08:00
Shirone
6d4f28575f fix: scrolling issues in phase model selector
- Scrolling was broken when the component was used inside a modal/dialog.
2026-01-03 02:55:34 +01:00
Shirone
7596ff9ec3 fix: enable logger colors by default in Node.js subprocess environments
Previously, colors were only enabled when stdout was a TTY, which caused
colored output to be stripped when the server ran as a subprocess. Now
colors are enabled by default in Node.js and can be disabled with
LOG_COLORS=false if needed.

Also removed the unused isTTY() function.
2026-01-03 02:54:14 +01:00
Shirone
35441c1a9d feat: add AI Suggestions phase model to settings view
- Introduced a new phase model for AI Suggestions, enhancing the functionality of the settings view.
- Updated the phase model handling to utilize DEFAULT_PHASE_MODELS as a fallback, ensuring robust behavior when specific models are not set.
- This addition improves the user experience by providing more options for project analysis and suggestions.
2026-01-03 02:41:39 +01:00
Shirone
abed3b3d75 feat: enhance use-model-override to support default phase models
- Updated the useModelOverride hook to include a fallback to DEFAULT_PHASE_MODELS when phase models are not available, ensuring smoother handling of model overrides.
- This change addresses scenarios where settings may not have been migrated to include new phase models, improving overall robustness and user experience.
2026-01-03 02:40:17 +01:00
Kacper
9071f89ec8 chore: remove obsolete Cursor CLI integration documentation
- Deleted the Cursor CLI integration analysis document, phase prompt, README, and related phase files as they are no longer relevant to the current project structure.
- This cleanup helps streamline the project and remove outdated references, ensuring a more maintainable codebase.
2026-01-02 21:02:40 +01:00
Kacper
3c8ee5b714 chore: update .prettierignore and format vite.config.mts
- Added routeTree.gen.ts to .prettierignore to prevent formatting of generated files.
- Reformatted the external dependencies list in vite.config.mts for improved readability.
2026-01-02 20:56:40 +01:00
Kacper
8e1a9addc1 fix: update logger test expectations to include log level prefixes
- Modified test cases in logger.test.ts to assert that log messages include appropriate log level prefixes (INFO, ERROR, WARN, DEBUG).
- Ensured consistency in log output formatting across various logging methods, enhancing clarity in test results.
2026-01-02 20:54:39 +01:00
Kacper
e72f7d1e1a fix: libs test 2026-01-02 20:50:57 +01:00
Kacper
4a28b70b72 feat: enhance query options with maxThinkingTokens support
- Introduced maxThinkingTokens to the query options for Claude models, allowing for more precise control over the SDK's reasoning capabilities.
- Refactored the enhance handler to utilize the new getThinkingTokenBudget function, improving the integration of thinking levels into the query process.
- Updated the query options structure for clarity and maintainability, ensuring consistent handling of model parameters.

This update enhances the application's ability to adapt reasoning capabilities based on user-defined thinking levels, improving overall performance.
2026-01-02 20:50:40 +01:00
Kacper
3e95a11189 refactor: replace logger with console statements in subprocess management
- Removed the centralized logging system in favor of direct console.log and console.error statements for subprocess management.
- Updated logging messages to include context for subprocess actions, such as spawning commands, handling errors, and process completion.
- This change simplifies the logging approach in subprocess handling, making it easier to track subprocess activities during development.
2026-01-02 20:46:39 +01:00
Shirone
2b942a6cb1 feat: integrate thinking level support across various components
- Enhanced multiple server and UI components to include an optional thinking level parameter, improving the configurability of model interactions.
- Updated request handlers and services to manage and pass the thinking level, ensuring consistent data handling across the application.
- Refactored UI components to display and manage the selected model along with its thinking level, enhancing user experience and clarity.
- Adjusted the Electron API and HTTP client to support the new thinking level parameter in requests, ensuring seamless integration.

This update significantly improves the application's ability to adapt reasoning capabilities based on user-defined thinking levels, enhancing overall performance and user satisfaction.
2026-01-02 17:52:12 +01:00
Shirone
69f3ba9724 feat: standardize logging across UI components
- Replaced console.log and console.error statements with logger methods from @automaker/utils in various UI components, ensuring consistent log formatting and improved readability.
- Enhanced error handling by utilizing logger methods to provide clearer context for issues encountered during operations.
- Updated multiple views and hooks to integrate the new logging system, improving maintainability and debugging capabilities.

This update significantly enhances the observability of UI components, facilitating easier troubleshooting and monitoring.
2026-01-02 17:33:15 +01:00
Shirone
96a999817f feat: implement structured logging across server components
- Integrated a centralized logging system using createLogger from @automaker/utils, replacing console.log and console.error statements with logger methods for consistent log formatting and improved readability.
- Updated various modules, including auth, events, and services, to utilize the new logging system, enhancing error tracking and operational visibility.
- Refactored logging messages to provide clearer context and information, ensuring better maintainability and debugging capabilities.

This update significantly enhances the observability of the server components, facilitating easier troubleshooting and monitoring.
2026-01-02 15:40:15 +01:00
Shirone
8c04e0028f feat: integrate thinking level support across agent and UI components
- Enhanced the agent service and request handling to include an optional thinking level parameter, improving the configurability of model interactions.
- Updated the UI components to manage and display the selected model along with its thinking level, ensuring a cohesive user experience.
- Refactored the model selector and input controls to accommodate the new model selection structure, enhancing usability and clarity.
- Adjusted the Electron API and HTTP client to support the new thinking level parameter in requests, ensuring consistent data handling across the application.

This update significantly improves the agent's ability to adapt its reasoning capabilities based on user-defined thinking levels, enhancing overall performance and user satisfaction.
2026-01-02 15:22:06 +01:00
Shirone
81d300391d feat: enhance SDK options with thinking level support
- Introduced a new function, buildThinkingOptions, to handle the conversion of ThinkingLevel to maxThinkingTokens for the Claude SDK.
- Updated existing SDK option creation functions to incorporate thinking options, ensuring that maxThinkingTokens are included based on the specified thinking level.
- Enhanced the settings service to support migration of phase models to include thinking levels, improving compatibility with new configurations.
- Added comprehensive tests for thinking level integration and migration logic, ensuring robust functionality across the application.

This update significantly improves the SDK's configurability and performance by allowing for more nuanced control over reasoning capabilities.
2026-01-02 14:55:52 +01:00
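A minimal sketch of the level-to-budget mapping behind `buildThinkingOptions`; the ThinkingLevel values and token budgets here are assumptions for illustration, not the repo's numbers.

```ts
export type ThinkingLevel = "off" | "low" | "medium" | "high";

// Hypothetical budgets for illustration only.
const THINKING_TOKEN_BUDGETS: Record<ThinkingLevel, number | undefined> = {
  off: undefined,
  low: 4_000,
  medium: 10_000,
  high: 32_000,
};

export function buildThinkingOptions(level?: ThinkingLevel): { maxThinkingTokens?: number } {
  const budget = level ? THINKING_TOKEN_BUDGETS[level] : undefined;
  // Omit the field entirely when no budget applies so SDK defaults are used.
  return budget ? { maxThinkingTokens: budget } : {};
}
```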
antdev
d417666fe1 fix background image not showing 2026-01-02 15:33:00 +08:00
webdevcody
2bbc8113c0 chore: update lockfile linting process
- Replaced the inline linting command for package-lock.json with a dedicated script (lint-lockfile.mjs) to check for git+ssh:// URLs, ensuring compatibility with CI/CD environments.
- The new script provides clear error messages and instructions if such URLs are found, enhancing the development workflow.
2026-01-02 00:29:04 -05:00
webdevcody
7e03af2dc6 chore: release v0.7.3
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 00:00:41 -05:00
Shirone
914734cff6 feat(phase-model-selector): implement grouped model selection and enhanced UI
- Added support for grouped models in the PhaseModelSelector, allowing users to select from multiple variants within a single group.
- Introduced a new popover UI for displaying grouped model variants, improving user interaction and selection clarity.
- Implemented logic to filter and display enabled cursor models, including standalone and grouped options.
- Enhanced state management for expanded groups and variant selection, ensuring a smoother user experience.

This update significantly improves the model selection process, making it more intuitive and organized.
2026-01-02 02:37:20 +01:00
Shirone
e1bdb4c7df Merge remote-tracking branch 'origin/main' into feat/cursor-cli 2026-01-02 01:50:16 +01:00
Kacper
ad947691df feat: enhance TaskProgressPanel and AgentOutputModal components
- Added defaultExpanded prop to TaskProgressPanel for customizable initial state.
- Updated styling in TaskProgressPanel for improved layout and consistency.
- Modified AgentOutputModal to utilize the new defaultExpanded prop and adjusted class names for better responsiveness and appearance.
- Enhanced overflow handling in AgentOutputModal for improved user experience.
2026-01-02 00:23:25 +01:00
Web Dev Cody
ab9ef0d560 Merge pull request #340 from AutoMaker-Org/fix-web-mode-auth
feat: implement authentication state management and routing logic
2026-01-01 17:13:20 -05:00
webdevcody
844be657c8 feat: add skipSandboxWarning to settings and sync function
- Introduced skipSandboxWarning property in GlobalSettings interface to manage user preference for sandbox risk warnings.
- Updated syncSettingsToServer function to include skipSandboxWarning in the settings synchronization process.
- Set default value for skipSandboxWarning to false in DEFAULT_GLOBAL_SETTINGS.
2026-01-01 17:08:15 -05:00
webdevcody
90c89ef338 Merge branch 'fix-web-mode-auth' of github.com:AutoMaker-Org/automaker into fix-web-mode-auth 2026-01-01 16:49:41 -05:00
webdevcody
fb46c0c9ea feat: enhance sandbox risk dialog and settings management
- Updated the SandboxRiskDialog to include a checkbox for users to opt-out of future warnings, passing the state to the onConfirm callback.
- Modified SettingsView to manage the skipSandboxWarning state, allowing users to reset the warning preference.
- Enhanced DangerZoneSection to display a message when the sandbox warning is disabled and provide an option to reset this setting.
- Updated RootLayoutContent to respect the user's choice regarding the sandbox warning, auto-confirming if the user opts to skip it.
- Added skipSandboxWarning state management to the app store for persistent user preferences.
2026-01-01 16:49:35 -05:00
Kacper
81bd57cf6a feat: add runNpmAndWait function for improved npm command handling
- Introduced a new function, runNpmAndWait, to execute npm commands and wait for their completion, enhancing error handling.
- Updated the main function to build shared packages before starting the backend server, ensuring necessary dependencies are ready.
- Adjusted server and web process commands to use a consistent naming convention.
2026-01-01 22:39:12 +01:00
Kacper
83e59d6a4d feat(phase-model-selector): enhance model selection with favorites and popover UI
- Introduced a popover for model selection, allowing users to choose from Claude and Cursor models.
- Added functionality to toggle favorite models, enhancing user experience by allowing quick access to preferred options.
- Updated the UI to display selected model details and improved layout for better usability.
- Refactored model grouping and rendering logic for clarity and maintainability.

This update improves the overall interaction with the phase model selector, making it more intuitive and user-friendly.
2026-01-01 22:31:07 +01:00
webdevcody
59d47928a7 feat: implement authentication state management and routing logic
- Added a new auth store using Zustand to manage authentication state, including `authChecked` and `isAuthenticated`.
- Updated `LoginView` to set authentication state upon successful login and navigate based on setup completion.
- Enhanced `RootLayoutContent` to enforce routing rules based on authentication status, redirecting users to login or setup as necessary.
- Improved error handling and loading states during authentication checks.
2026-01-01 16:25:31 -05:00
Kacper
cbe951dd8f fix(suggestions): extract result text from Cursor provider
- Add handler for type=result messages in Cursor stream processing
- Cursor provider sends final accumulated text in msg.result
- Suggestions was only handling assistant messages for Cursor
- Add detailed logging for result extraction (like backlog plan)
- Matches pattern used by github validation and backlog plan

This fixes potential parsing issues when using Cursor models
for feature suggestions, ensuring the complete response text
is captured before JSON parsing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 20:03:15 +01:00
Kacper
63b9f52d6b refactor(cursor-models): remove tier property from Cursor model configurations
- Removed the 'tier' property from Cursor model configurations and related UI components.
- Updated relevant files to reflect the removal of tier-related logic and display elements.
- This change simplifies the model structure and UI, focusing on essential attributes.
2026-01-01 20:00:40 +01:00
Kacper
3b3e61da8d fix(backlog-plan): add Cursor-specific prompt with no-file-write instructions
- Import isCursorModel to detect Cursor models
- For Cursor: embed systemPrompt in userPrompt with explicit instructions
- Add "DO NOT write any files" directive for Cursor models
- Prevents Cursor from writing plan to files instead of returning JSON
- Matches pattern used by github validation (validate-issue.ts)

Cursor doesn't support systemPrompt separation like Claude SDK,
so we need to combine prompts and add explicit instructions to
prevent it from using Write/Edit tools and creating files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 19:15:37 +01:00
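A minimal sketch of the prompt folding described above; `isCursorModel` is taken as a parameter to stay agnostic about its import path, and the directive wording is illustrative.

```ts
export function buildBacklogPlanPrompt(opts: {
  modelId: string;
  systemPrompt: string;
  userPrompt: string;
  isCursorModel: (id: string) => boolean;
}): { systemPrompt?: string; userPrompt: string } {
  if (!opts.isCursorModel(opts.modelId)) {
    // The Claude SDK supports a separate system prompt, so pass both through.
    return { systemPrompt: opts.systemPrompt, userPrompt: opts.userPrompt };
  }
  // Cursor has no separate system prompt, so fold it into the user prompt and
  // spell out that the answer must be returned as JSON, not written to files.
  const combined = [
    opts.systemPrompt,
    "DO NOT write any files. Return the plan as JSON in your reply only.",
    opts.userPrompt,
  ].join("\n\n");
  return { userPrompt: combined };
}
```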
Kacper
0e22098652 feat(backlog-plan): add detailed logging for Cursor result extraction
- Change debug logs to info/warn so they're always visible
- Log when result message is received from Cursor
- Log lengths of both msg.result and accumulated responseText
- Log which source is being used (result vs accumulated)
- Log empty response error for better diagnostics
- Add response preview logging on parse failure

This will help diagnose why Cursor parsing is failing by showing:
1. Whether result messages are being received
2. What content lengths we're working with
3. Whether response text is empty or has content
4. What the actual response looks like when parsing fails

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 19:07:13 +01:00
Kacper
cf9a1f9077 fix(backlog-plan): use extractJsonWithArray and improve logging
- Switch from extractJson to extractJsonWithArray for 'changes' field
- Validates that 'changes' is an array, not just that it exists
- Add debug logging for response length and preview on parse failure
- Add debug logging when receiving result from Cursor provider
- Matches pattern used by suggestions feature
- Helps diagnose parsing issues with better error context

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:41:25 +01:00
Kacper
9b1174408b fix(backlog-plan): extract result text from Cursor provider
- Add handler for type=result messages in stream processing
- Cursor provider sends final accumulated text in msg.result
- Backlog plan was only handling assistant messages
- Now matches pattern used by github validation and suggestions
- Fixes "cursor cli parsing failed" error

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:36:43 +01:00
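A minimal sketch of the result-message handling that the Cursor fixes above share; the message shapes are inferred from the commit messages and simplified for illustration.

```ts
type CursorStreamMessage =
  | { type: "assistant"; text: string }
  | { type: "result"; result: string };

export function collectCursorResponse(messages: CursorStreamMessage[]): string {
  let accumulated = "";
  let finalResult: string | undefined;

  for (const msg of messages) {
    if (msg.type === "assistant") {
      accumulated += msg.text;
    } else if (msg.type === "result") {
      // Cursor sends the final accumulated text in msg.result; prefer it over
      // whatever was pieced together from individual assistant messages.
      finalResult = msg.result;
    }
  }
  return finalResult ?? accumulated;
}
```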
Kacper
207fd26681 feat(backlog-plan): add model override trigger to footer
- Add ModelOverrideTrigger to backlog plan dialog
- Position trigger in DialogFooter on left side (mr-auto)
- Display before Cancel button for better UX
- Use variant="button" to show model name
- Connect to phaseModels.backlogPlanningModel default
- Pass model override to server generate endpoint

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:32:40 +01:00
Kacper
aa318099dc feat(ui): add model override trigger to backlog plan dialog
Added ModelOverrideTrigger component to the "Plan Backlog Changes" dialog,
allowing users to override the global backlog planning model on a per-request
basis, consistent with other dialogs in the application.

Changes:
- Added model override state management to backlog-plan-dialog
- Integrated ModelOverrideTrigger component in dialog header (input mode only)
- Pass model override (or global default) to backlogPlan.generate API call
- UI shows override indicator when model is overridden from global default

The feature uses the existing backlogPlanningModel phase setting as the default
and allows temporary overrides without changing global settings.

Server already supports optional model parameter in the generate endpoint,
so no backend changes were required.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:20:45 +01:00
Kacper
7dec5d9d74 fix(sdk-options): normalize paths for cross-platform cloud storage detection
Fixed cloud storage path detection to work correctly on Windows by normalizing
path separators to forward slashes and removing Windows drive letters before
pattern matching.

Issue:
The isCloudStoragePath() function was failing on Windows because:
1. path.resolve() converts Unix paths to Windows paths with backslashes
2. Windows adds drive letters (e.g., "P:\Users\test" instead of "/Users/test")
3. Pattern checks for "/Library/CloudStorage/" didn't match "\Library\CloudStorage\"
4. Home-anchored path comparisons failed due to drive letter mismatches

Solution:
- Normalize all path separators to forward slashes for consistent pattern matching
- Remove Windows drive letters (e.g., "C:" or "P:") from normalized paths
- This ensures Unix-style test paths work the same on all platforms

All tests now pass (967 passed, 27 skipped):
- Cloud storage path detection tests (macOS patterns)
- Home-anchored cloud folder tests (Dropbox, Google Drive, OneDrive)
- Sandbox compatibility tests
- Cross-platform path handling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:11:40 +01:00
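A minimal sketch of the normalization described above; the patterns and helper names are illustrative, and the real isCloudStoragePath() covers more providers.

```ts
import path from "node:path";

function normalizeForMatching(p: string): string {
  // Convert separators to forward slashes and strip a Windows drive letter
  // ("C:", "P:") so Unix-style patterns match the same way on every platform.
  return path.resolve(p).replace(/\\/g, "/").replace(/^[a-zA-Z]:/, "");
}

export function isCloudStoragePath(p: string): boolean {
  const normalized = normalizeForMatching(p);
  if (normalized.includes("/Library/CloudStorage/")) return true; // macOS
  // Home-anchored folders such as Dropbox / Google Drive / OneDrive.
  return /\/(Dropbox|Google Drive|OneDrive)(\/|$)/.test(normalized);
}
```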
Kacper
17dae1571b fix(tests): update terminal-service tests to work cross-platform
Updated terminal-service.test.ts to use path.resolve() in test expectations
so they work correctly on both Unix and Windows platforms.

The merge from main removed the skipIf conditions for Windows, expecting these
tests to work cross-platform. On Windows, path.resolve('/test/dir') converts
Unix-style paths to Windows paths (e.g., 'P:\test\dir'), so test expectations
needed to use path.resolve() as well to match the actual behavior.

Fixed tests:
- should create a new terminal session
- should fix double slashes in path
- should return all active sessions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:07:00 +01:00
Kacper
f56b873571 Merge main into feat/cursor-cli-integration
Carefully merged latest changes from main branch into the Cursor CLI integration
branch. This merge brings in important improvements and fixes while preserving
all Cursor-related functionality.

Key changes from main:
- Sandbox mode security improvements and cloud storage compatibility
- Version-based settings migrations (v2 schema)
- Port configuration centralization
- System paths utilities for CLI detection
- Enhanced error handling in HttpApiClient
- Windows MCP process cleanup fixes
- New validation and build commands
- GitHub issue templates and release process improvements

Resolved conflicts in:
- apps/server/src/routes/context/routes/describe-image.ts
  (Combined Cursor provider routing with secure-fs imports)
- apps/server/src/services/auto-mode-service.ts
  (Merged failure tracking with raw output logging)
- apps/server/tests/unit/services/terminal-service.test.ts
  (Updated to async tests with systemPathExists mocking)
- libs/platform/src/index.ts
  (Combined WSL utilities with system-paths exports)
- libs/types/src/settings.ts
  (Merged DEFAULT_PHASE_MODELS with SETTINGS_VERSION constants)

All Cursor CLI integration features remain intact including:
- CursorProvider and CliProvider base class
- Phase-based model configuration
- Provider registry and factory patterns
- WSL support for Windows
- Model override UI components
- Cursor-specific settings and configurations

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-01 18:03:48 +01:00
Web Dev Cody
bd432b1da3 Merge pull request #304 from firstfloris/fix/sandbox-cloud-storage-compatibility
fix: auto-disable sandbox mode for cloud storage paths
2026-01-01 02:48:38 -05:00
webdevcody
b51aed849c fix: clarify sandbox mode behavior in sdk-options
- Updated the checkSandboxCompatibility function to explicitly handle the case when enableSandboxMode is set to false, ensuring clearer logic for sandbox mode activation.
- Adjusted unit tests to reflect the new behavior, confirming that sandbox mode defaults to enabled when not specified and correctly disables for cloud storage paths.
- Enhanced test descriptions for better clarity on expected outcomes in various scenarios.
2026-01-01 02:39:38 -05:00
Web Dev Cody
90e62b8add Merge pull request #337 from AutoMaker-Org/addressing-pr-issues
feat: improve error handling in HttpApiClient
2026-01-01 02:31:59 -05:00
webdevcody
67c6c9a9e7 feat: enhance cloud storage path detection in sdk-options
- Introduced macOS-specific cloud storage patterns and home-anchored folder detection to improve accuracy in identifying cloud storage paths.
- Updated the isCloudStoragePath function to utilize these new patterns, ensuring better handling of cloud storage locations.
- Added comprehensive unit tests to validate detection logic for various cloud storage scenarios, including false positive prevention.
2026-01-01 02:31:02 -05:00
webdevcody
2d66e38fa7 Merge branch 'main' into fix/sandbox-cloud-storage-compatibility 2026-01-01 02:23:10 -05:00
webdevcody
50aac1c218 feat: improve error handling in HttpApiClient
- Added error handling for HTTP responses in the HttpApiClient class.
- Enhanced error messages to include status text and parsed error data, improving debugging and user feedback.
2026-01-01 02:17:12 -05:00
Web Dev Cody
8c8a4875ca Merge pull request #329 from andydataguy/fix/windows-mcp-orphaned-processes
fix(windows): properly terminate MCP server process trees
2026-01-01 02:12:26 -05:00
webdevcody
eec36268fe Merge branch 'main' into fix/windows-mcp-orphaned-processes 2026-01-01 02:09:54 -05:00
WebDevCody
f6efbd1b26 docs: update release process in documentation
- Added steps for committing version bumps and creating git tags in the release process.
- Clarified the verification steps to include checking the visibility of tags on the remote repository.
2026-01-01 01:40:25 -05:00
WebDevCody
019793e047 chore: release v0.7.2 2026-01-01 01:40:04 -05:00
Web Dev Cody
a8a3711246 Merge pull request #336 from AutoMaker-Org/fix-things
refactor: use environment variables for git configuration in test rep…
2026-01-01 01:23:41 -05:00
WebDevCody
b867ca1407 refactor: update window close behavior for macOS and other platforms
- Modified the application to keep the app and servers running when all windows are closed on macOS, aligning with standard macOS behavior.
- On other platforms, ensured that the server processes are stopped and the app quits when all windows are closed, preventing potential port conflicts.
2026-01-01 01:20:34 -05:00
WebDevCody
75143c0792 refactor: clean up whitespace and improve prompt formatting in port management
- Removed unnecessary whitespace in the init.mjs file for better readability.
- Enhanced the formatting of user prompts to improve clarity during port conflict resolution.
2026-01-01 00:46:14 -05:00
WebDevCody
f32f3e82b2 feat: enhance port management and server initialization process
- Added a new function to check if a port is in use without terminating processes, improving user experience during server startup.
- Updated the health check function to accept a dynamic port parameter, allowing for flexible server configurations.
- Implemented user prompts for handling port conflicts, enabling users to kill processes, choose different ports, or cancel the operation.
- Enhanced CORS configuration to support localhost and IPv6 addresses, ensuring compatibility across different development environments.
- Refactored the main function to utilize dynamic port assignments for both the web and server applications, improving overall flexibility.
2026-01-01 00:42:42 -05:00
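A non-destructive port check like the one described above could look roughly like the sketch below, using Node's `net` module; the function name `checkPortInUse` is hypothetical.

```typescript
import * as net from "node:net";

// Resolve to true if something is already listening on the port,
// without terminating any process.
function checkPortInUse(port: number, host = "127.0.0.1"): Promise<boolean> {
  return new Promise((resolve) => {
    const server = net.createServer();
    server.once("error", (err: NodeJS.ErrnoException) => {
      // EADDRINUSE means another process holds the port.
      resolve(err.code === "EADDRINUSE");
    });
    server.once("listening", () => {
      // Port was free; release it immediately.
      server.close(() => resolve(false));
    });
    server.listen(port, host);
  });
}

// Usage: decide whether to prompt the user before starting the server.
checkPortInUse(3007).then((inUse) => {
  if (inUse) console.log("Port 3007 is busy - prompt to kill, pick another port, or cancel.");
});
```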
WebDevCody
abe272ef4d fix: remove TypeScript type annotations from bumpVersion function
- Updated the bumpVersion function to use plain JavaScript by removing TypeScript type annotations, improving compatibility with non-TypeScript environments.
- Cleaned up whitespace in the bump-version.mjs file for better readability.
2025-12-31 23:33:51 -05:00
WebDevCody
6d4ab9cc13 feat: implement version-based migrations for global settings
- Added versioning to global settings, enabling automatic migrations for breaking changes.
- Updated default global settings to reflect the new versioning schema.
- Implemented logic to disable sandbox mode for existing users during migration from version 1 to 2.
- Enhanced error handling for saving migrated settings, ensuring data integrity during updates.
2025-12-31 23:30:44 -05:00
WebDevCody
98381441b9 feat: add GitHub issue fix command and release command
- Introduced a new command for fetching and validating GitHub issues, allowing users to address issues directly from the command line.
- Added a release command to bump the version of the application and build the Electron app, ensuring version consistency across UI and server packages.
- Updated package.json files for both UI and server to version 0.7.1, reflecting the latest changes.
- Implemented version utility in the server to read the version from package.json, enhancing version management across the application.
2025-12-31 23:24:01 -05:00
WebDevCody
eae60ab6b9 feat: update README logo to SVG format
- Replaced the existing PNG logo with a new SVG version for improved scalability and quality.
- Added the SVG logo file to the project, enhancing visual consistency across different display resolutions.
2025-12-31 22:06:54 -05:00
WebDevCody
1d7b64cea8 refactor: use environment variables for git configuration in test repositories
- Updated test repository creation functions to utilize environment variables for git author and committer information, preventing modifications to the user's global git configuration.
- This change enhances test isolation and ensures consistent behavior across different environments.
2025-12-31 22:02:45 -05:00
Test User
6337e266c5 drag top bar 2025-12-31 21:58:22 -05:00
Web Dev Cody
da38adcba6 Merge pull request #332 from AutoMaker-Org/centeralize-fs-access
feat: implement secure file system access and path validation
2025-12-31 21:45:19 -05:00
Test User
af493fb73e feat: simulate containerized environment for testing
- Added an environment variable to simulate a containerized environment, allowing the application to skip sandbox confirmation dialogs during testing.
- This change aims to streamline the testing process by reducing unnecessary user interactions while ensuring the application behaves as expected in a containerized setup.
2025-12-31 21:21:35 -05:00
Test User
79bf1c9bec feat: add centralized build validation command and refactor port configuration
- Introduced a new command for validating project builds, providing detailed instructions for running builds and intelligently fixing failures based on recent changes.
- Refactored port configuration by centralizing it in the @automaker/types package for improved maintainability and backward compatibility.
- Updated imports in various modules to reflect the new centralized port configuration, ensuring consistent usage across the application.
2025-12-31 21:07:26 -05:00
Test User
b9a6e29ee8 feat: add sandbox environment checks and user confirmation dialogs
- Introduced a new endpoint to check if the application is running in a containerized environment, allowing the UI to display appropriate risk warnings.
- Added a confirmation dialog for users when running outside a sandbox, requiring acknowledgment of potential risks before proceeding.
- Implemented a rejection screen for users who deny sandbox risk confirmation, providing options to restart in a container or reload the application.
- Updated the main application logic to handle sandbox status checks and user responses effectively, enhancing security and user experience.
2025-12-31 21:00:23 -05:00
Test User
2828431cca feat: add test validation command and improve environment variable handling
- Introduced a new command for validating tests, providing detailed instructions for running tests and fixing failures based on code changes.
- Updated the environment variable handling in the Claude provider to only allow explicitly defined variables, enhancing security and preventing leakage of sensitive information.
- Improved feature loading to handle errors more gracefully and load features concurrently, optimizing performance.
- Centralized port configuration for the Automaker application to prevent accidental termination of critical services.
2025-12-31 20:36:20 -05:00
Web Dev Cody
d3f46f565b Merge pull request #330 from AutoMaker-Org/chore/cleanup-unused-files
chore: remove unused files from codebase and address security audit findings
2025-12-31 20:02:23 -05:00
Test User
3f4f2199eb feat: initialize API key on module import for improved async handling
- Start API key initialization immediately upon importing the HTTP API client module to ensure the init promise is created early.
- Log errors during API key initialization to aid in debugging.

Additionally, added a version field to the setup store for proper state hydration, aligning with the app-store pattern.
2025-12-31 20:00:54 -05:00
Test User
38f0b16530 Merge remote-tracking branch 'origin/main' into centeralize-fs-access 2025-12-31 19:57:17 -05:00
Web Dev Cody
bd22323149 Merge pull request #335 from RayFernando1337/main
fix: resolve auth race condition causing 401 errors on Electron startup
2025-12-31 19:56:20 -05:00
RayFernando
f6ce03d59a fix: resolve auth race condition causing 401 errors on Electron startup
API requests were being made before initApiKey() completed, causing
401 Unauthorized errors on app startup in Electron mode.

Changes:
- Add waitForApiKeyInit() to track and await API key initialization
- Make HTTP methods (get/post/put/delete) wait for auth before requests
- Defer WebSocket connection until API key is ready
- Add explicit auth wait in useSettingsMigration hook

Fixes race condition introduced in PR #321
2025-12-31 16:14:09 -08:00
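The "wait for auth before requests" pattern this fix describes can be sketched as follows. The key source (an environment variable) and the header name are placeholder assumptions; how the real client obtains the key is not shown here.

```typescript
// Minimal sketch of tracking API key initialization and awaiting it per request.
let apiKey: string | null = null;

async function initApiKey(): Promise<void> {
  apiKey = process.env.AUTOMAKER_API_KEY ?? null; // placeholder source
}

// Kick off initialization once, at module import time, and keep the promise.
const initPromise: Promise<void> = initApiKey().catch((err) => {
  console.error("API key initialization failed", err);
});

export function waitForApiKeyInit(): Promise<void> {
  return initPromise;
}

export async function get(url: string): Promise<Response> {
  await waitForApiKeyInit(); // no request leaves before auth is ready
  return fetch(url, { headers: apiKey ? { "x-api-key": apiKey } : {} });
}
```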
Test User
63816043cf feat: enhance shell detection logic and improve cross-platform support
- Updated the TerminalService to utilize getShellPaths() for better shell detection across platforms.
- Improved logic for detecting user-configured shells in WSL and added fallbacks for various platforms.
- Enhanced unit tests to mock shell paths for comprehensive cross-platform testing, ensuring accurate shell detection behavior.

These changes aim to streamline shell detection and improve the user experience across different operating systems.
2025-12-31 19:06:13 -05:00
Test User
eafe474dbc fix: update node-gyp repository URL to use HTTPS
Changed the resolved URL for the @electron/node-gyp dependency in package-lock.json from SSH to HTTPS for improved accessibility and compatibility across different environments.
2025-12-31 18:53:47 -05:00
Test User
59bbbd43c5 feat: add Node.js version management and improve error handling
- Introduced a .nvmrc file to specify the Node.js version (22) for the project, ensuring consistent development environments.
- Enhanced error handling in the startServer function to provide clearer messages when the Node.js executable cannot be found, improving debugging experience.
- Updated package.json files across various modules to enforce Node.js version compatibility and ensure consistent dependency versions.

These changes aim to streamline development processes and enhance the application's reliability by enforcing version control and improving error reporting.
2025-12-31 18:42:33 -05:00
Test User
2b89b0606c feat: implement secure file system access and path validation
- Introduced a restricted file system wrapper to ensure all file operations are confined to the script's directory, enhancing security.
- Updated various modules to utilize the new secure file system methods, replacing direct fs calls with validated operations.
- Enhanced path validation in the server routes and context loaders to prevent unauthorized access to the file system.
- Adjusted environment variable handling to use centralized methods for reading and writing API keys, ensuring consistent security practices.

This change improves the overall security posture of the application by enforcing strict file access controls and validating paths before any operations are performed.
2025-12-31 18:03:01 -05:00
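A restricted file-system wrapper of the kind described above can be sketched as below; the class and method names are illustrative, not the project's actual secure-fs API.

```typescript
import * as fs from "node:fs/promises";
import * as path from "node:path";

// Every path must resolve inside a fixed root before any operation runs.
class SecureFs {
  constructor(private readonly root: string) {}

  private resolveInsideRoot(p: string): string {
    const resolved = path.resolve(this.root, p);
    const relative = path.relative(this.root, resolved);
    // A path that escapes the root starts with ".." (or lands on another drive).
    if (relative.startsWith("..") || path.isAbsolute(relative)) {
      throw new Error(`Path escapes allowed root: ${p}`);
    }
    return resolved;
  }

  readFile(p: string): Promise<string> {
    return fs.readFile(this.resolveInsideRoot(p), "utf8");
  }

  writeFile(p: string, data: string): Promise<void> {
    return fs.writeFile(this.resolveInsideRoot(p), data, "utf8");
  }
}

// Usage: confine all file access to one directory.
export const secureFs = new SecureFs(process.cwd());
```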
Kacper
f496bb825d feat(agent-view): refactor to folder pattern and add Cursor model support
- Refactor agent-view.tsx from 1028 lines to ~215 lines
- Create agent-view/ folder with components/, hooks/, input-area/, shared/
- Extract hooks: useAgentScroll, useFileAttachments, useAgentShortcuts, useAgentSession
- Extract components: AgentHeader, ChatArea, MessageList, MessageBubble, ThinkingIndicator
- Extract input-area: AgentInputArea, FilePreview, QueueDisplay, InputControls
- Add AgentModelSelector with Claude and Cursor CLI model support
- Update /models/available to use ProviderFactory.getAllAvailableModels()
- Update /models/providers to include Cursor CLI status

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 16:58:21 +01:00
Kacper
07327e48b4 chore: remove unused pipeline feature documentation 2025-12-31 10:41:20 +01:00
Anand (Andy) Houston
e818922b0d fix(windows): properly terminate MCP server process trees
On Windows, MCP server processes spawned via 'cmd /c npx' weren't being
properly terminated after testing, causing orphaned processes that would
spam logs with "FastMCP warning: server is not responding to ping".

Root cause: client.close() kills only the parent cmd.exe, orphaning child
node.exe processes. taskkill /t needs the parent PID to traverse the tree.

Fix: Run taskkill BEFORE client.close() so the parent PID still exists
when we kill the process tree.

- Add execSync import for taskkill execution
- Add IS_WINDOWS constant for platform check
- Create cleanupConnection() method with proper termination order
- Add comprehensive documentation in docs/

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 16:04:23 +08:00
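The termination order this fix describes (taskkill before `client.close()`) can be sketched as follows; the client shape and the `cleanupConnection()` name follow the commit, while the rest is assumed.

```typescript
import { execSync } from "node:child_process";

const IS_WINDOWS = process.platform === "win32";

// On Windows, kill the process tree while the parent PID still exists,
// then close the client so no child node.exe processes are orphaned.
async function cleanupConnection(
  client: { close(): Promise<void> },
  serverPid?: number
): Promise<void> {
  if (IS_WINDOWS && serverPid) {
    try {
      // /t walks the process tree from the parent PID, /f forces termination.
      execSync(`taskkill /pid ${serverPid} /t /f`, { stdio: "ignore" });
    } catch (err) {
      // Permission errors or an already-exited PID should not block cleanup.
      console.error("taskkill failed", err);
    }
  }
  await client.close();
}
```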
Shirone
04aac7ec07 chore: update package-lock.json to add peer dependencies and update package versions 2025-12-31 03:34:41 +01:00
Kacper
944e2f5ffe chore: remove unused files from codebase 2025-12-31 03:22:25 +01:00
Kacper
9653e2b970 refactor(settings): remove AI Enhancement section and related components
- Deleted AIEnhancementSection and its associated files from the settings view.
- Updated SettingsView to remove references to AI enhancement functionality.
- Cleaned up navigation and feature defaults sections by removing unused validation model references.

This refactor streamlines the settings view by eliminating the AI enhancement feature, which is no longer needed.
2025-12-31 02:56:06 +01:00
Kacper
5c400b7eff fix(server): Fix unit tests and increase coverage
- Skip platform-specific tests on Windows (CI runs on Linux)
- Add tests for json-extractor.ts (96% coverage)
- Add tests for cursor-config-manager.ts (100% coverage)
- Add tests for cursor-config-service.ts (98.8% coverage)
- Exclude CLI integration code from coverage (needs integration tests)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 02:14:11 +01:00
Kacper
3bc4b7f1f3 fix: update qs to 6.14.1 to fix high severity DoS vulnerability
Fixes GHSA-6rw7-vpxm-498p - qs's arrayLimit bypass in bracket notation
allows DoS via memory exhaustion.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 01:40:15 +01:00
Kacper
d539f7e3b7 Merge origin/main into feat/cursor-cli
Merges latest main branch changes including:
- MCP server support and configuration
- Pipeline configuration system
- Prompt customization settings
- GitHub issue comments in validation
- Auth middleware improvements
- Various UI/UX improvements

All Cursor CLI features preserved:
- Multi-provider support (Claude + Cursor)
- Model override capabilities
- Phase model configuration
- Provider tabs in settings

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 01:22:18 +01:00
Kacper
853292af45 refactor(cursor): separate components and add permissions skeleton 2025-12-30 17:54:02 +01:00
Kacper
3c6736bc44 feat(cursor): Enhance Cursor tool handling with a registry and improved processing
Introduced a registry for Cursor tool handlers to streamline the processing of various tool calls, including read, write, edit, delete, grep, ls, glob, semantic search, and read lints. This refactor allows for better organization and normalization of tool inputs and outputs.

Additionally, updated the CursorToolCallEvent interface to accommodate new tool calls and their respective arguments. Enhanced logging for raw events and unrecognized tool call structures for improved debugging.

Affected files:
- cursor-provider.ts: Added CURSOR_TOOL_HANDLERS and refactored tool call processing.
- log-parser.ts: Updated tool categories and added summaries for new tools.
- cursor-cli.ts: Expanded CursorToolCallEvent interface to include new tool calls.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2025-12-30 17:35:39 +01:00
Kacper
dac916496c feat(server): Implement Cursor CLI permissions management
Added new routes and handlers for managing Cursor CLI permissions, including:
- GET /api/setup/cursor-permissions: Retrieve current permissions configuration and available profiles.
- POST /api/setup/cursor-permissions/profile: Apply a predefined permission profile (global or project).
- POST /api/setup/cursor-permissions/custom: Set custom permissions for a project.
- DELETE /api/setup/cursor-permissions: Delete project-level permissions, reverting to global settings.
- GET /api/setup/cursor-permissions/example: Provide an example config file for a specified profile.

Also introduced a new service for handling Cursor CLI configuration files and updated the UI to support permissions management.

Affected files:
- Added new routes in index.ts and cursor-config.ts
- Created cursor-config-service.ts for permissions management logic
- Updated UI components to display and manage permissions

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2025-12-30 17:08:18 +01:00
Web Dev Cody
847a8ff327 Merge pull request #306 from Waaiez/fix/linux-claude-usage
fix: add Linux support for Claude usage service
2025-12-30 10:13:50 -05:00
Web Dev Cody
504c19aef5 Merge pull request #326 from andydataguy/fix/windows-orphaned-server-processes
fix(windows): properly kill server process tree on app quit
2025-12-30 10:07:10 -05:00
Web Dev Cody
ed2da7932c Merge pull request #327 from casiusss/fix/backlog-plan-json-format
fix: restore correct JSON format for backlog plan prompt
2025-12-30 10:05:56 -05:00
Kacper
078ab943a8 fix(server): Add explicit JSON response instructions for Cursor prompts
Cursor was writing JSON to files instead of returning it in the response.
Added clear instructions to all Cursor prompts:
1. DO NOT write any files
2. Return ONLY raw JSON in the response
3. No explanations, no markdown, just JSON

Affected routes:
- generate-spec.ts
- generate-features-from-spec.ts
- validate-issue.ts
- generate-suggestions.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:44:50 +01:00
Kacper
948fdb6352 refactor(server): Use shared JSON extractor in feature and plan parsing
Update parseAndCreateFeatures and parsePlanResponse to use the shared
extractJson/extractJsonWithArray utilities instead of manual regex
parsing for more robust and consistent JSON extraction from AI responses.

- parse-and-create-features.ts: Use extractJsonWithArray for features
- generate-plan.ts: Use extractJson with requiredKey for backlog plans

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:36:17 +01:00
Kacper
b0f83b7c76 feat(server): Add readOnly mode to Cursor provider for read-only operations
Adds a readOnly option to ExecuteOptions that controls whether the
Cursor CLI runs with --force flag (allows edits) or without (suggest-only).

Read-only routes now pass readOnly: true:
- generate-spec.ts, generate-features-from-spec.ts (we write files ourselves)
- validate-issue.ts, generate-suggestions.ts (analysis only)
- describe-file.ts, describe-image.ts (description only)
- generate-plan.ts, enhance.ts (text generation only)

Routes that implement features (auto-mode-service, agent-service) keep
the default (readOnly: false) to allow file modifications.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:31:35 +01:00
Kacper
38d0e4103a feat(server): Add Cursor provider routing to spec generation routes
Add Cursor model support to generate-spec.ts and generate-features-from-spec.ts
routes, allowing them to use Cursor models when configured in phaseModels settings.

- Both routes now detect Cursor models via isCursorModel()
- Route to ProviderFactory for Cursor models, Claude SDK for Claude models
- Use resolveModelString() for proper model ID resolution
- Extract JSON from Cursor responses using shared json-extractor utility

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:26:40 +01:00
Kacper
19016f03d7 refactor(server): Extract JSON extraction utility to shared module
Created libs/server/src/lib/json-extractor.ts with reusable JSON
extraction utilities for parsing AI responses:

- extractJson<T>(): Multi-strategy JSON extraction
- extractJsonWithKey<T>(): Extract with required key validation
- extractJsonWithArray<T>(): Extract with array property validation

Strategies (tried in order):
1. JSON in ```json code block
2. JSON in ``` code block
3. Find JSON object by matching braces (with optional required key)
4. Find any JSON object by matching braces
5. First { to last }
6. Parse entire response

Updated:
- generate-suggestions.ts: Use extractJsonWithArray('suggestions')
- validate-issue.ts: Use extractJson()

Both files now use the shared utility instead of local implementations,
following DRY principle.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:18:45 +01:00
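A condensed sketch of the strategy order above follows. The exported names mirror the commit message, but the bodies are simplified assumptions; the real json-extractor.ts matches braces more carefully.

```typescript
// Try parsing a candidate string, optionally requiring a key to be present.
function tryParse<T>(candidate: string, requiredKey?: string): T | null {
  try {
    const parsed: unknown = JSON.parse(candidate);
    if (!requiredKey) return parsed as T;
    if (parsed && typeof parsed === "object" && requiredKey in parsed) return parsed as T;
  } catch {
    // fall through to the next strategy
  }
  return null;
}

export function extractJson<T>(text: string, requiredKey?: string): T | null {
  const candidates: string[] = [];

  // 1-2. Fenced code blocks, with or without a language tag.
  const fence = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  if (fence) candidates.push(fence[1]);

  // 3. A span anchored at the required key, if one was requested.
  if (requiredKey) {
    const keyIndex = text.indexOf(`"${requiredKey}"`);
    const start = keyIndex === -1 ? -1 : text.lastIndexOf("{", keyIndex);
    if (start !== -1) candidates.push(text.slice(start, text.lastIndexOf("}") + 1));
  }

  // 4-6. Widest first-{ to last-} span, then the raw response.
  const first = text.indexOf("{");
  const last = text.lastIndexOf("}");
  if (first !== -1 && last > first) candidates.push(text.slice(first, last + 1));
  candidates.push(text);

  for (const candidate of candidates) {
    const parsed = tryParse<T>(candidate, requiredKey);
    if (parsed !== null) return parsed;
  }
  return null;
}

export function extractJsonWithArray<T>(text: string, arrayKey: string): T | null {
  const parsed = extractJson<Record<string, unknown>>(text, arrayKey);
  return parsed && Array.isArray(parsed[arrayKey]) ? (parsed as unknown as T) : null;
}
```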
Kacper
26e4ac0d2f fix(suggestions): Improve JSON extraction for Cursor responses
Cursor responses may include text after the JSON object, causing
JSON.parse to fail. Added multi-strategy extraction similar to
validate-issue.ts:

1. Try extracting from ```json code block
2. Try extracting from ``` code block
3. Try finding {"suggestions" and matching braces
4. Try finding any JSON object with suggestions array

Uses bracket counting to find the correct closing brace.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:14:57 +01:00
Kacper
efd9a1b7d9 feat(suggestions): Wire to phaseModels.enhancementModel with Cursor support
The suggestions generation route (Feature Enhancement in UI) was not
reading from phaseModels settings and always used the default haiku model.

Changes:
- Read enhancementModel from phaseModels settings
- Add provider routing for Cursor vs Claude models
- Pass model to createSuggestionsOptions for Claude SDK
- For Cursor, include JSON schema in prompt and use ProviderFactory

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:11:34 +01:00
Stephan Rieche
968d889346 fix: restore correct JSON format for backlog plan prompt
The backlog plan system prompt was using an incorrect JSON format that didn't
match the BacklogPlanResult interface. This caused the plan generation to
complete but produce no visible results.

Issue:
- Prompt specified: { "plan": { "add": [...], "update": [...], "delete": [...] } }
- Code expected: { "changes": [...], "summary": "...", "dependencyUpdates": [...] }

Fix:
- Restored original working format with "changes" array
- Each change has: type ("add"|"update"|"delete"), feature, reason
- Matches BacklogPlanResult and BacklogChange interfaces exactly

Impact:
- Plan button on Kanban board will now generate and display plans correctly
- AI responses will be properly parsed and shown in review dialog

Testing:
- All 845 tests passing
- Verified format matches original hardcoded prompt from upstream

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 15:09:18 +01:00
Kacper
ed66fdd57d fix(cursor): Pass prompt via stdin to avoid shell escaping issues
When passing file content (containing TypeScript code) to cursor-agent via
WSL, bash was interpreting shell metacharacters like $(), backticks, etc.
as command substitution, causing errors like "/bin/bash: typescript\r':
command not found".

Changes:
- subprocess.ts: Add stdinData option to SubprocessOptions interface
- subprocess.ts: Write stdinData to stdin when provided
- cursor-provider.ts: Extract prompt text separately and pass via stdin
- cursor-provider.ts: Use '-' as prompt arg to indicate reading from stdin

This ensures file content with code examples is passed safely without
shell interpretation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 15:07:08 +01:00
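Passing the prompt over stdin, as this fix describes, avoids any shell interpretation of the prompt text. The sketch below is illustrative: the `stdinData` option name and the `-` prompt argument come from the commit, while the wrapper itself is an assumption.

```typescript
import { spawn } from "node:child_process";

interface SubprocessOptions {
  command: string;
  args: string[];
  stdinData?: string;
}

// Spawn without a shell and write the prompt to stdin, so metacharacters
// like $() and backticks inside file content are never interpreted.
function runSubprocess({ command, args, stdinData }: SubprocessOptions): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args);
    let output = "";
    child.stdout.on("data", (chunk) => (output += chunk.toString()));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(output) : reject(new Error(`exit code ${code}`))
    );
    if (stdinData !== undefined) {
      child.stdin.write(stdinData); // prompt text, including code snippets, goes here
    }
    child.stdin.end();
  });
}

// Usage: '-' tells the CLI to read the prompt from stdin (per the commit above).
runSubprocess({ command: "cursor-agent", args: ["-"], stdinData: "Describe this file: ..." });
```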
Kacper
34e51ddc3d feat(server): Add Cursor provider support for describe-file and describe-image routes
- describe-file.ts: Route to Cursor provider when using Cursor models (composer-1, etc.)
- describe-image.ts: Route to Cursor provider with image path context for Cursor models
- auto-mode-service.ts: Fix logging to use console.log instead of this.logger

Both routes now detect Cursor models using isCursorModel() and use
ProviderFactory.getProviderForModel() to get the appropriate provider
instead of always using the Claude SDK.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:59:25 +01:00
Kacper
68cefe43fb fix(ui): Add phaseModels to localStorage persistence
phaseModels was missing from the partialize() function, causing
it to reset to defaults on app restart. Now properly persisted
alongside other settings like enhancementModel and validationModel.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:46:11 +01:00
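The fix above hinges on the store's `partialize()` whitelist. A minimal sketch, assuming a zustand store with the persist middleware (field names beyond those in the commit are assumptions):

```typescript
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface AppState {
  enhancementModel: string;
  validationModel: string;
  phaseModels: Record<string, string>;
  setPhaseModel: (phase: string, model: string) => void;
}

export const useAppStore = create<AppState>()(
  persist(
    (set) => ({
      enhancementModel: "haiku",
      validationModel: "sonnet",
      phaseModels: {},
      setPhaseModel: (phase, model) =>
        set((state) => ({ phaseModels: { ...state.phaseModels, [phase]: model } })),
    }),
    {
      name: "app-store",
      // Only the listed fields are written to localStorage; omitting
      // phaseModels here was the bug the commit above fixes.
      partialize: (state) => ({
        enhancementModel: state.enhancementModel,
        validationModel: state.validationModel,
        phaseModels: state.phaseModels,
      }),
    }
  )
);
```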
Kacper
d6a1c08952 fix(ui): Sync phaseModels to server when changed
Previously, phaseModels only persisted to localStorage but the server
reads from settings.json file. Now setPhaseModel/setPhaseModels/resetPhaseModels
call syncSettingsToServer() to keep server-side settings in sync.

Also added phaseModels to the syncSettingsToServer() updates object.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:40:20 +01:00
Kacper
fd7c22a457 feat(server): Wire analyzeProject to use phaseModels.projectAnalysisModel
Read model from settings.phaseModels.projectAnalysisModel instead of
hardcoded DEFAULT_MODELS.claude fallback. Falls back to
DEFAULT_PHASE_MODELS if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:24:18 +01:00
Kacper
0798a64cd6 feat(server): Wire generate-plan to use phaseModels.backlogPlanningModel
Read model from settings.phaseModels.backlogPlanningModel instead of
hardcoded 'sonnet' fallback. Still supports per-call override via model
parameter. Falls back to DEFAULT_PHASE_MODELS if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:23:12 +01:00
Kacper
fcba327fdb feat(server): Wire generate-features-from-spec to use phaseModels.featureGenerationModel
Pass model from settings.phaseModels.featureGenerationModel to
createFeatureGenerationOptions(). Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:22:14 +01:00
Kacper
4d69d04e2b feat(server): Wire generate-spec to use phaseModels.specGenerationModel
Pass model from settings.phaseModels.specGenerationModel to
createSpecGenerationOptions(). Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:21:17 +01:00
Kacper
f43e90f2d2 feat(server): Wire describe-image to use phaseModels.imageDescriptionModel
Replace hardcoded CLAUDE_MODEL_MAP.haiku with configurable model from
settings.phaseModels.imageDescriptionModel. Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:20:06 +01:00
Kacper
ac0d4a556a feat(server): Wire describe-file to use phaseModels.fileDescriptionModel
Replace hardcoded CLAUDE_MODEL_MAP.haiku with configurable model from
settings.phaseModels.fileDescriptionModel. Falls back to DEFAULT_PHASE_MODELS
if settings unavailable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:19:03 +01:00
Kacper
2be0e7d5f0 fix(ui): Use phaseModels.validationModel instead of legacy field
Update use-issue-validation hook to use the new phaseModels structure
for validation model selection instead of deprecated validationModel field.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:17:53 +01:00
Kacper
24599e0b8c feat(server): Add settings migration for phaseModels
- Add migratePhaseModels() to handle legacy enhancementModel/validationModel fields
- Deep merge phaseModels in updateGlobalSettings()
- Export PhaseModelConfig, PhaseModelKey, and DEFAULT_PHASE_MODELS from types
- Backwards compatible: legacy fields migrate to phaseModels structure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:16:53 +01:00
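The backwards-compatible migration described above might look roughly like the sketch below: legacy top-level fields are folded into `phaseModels`, and an explicit `phaseModels` entry wins. Default values and the settings shape are assumptions.

```typescript
type PhaseModels = Record<string, string>;

interface LegacySettings {
  enhancementModel?: string; // deprecated flat field
  validationModel?: string; // deprecated flat field
  phaseModels?: PhaseModels;
}

const DEFAULT_PHASE_MODELS: PhaseModels = {
  enhancementModel: "haiku",
  validationModel: "sonnet",
};

export function migratePhaseModels(settings: LegacySettings): PhaseModels {
  return {
    ...DEFAULT_PHASE_MODELS,
    // Legacy fields override defaults but lose to explicit phaseModels entries.
    ...(settings.enhancementModel && { enhancementModel: settings.enhancementModel }),
    ...(settings.validationModel && { validationModel: settings.validationModel }),
    ...settings.phaseModels,
  };
}
```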
Kacper
45d93f28bf fix(server): Improve Cursor CLI JSON response parsing
Add robust multi-strategy JSON extraction for Cursor validation responses:
- Strategy 1: Extract from ```json code blocks
- Strategy 2: Extract from ``` code blocks (no language)
- Strategy 3: Find JSON object directly in text (first { to last })
- Strategy 4: Parse entire response as JSON

This fixes silent failures when Cursor returns JSON in various formats.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 14:14:43 +01:00
Waaiez Kinnear
04aca1c8cb fix: add SIGTERM fallback for Linux Claude usage
On Linux, the ESC key doesn't exit the Claude CLI, causing a 30s timeout.
This fix:
1. Adds SIGTERM fallback 2s after ESC fails
2. Returns captured data on timeout instead of failing

Tested: ~19s on Linux instead of 30s timeout.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 13:24:45 +02:00
Anand (Andy) Houston
784d7fc059 fix(windows): use execSync for reliable process termination
Address code review feedback:
- Replace async spawn() with sync execSync() to ensure taskkill
  completes before app exits
- Add try/catch error handling for permission/invalid-PID errors
- Add helpful error logging for debugging

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 16:56:31 +08:00
Anand (Andy) Houston
d6705fbfb5 fix(windows): properly kill server process tree on app quit
On Windows, serverProcess.kill() doesn't reliably terminate Node.js
child processes. This causes orphaned node processes to hold onto
ports 3007/3008, preventing the app from starting on subsequent launches.

Use taskkill with /f /t flags to force-kill the entire process tree
on Windows, while keeping SIGTERM for macOS/Linux where it works correctly.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 16:47:29 +08:00
Web Dev Cody
c5ae9ad262 Merge pull request #325 from AutoMaker-Org/fix-mcp-bug
feat: enhance MCP server management and JSON import/export functionality
2025-12-30 01:51:46 -05:00
Test User
5a0ad75059 fix: improve MCP server update and synchronization handling
- Added rollback functionality for server updates on sync failure to maintain local state integrity.
- Enhanced logic for identifying newly added servers during addition and import processes, ensuring accurate pending sync tracking.
- Implemented duplicate server name validation during configuration to prevent errors in server management.
2025-12-30 01:47:55 -05:00
Test User
cf62dbbf7a feat: enhance MCP server management and JSON import/export functionality
- Introduced pending sync handling for MCP servers to improve synchronization reliability.
- Updated auto-test logic to skip servers pending sync, ensuring accurate testing.
- Enhanced JSON import/export to support both array and object formats, preserving server IDs.
- Added validation for server configurations during import to prevent errors.
- Improved error handling and user feedback for sync operations and server updates.
2025-12-30 01:32:43 -05:00
Web Dev Cody
a4d1a1497a Merge pull request #322 from casiusss/feat/customizable-prompts
feat: customizable prompts
2025-12-30 00:58:11 -05:00
Web Dev Cody
b798260491 Merge pull request #324 from illia1f/fix/kanban-card-ui
fix(kanban-card): jumping hover animation & drag overlay consistency
2025-12-30 00:44:27 -05:00
Web Dev Cody
1fcaa52f72 Merge pull request #321 from AutoMaker-Org/protect-api-with-api-key
adding more security to api endpoints to require api token for all ac…
2025-12-30 00:42:46 -05:00
Test User
46caae05d2 feat: improve test setup and authentication handling
- Added `dev:test` script to package.json for streamlined testing without file watching.
- Introduced `kill-test-servers` script to ensure no existing servers are running on test ports before executing tests.
- Enhanced Playwright configuration to use mock agent for tests, ensuring consistent API responses and disabling rate limiting.
- Updated various test files to include authentication steps and handle login screens, improving reliability and reducing flakiness in tests.
- Added `global-setup` for e2e tests to ensure proper initialization before test execution.
2025-12-30 00:06:27 -05:00
Kacper
39f2c8c9ff feat(ui): Enhance AI model handling with Cursor support
- Refactor model handling to support both Claude and Cursor models across various components.
- Introduce `stripProviderPrefix` utility for consistent model ID processing.
- Update `CursorProvider` to utilize `isCursorModel` for model validation.
- Implement model override functionality in GitHub issue validation and enhancement routes.
- Add `useCursorStatusInit` hook to initialize Cursor CLI status on app startup.
- Update UI components to reflect changes in model selection and validation processes.

This update improves the flexibility of AI model usage and enhances user experience by allowing quick model overrides.
2025-12-30 04:01:56 +01:00
Test User
59a6a23f9b feat: enhance test authentication and context navigation
- Added `authenticateForTests` utility to streamline API key authentication in tests, using a fallback for local testing.
- Updated context image test to include authentication step before navigation, ensuring proper session handling.
- Increased timeout for context view visibility to accommodate slower server responses.
- Introduced a test API key in the Playwright configuration for consistent testing environments.
2025-12-29 22:01:03 -05:00
Kacper
3d655c3298 feat(ui): Add ModelOverrideTrigger component for quick model overrides
- Add ModelOverrideTrigger with three variants: icon, button, inline
- Add useModelOverride hook for managing override state per phase
- Create shared components directory for reusable UI components
- Popover shows Claude + enabled Cursor models
- Visual indicator dot when model is overridden from global

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 02:13:46 +01:00
Kacper
2ba114931c feat(ui): Add Phase Models settings tab
- Add PhaseModelsSection with grouped phase configuration:
  - Quick Tasks: enhancement, file/image description
  - Validation Tasks: GitHub issue validation
  - Generation Tasks: spec, features, backlog, analysis
- Add PhaseModelSelector component showing Claude + Cursor models
- Add phaseModels state and actions to app-store
- Add 'phase-models' navigation item with Workflow icon

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 02:04:36 +01:00
Illia Filippov
88bb5b923f style(kanban-card): add transition effects to card wrapper classes for smoother animations 2025-12-30 02:01:13 +01:00
Kacper
a415ae6207 feat(types): Add PhaseModelConfig for per-phase AI model selection
- Add PhaseModelConfig interface with 8 configurable phases:
  - Quick tasks: enhancement, fileDescription, imageDescription
  - Validation: validationModel
  - Generation: specGeneration, featureGeneration, backlogPlanning, projectAnalysis
- Add PhaseModelKey type for type-safe access
- Add DEFAULT_PHASE_MODELS with sensible defaults
- Add phaseModels field to GlobalSettings
- Mark legacy enhancementModel/validationModel as deprecated
- Export new types from @automaker/types

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 01:55:21 +01:00
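A sketch of the shape this commit describes; the key names follow the bullet list above, while the default model strings are assumptions.

```typescript
export interface PhaseModelConfig {
  // Quick tasks
  enhancement: string;
  fileDescription: string;
  imageDescription: string;
  // Validation
  validationModel: string;
  // Generation
  specGeneration: string;
  featureGeneration: string;
  backlogPlanning: string;
  projectAnalysis: string;
}

// Type-safe access to a single phase setting.
export type PhaseModelKey = keyof PhaseModelConfig;

export const DEFAULT_PHASE_MODELS: PhaseModelConfig = {
  enhancement: "haiku",
  fileDescription: "haiku",
  imageDescription: "haiku",
  validationModel: "sonnet",
  specGeneration: "sonnet",
  featureGeneration: "sonnet",
  backlogPlanning: "sonnet",
  projectAnalysis: "sonnet",
};
```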
Stephan Rieche
504d9aa9d7 refactor: migrate AgentService to use centralized logger
Replace console.error calls with createLogger for consistent logging across
the AgentService. This improves debuggability and makes logger calls testable.

Changes:
- Add createLogger import from @automaker/utils
- Add private logger instance initialized with 'AgentService' prefix
- Replace all 7 console.error calls with this.logger.error
- Update test mocks to use vi.hoisted() for proper mock access
- Update settings-helpers test to create mockLogger inside vi.mock()

Test Impact:
- All 774 tests passing
- Logger error calls are now verifiable in tests
- Mock logger properly accessible via vi.hoisted() pattern

Resolves Gemini Code Assist suggestions:
- "Make logger mockable for test assertions"
- "Use logger instead of console.error in AgentService"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 01:43:27 +01:00
Illia Filippov
ab0cd95d9a refactor(kanban-card): switch from useSortable to useDraggable 2025-12-30 01:36:00 +01:00
Test User
4c65855140 feat: enhance authentication and session management tests
- Added comprehensive unit tests for authentication middleware, including session token validation, API key authentication, and cookie-based authentication.
- Implemented tests for session management functions such as creating, updating, archiving, and deleting sessions.
- Improved test coverage for queue management in session handling, ensuring robust error handling and validation.
- Introduced checks for session metadata and working directory validation to ensure proper session creation.
2025-12-29 19:35:09 -05:00
Test User
adfc353b2d feat: add middleware to enforce JSON Content-Type for API requests
- Introduced `requireJsonContentType` middleware to ensure that all POST, PUT, and PATCH requests have the Content-Type set to application/json.
- This enhancement improves security by preventing CSRF and content-type confusion attacks, ensuring only properly formatted requests are processed.
2025-12-29 19:21:56 -05:00
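A middleware along these lines could look like the sketch below, assuming an Express-style request pipeline (the actual server framework and handler shape may differ).

```typescript
import type { Request, Response, NextFunction } from "express";

const METHODS_REQUIRING_JSON = new Set(["POST", "PUT", "PATCH"]);

// Reject body-carrying requests that do not declare application/json,
// reducing CSRF and content-type confusion surface.
export function requireJsonContentType(req: Request, res: Response, next: NextFunction): void {
  if (!METHODS_REQUIRING_JSON.has(req.method)) {
    next();
    return;
  }
  const contentType = req.headers["content-type"] ?? "";
  if (!contentType.toLowerCase().includes("application/json")) {
    res.status(415).json({ error: "Content-Type must be application/json" });
    return;
  }
  next();
}

// Usage: app.use(requireJsonContentType) before the route handlers.
```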
Kacper
c1c2e706f0 chore: Remove temporary refactoring analysis document 2025-12-30 01:13:57 +01:00
Stephan Rieche
d5aea8355b refactor: improve code quality based on Gemini Code Assist suggestions
Applied three code quality improvements suggested by Gemini Code Assist:

1. **Replace nested ternary with map object (enhance.ts)**
   - Changed nested ternary operator to Record<EnhancementMode, string> map
   - Improves readability and maintainability
   - More declarative approach for system prompt selection

2. **Simplify handleToggle logic (prompt-customization-section.tsx)**
   - Removed redundant if/else branches
   - Both branches were calculating the same value
   - Cleaner, more concise implementation

3. **Add type safety to updatePrompt with generics (prompt-customization-section.tsx)**
   - Changed field parameter from string to keyof NonNullable<PromptCustomization[T]>
   - Prevents runtime errors from misspelled field names
   - Improved developer experience with autocomplete

All tests passing (774/774). Builds successful.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 01:09:25 +01:00
Test User
e498f39153 fix: update node-gyp repository URL in package-lock.json
- Changed the resolved URL for the @electron/node-gyp module from SSH to HTTPS for improved accessibility and compatibility.
2025-12-29 19:07:25 -05:00
Kacper
4157e11bba refactor(cursor): Extend CliProvider base class
Refactor CursorProvider to extend CliProvider instead of BaseProvider:
- Implement abstract methods: getCliName, getSpawnConfig, buildCliArgs, normalizeEvent
- Override detectCli() for Cursor-specific versions directory check
- Override mapError() for Cursor-specific error codes
- Override getInstallInstructions() for Cursor-specific guidance
- Reuse base class buildSubprocessOptions() for WSL/NPX handling

Removes ~200 lines of duplicated infrastructure code.
All existing behavior preserved.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 01:07:08 +01:00
Test User
d66259b411 feat: enhance authentication and session management
- Added NODE_ENV variable for development in docker-compose.override.yml.example.
- Changed default NODE_ENV to development in Dockerfile.
- Implemented fetchWsToken function to retrieve short-lived WebSocket tokens for secure authentication in TerminalPanel.
- Updated connect function to use wsToken for WebSocket connections when API key is not available.
- Introduced verifySession function to validate session status after login and on app load, ensuring session integrity.
- Modified RootLayoutContent to verify session cookie validity and redirect to login if the session is invalid or expired.

These changes improve the security and reliability of the authentication process.
2025-12-29 19:06:11 -05:00
Kacper
677f441cd1 feat(providers): Create CliProvider abstract base class
Add reusable infrastructure for CLI-based AI providers:
- SpawnStrategy types ('wsl' | 'npx' | 'direct' | 'cmd')
- CliSpawnConfig interface for platform-specific configuration
- Common CLI path detection (PATH, common locations)
- WSL support for Windows (reuses @automaker/platform utilities)
- NPX strategy for npm-installed CLIs
- Strategy-aware subprocess spawning with JSONL streaming
- Error mapping infrastructure with recovery suggestions

Reuses existing utilities:
- spawnJSONLProcess, WSL utils from @automaker/platform
- createLogger, isAbortError from @automaker/utils

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 01:02:07 +01:00
Illia Filippov
e556521c8d fix(kanban-card): jumping hover animation & drag overlay consistency 2025-12-30 00:51:52 +01:00
Kacper
dc8c06e447 feat(providers): Add provider registry pattern
Replace hardcoded switch statements with dynamic registry pattern.
Providers register with factory using registerProvider() function.

New features:
- registerProvider() function for dynamic registration
- canHandleModel() callback for model routing
- priority field for controlling match order
- aliases support (e.g., 'anthropic' -> 'claude')
- getRegisteredProviderNames() for introspection

Adding new providers now only requires calling registerProvider()
with a factory function and model matching logic.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:42:17 +01:00
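The registry pattern described above can be sketched as follows; the registration fields mirror the commit's description, while the provider interface and lookup details are assumptions.

```typescript
interface Provider {
  executePrompt(prompt: string, model: string): Promise<string>;
}

interface ProviderRegistration {
  name: string;
  aliases?: string[];
  priority?: number;
  canHandleModel: (modelId: string) => boolean;
  create: () => Provider;
}

const registrations: ProviderRegistration[] = [];

export function registerProvider(registration: ProviderRegistration): void {
  registrations.push(registration);
  // Higher priority wins when several providers match the same model.
  registrations.sort((a, b) => (b.priority ?? 0) - (a.priority ?? 0));
}

export function getProviderForModel(modelId: string): Provider {
  const match = registrations.find((r) => r.canHandleModel(modelId));
  if (!match) throw new Error(`No provider registered for model: ${modelId}`);
  return match.create();
}

export function getRegisteredProviderNames(): string[] {
  return registrations.flatMap((r) => [r.name, ...(r.aliases ?? [])]);
}

// Adding a new provider then only needs a matcher and a factory, e.g.:
// registerProvider({ name: "cursor", canHandleModel: (m) => m.startsWith("cursor-"), create: () => new CursorProvider() });
```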
Stephan Rieche
e448d6d4e5 fix: restore detailed planning prompts and fix test suite
This commit fixes two issues introduced during prompt customization:

1. **Restored Full Planning Prompts from Main**
   - Lite Mode: Added "Silently analyze the codebase first" instruction
   - Spec Mode: Restored detailed task format rules, [TASK_START]/[TASK_COMPLETE] markers
   - Full Mode: Restored comprehensive SDD format with [PHASE_COMPLETE] markers
   - Fixed table structures (Files to Modify, Technical Context, Risks & Mitigations)
   - Ensured all critical instructions for Auto Mode functionality are preserved

2. **Fixed Test Suite (774 tests passing)**
   - Made getPlanningPromptPrefix() async-aware in all 11 planning tests
   - Replaced console.log/error mocks with createLogger mocks (settings-helpers, agent-service)
   - Updated test expectations to match restored prompts
   - Fixed variable hoisting issue in agent-service mock setup
   - Built prompts library to apply changes

The planning prompts now match the detailed, production-ready versions from main
branch, ensuring Auto Mode has all necessary instructions for proper task execution.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-30 00:40:01 +01:00
Kacper
55bd9b0dc7 refactor(cursor): Move stream dedup logic to CursorProvider
Move Cursor-specific duplicate text handling from auto-mode-service.ts
into CursorProvider.deduplicateTextBlocks() for cleaner separation.

This handles:
- Duplicate consecutive text blocks (same text twice in a row)
- Final accumulated text block (contains ALL previous text)

Also update REFACTORING-ANALYSIS.md with SpawnStrategy types for
future CLI providers (wsl, npx, direct, cmd).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:35:39 +01:00
Kacper
b76f09db2d refactor(types): Use ModelProvider type instead of hardcoded union
Replace 'claude' | 'cursor' literal unions with ModelProvider type
from @automaker/types for better extensibility when adding new providers.

- Update ProviderFactory.getProviderNameForModel() return type
- Update RunningFeature.provider type in auto-mode-service
- Update getRunningAgents() return type

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:25:37 +01:00
Kacper
35fa822c32 fix: Update section title in Claude setup step for clarity
- Changed the section title from "API Key Setup" to "Claude Code Setup" to better reflect the purpose of the section and improve user understanding.
2025-12-30 00:22:20 +01:00
Kacper
a842d1b917 fix(tests): Update provider-factory tests for CursorProvider
- Update test assertions from expecting 1 provider to 2
- Add CursorProvider import and tests for Cursor model routing
- Add tests for Cursor models (cursor-auto, cursor-sonnet-4.5, etc.)
- Update tests for gpt-5.2/grok/gemini-3-pro as valid Cursor models
- Add tests for checkAllProviders to expect cursor status
- Add tests for getProviderByName with 'cursor'

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-30 00:22:07 +01:00
Stephan Rieche
65a09b2d38 fix: add index signature to planningPrompts for TypeScript
Add Record<string, string> type to planningPrompts object to fix TypeScript
error when using string as index.

Error fixed:
Element implicitly has an 'any' type because expression of type 'string'
can't be used to index type '{ lite: string; ... }'.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-29 23:42:00 +01:00
Test User
469ee5ff85 security: harden API authentication system
- Use crypto.timingSafeEqual() for API key validation (prevents timing attacks)
- Make WebSocket tokens single-use (invalidated after first validation)
- Add AUTOMAKER_HIDE_API_KEY env var to suppress API key banner in logs
- Add rate limiting to login endpoint (5 attempts/minute/IP)
- Update client to fetch short-lived wsToken for WebSocket auth
  (session tokens no longer exposed in URLs)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 17:35:55 -05:00
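The timing-safe comparison mentioned above can be sketched as below. Hashing both values first yields equal-length buffers, which `crypto.timingSafeEqual()` requires, and avoids leaking key length through an early length check; the function name is illustrative.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

export function isValidApiKey(provided: string, expected: string): boolean {
  // Hash both sides so the buffers always have the same length.
  const providedHash = createHash("sha256").update(provided).digest();
  const expectedHash = createHash("sha256").update(expected).digest();
  return timingSafeEqual(providedHash, expectedHash);
}
```

A similar idea applies to the single-use WebSocket tokens: validate the token, then immediately remove it from the store so it cannot be replayed.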
Stephan Rieche
04e6ed30a2 refactor: use centralized logger instead of console.log
Replace all console.log/console.error calls in settings-helpers.ts with
the centralized logger from @automaker/utils for consistency.

Changes:
- Import createLogger from @automaker/utils
- Create logger instance: createLogger('SettingsHelper')
- Replace console.log → logger.info
- Replace console.error → logger.error

Benefits:
- Consistent logging across the codebase
- Better log formatting and structure
- Easier to filter/control log output
- Follows existing patterns in other services

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-29 23:35:22 +01:00
Stephan Rieche
ec3d78922e fix: remove prompt caching to enable hot reload of custom prompts
Remove caching from Auto Mode and Agent services to allow custom prompts
to take effect immediately without requiring app restart.

Changes:
- Auto Mode: Load prompts on every feature execution instead of caching
- Agent Service: Load prompts on every chat message instead of caching
- Remove unused class fields: planningPrompts, agentSystemPrompt

This makes custom prompts work consistently across all features:
✓ Auto Mode - hot reload enabled
✓ Agent Runner - hot reload enabled
✓ Backlog Plan - already had hot reload
✓ Enhancement - already had hot reload

Users can now modify prompts in Settings and see changes immediately
without restarting the app.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-29 23:25:15 +01:00
Stephan Rieche
bc0ef47323 feat: add customizable AI prompts with enhanced UX
Add comprehensive prompt customization system allowing users to customize
all AI prompts (Auto Mode, Agent Runner, Backlog Plan, Enhancement) through
the Settings UI.

## Features

### Core Customization System
- New TypeScript types for prompt customization with enabled flag
- CustomPrompt interface with value and enabled state
- Prompts preserved even when disabled (no data loss)
- Merged prompt system (custom overrides defaults when enabled)
- Persistent storage in ~/.automaker/settings.json

### Settings UI
- New "Prompt Customization" section in Settings
- 4 tabs: Auto Mode, Agent, Backlog Plan, Enhancement
- Toggle-based editing (read-only default → editable custom)
- Dynamic textarea height based on prompt length (120px-600px)
- Visual state indicators (Custom/Default labels)

### Warning System
- Critical prompt warnings for Backlog Plan (JSON format requirement)
- Field-level warnings when editing critical prompts
- Info banners for Auto Mode planning markers
- Color-coded warnings (blue=info, amber=critical)

### Backend Integration
- Auto Mode service loads prompts from settings
- Agent service loads prompts from settings
- Backlog Plan service loads prompts from settings
- Enhancement endpoint loads prompts from settings
- Settings sync includes promptCustomization field

### Files Changed
- libs/types/src/prompts.ts - Type definitions
- libs/prompts/src/defaults.ts - Default prompt values
- libs/prompts/src/merge.ts - Merge utilities
- apps/ui/src/components/views/settings-view/prompts/ - UI components
- apps/server/src/lib/settings-helpers.ts - getPromptCustomization()
- All service files updated to use customizable prompts

## Technical Details

Prompt storage format:
```json
{
  "promptCustomization": {
    "autoMode": {
      "planningLite": {
        "value": "Custom prompt text...",
        "enabled": true
      }
    }
  }
}
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-29 23:17:20 +01:00
Test User
579246dc26 docs: add API security hardening design plan
Security improvements identified for the protect-api-with-api-key branch:
- Use short-lived wsToken for WebSocket auth (not session tokens in URLs)
- Add AUTOMAKER_HIDE_API_KEY env var to suppress console logging
- Add rate limiting to login endpoint (5 attempts/min/IP)
- Use timing-safe comparison for API key validation
- Make WebSocket tokens single-use

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 17:17:16 -05:00
Kacper
4115110c06 feat: Update ProfileForm to conditionally display enabled Cursor models
- Integrated useAppStore to fetch enabledCursorModels for dynamic rendering of Cursor model selection.
- Added a message to inform users when no Cursor models are enabled, guiding them to settings for configuration.
- Refactored the model selection logic to filter available models based on the enabled list, enhancing user experience and clarity.
2025-12-29 22:21:01 +01:00
Kacper
8e10f522c0 feat: Enhance Cursor model selection and profile handling
- Updated AddFeatureDialog to support both Cursor and Claude profiles, allowing for dynamic model and thinking level selection based on the chosen profile.
- Modified ModelSelector to filter available Cursor models based on global settings and display a warning if the Cursor CLI is not available.
- Enhanced ProfileQuickSelect to handle both profile types and improve selection logic for Cursor profiles.
- Refactored CursorSettingsTab to manage global settings for enabled Cursor models and default model selection, streamlining the configuration process.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2025-12-29 22:17:07 +01:00
Test User
d68de99c15 adding more security to API endpoints to require an API token for all access, no bypassing 2025-12-29 16:16:28 -05:00
Kacper
fa23a7b8e2 fix: Allow testing API keys without saving first
- Add optional apiKey parameter to verifyClaudeAuth endpoint
- Backend uses provided key when available, falls back to stored key
- Frontend passes current input value to test unsaved keys
- Add input validation before testing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 21:39:59 +01:00
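The fallback described above ("use the provided key when available, otherwise the stored one") is small enough to sketch; the handler shape and `testAnthropicKey` helper are hypothetical, only the fallback behaviour comes from the commit.

```typescript
// Hypothetical handler shape illustrating the "provided key, else stored key" fallback.
declare function testAnthropicKey(key: string): Promise<{ ok: boolean; error?: string }>;

async function verifyClaudeAuthHandler(
  body: { apiKey?: string },
  storedKey: string | undefined
): Promise<{ ok: boolean; error?: string }> {
  const candidate = body.apiKey?.trim() || storedKey;
  if (!candidate) {
    return { ok: false, error: 'No API key provided or stored' };
  }
  return testAnthropicKey(candidate);
}
```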
Web Dev Cody
57b7f92e61 Merge pull request #312 from Shevanio/feat/improve-rate-limit-error-handling
feat: Improve rate limit error handling with user-friendly messages
2025-12-29 15:36:06 -05:00
Kacper
6c3d3aa111 feat: Unify AI provider settings tabs with consistent design
- Add CursorCliStatus component matching Claude's card design
- Add authentication status display to Claude CLI status card
- Add skeleton loading states for both Claude and Cursor tabs
- Add usage info banners (Primary Provider / Board View Only)
- Remove duplicate auth status from API Keys section
- Update Model Configuration card to use unified styling

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 21:30:06 +01:00
Shirone
dd822c41c5 Merge pull request #314 from AutoMaker-Org/feat/enchance-agent-runner
feat: enhance agent runner ui
2025-12-29 17:18:42 +01:00
Shirone
7016985bf2 chore: format 2025-12-29 16:16:24 +01:00
Shirone
67a6c10edc refactor: improve code readability in RunningAgentsView component
- Reformatted JSX for better clarity and consistency.
- Enhanced the layout of the feature description prop for improved maintainability.
2025-12-29 16:10:33 +01:00
Kacper
0317dadcaf feat: address pr comments 2025-12-29 16:03:27 +01:00
shevanio
625fddb71e test: update claude-provider test to match new error logging format 2025-12-29 15:39:48 +01:00
Kacper
63b0ccd035 feat: enhance agent runner ui 2025-12-29 15:30:11 +01:00
shevanio
19aa86c027 refactor: improve error handling code quality
Address code review feedback from Gemini Code Assist:

1. Reduce duplication in ClaudeProvider catch block
   - Consolidate error creation logic into single path
   - Use conditional message building instead of duplicate blocks
   - Improves maintainability and follows DRY principle

2. Better separation of concerns in error utilities
   - Move default retry-after (60s) logic from extractRetryAfter to classifyError
   - extractRetryAfter now only extracts explicit values
   - classifyError provides default using nullish coalescing (?? 60)
   - Clearer single responsibility for each function

3. Update test to match new behavior
   - extractRetryAfter now returns undefined for rate limits without explicit value
   - Default value is tested in classifyError tests instead

All 162 tests still passing 
Builds successfully with no TypeScript errors 
2025-12-29 13:50:08 +01:00
shevanio
76ad6667f1 feat: improve rate limit error handling with user-friendly messages
- Add rate_limit error type to ErrorInfo classification
- Implement isRateLimitError() and extractRetryAfter() utilities
- Enhance ClaudeProvider error handling with actionable messages
- Add comprehensive test coverage (8 new tests, 162 total passing)

**Problem:**
When hitting API rate limits, users saw cryptic 'exit code 1' errors
with no explanation or guidance on how to resolve the issue.

**Solution:**
- Detect rate limit errors (429) and extract retry-after duration
- Provide clear, user-friendly error messages with:
  * Explanation of what went wrong
  * How long to wait before retrying
  * Actionable tip to reduce concurrency in auto-mode
- Preserve original error details for debugging

**Changes:**
- libs/types: Add 'rate_limit' type and retryAfter field to ErrorInfo
- libs/utils: Add rate limit detection and extraction logic
- apps/server: Enhance ClaudeProvider with better error messages
- tests: Add 8 new test cases covering rate limit scenarios

**Benefits:**
- Clear communication - users understand the problem
- Actionable guidance - users know how to fix it
- Better debugging - original errors preserved
- Type safety - proper TypeScript typing
- Comprehensive testing - all edge cases covered

See CHANGELOG_RATE_LIMIT_HANDLING.md for detailed documentation.
2025-12-29 13:50:08 +01:00
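The two commits above split rate-limit handling into detection/extraction utilities plus a classifier that owns the 60s default. A small TypeScript sketch of that division of responsibility; the exact signatures, regexes, and `ErrorInfo` fields beyond those named in the commits are assumptions.

```typescript
interface ErrorInfo {
  type: 'rate_limit' | 'unknown';
  message: string;
  retryAfter?: number; // seconds
}

// Only extracts an explicit value; returns undefined when none is present.
function extractRetryAfter(raw: string): number | undefined {
  const match = raw.match(/retry[- ]after[:\s]+(\d+)/i);
  return match ? Number(match[1]) : undefined;
}

function isRateLimitError(raw: string): boolean {
  return /\b429\b|rate limit/i.test(raw);
}

// classifyError owns the default: 60s when the provider gave no retry-after hint.
function classifyError(raw: string): ErrorInfo {
  if (isRateLimitError(raw)) {
    return {
      type: 'rate_limit',
      message: 'Rate limit hit. Wait before retrying, or reduce auto-mode concurrency.',
      retryAfter: extractRetryAfter(raw) ?? 60,
    };
  }
  return { type: 'unknown', message: raw };
}
```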
Web Dev Cody
25c9259b50 Merge pull request #286 from mzubair481/feature/mcp-server-support
feat: add MCP server support
2025-12-28 22:42:12 -05:00
Test User
0e1e855cc5 feat: enhance security measures for MCP server interactions
- Restricted CORS to localhost origins to prevent remote code execution (RCE) attacks.
- Updated MCP server configuration handling to enforce security warnings when adding or importing servers.
- Introduced a SecurityWarningDialog to inform users about potential risks associated with server commands and configurations.
- Ensured that only serverId is accepted for testing server connections, preventing arbitrary command execution.

These changes improve the overall security posture of the MCP server management and usage.
2025-12-28 22:38:29 -05:00
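The CORS restriction above can be illustrated with an origin allowlist check. This sketch assumes an Express-style `cors` middleware; the actual server wiring and regex may differ.

```typescript
import cors from 'cors';

// Allow only local origins so a malicious web page on another host
// cannot drive the MCP endpoints from the browser.
const LOCALHOST_ORIGIN = /^https?:\/\/(localhost|127\.0\.0\.1)(:\d+)?$/;

const corsMiddleware = cors({
  origin(origin, callback) {
    // Non-browser clients (no Origin header) are allowed through.
    if (!origin || LOCALHOST_ORIGIN.test(origin)) {
      callback(null, true);
    } else {
      callback(new Error(`Origin not allowed: ${origin}`));
    }
  },
});
```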
Shirone
69a847fe8c Merge pull request #310 from AutoMaker-Org/chore/remove-duplicate-lock-file
chore: remove pnpm lock file
2025-12-28 23:44:01 +01:00
Kacper
6f2402e16d chore: add pnpm-lock.yaml and yarn.lock to .gitignore 2025-12-28 23:43:44 +01:00
Kacper
bacd4f385d chore: remove pnpm lock file 2025-12-28 23:41:26 +01:00
Shirone
cc42b79fbc Merge pull request #308 from AutoMaker-Org/feat/github-issue-comments
feat: add GitHub issue comments display and AI validation integration
2025-12-28 23:00:06 +01:00
Shirone
eaeb503ee7 Merge pull request #309 from illia1f/docs/contributing-security-issues
docs: update security vulnerability reporting to Discord
2025-12-28 22:50:43 +01:00
Kacper
d028932dc8 chore: remove debug logs from issue validation
Remove console.log and logger.debug calls that were added during
development. Keep essential logger.info and logger.error calls.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 22:48:32 +01:00
Kacper
6bdac230df fix: address PR review comments for GitHub issue comments feature
- Use GraphQL variables instead of string interpolation for safety
- Add cursor validation to prevent potential GraphQL injection
- Add 30s timeout for spawned gh process to prevent hanging
- Export ValidationComment and ValidationLinkedPR from validation-schema
- Remove duplicate interface definitions from validate-issue.ts
- Use ISO date format instead of locale-dependent toLocaleDateString()
- Reset error state when issue is deselected in useIssueComments hook

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 22:40:37 +01:00
Illia Filippov
43728e451e docs: clarify security vulnerability reporting instructions 2025-12-28 22:36:27 +01:00
Illia Filippov
b93b59951b docs: update security vulnerability reporting method in contributing guide 2025-12-28 22:31:25 +01:00
Shirone
b5a8ed229c Merge pull request #302 from AutoMaker-Org/fix/docker-build
refactor: update Dockerfiles for server and UI to streamline dependen…
2025-12-28 22:25:26 +01:00
Kacper
97ae4b6362 feat: enhance AI validation with PR analysis and UI improvements
- Replace HTML checkbox with proper UI Checkbox component
- Add system prompt instructions for AI to check PR changes via gh CLI
- Add PRAnalysis schema field with recommendation (wait_for_merge, pr_needs_work, no_pr)
- Show detailed PR analysis badge in validation dialog
- Hide "Convert to Task" button when PR fix is ready (wait_for_merge)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 22:22:14 +01:00
Test User
5a1e53ca7c docs: add Contribution License Agreement to contributing guide 2025-12-28 16:19:25 -05:00
Web Dev Cody
876d383936 Merge pull request #307 from illia1f/feature/add-contributing-md
docs: add comprehensive contributing guide
2025-12-28 16:16:55 -05:00
Kacper
96196f906f feat: add GitHub issue comments display and AI validation integration
- Add comments section to issue detail panel with lazy loading
- Fetch comments via GraphQL API with pagination (50 at a time)
- Include comments in AI validation analysis when checkbox enabled
- Pass linked PRs info to AI validation for context
- Add "Work in Progress" badge in validation dialog for open PRs
- Add debug logging for validation requests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 22:11:02 +01:00
Illia Filippov
0ee9313441 docs: update contributing guide with additional setup recommendations and formatting improvements 2025-12-28 22:01:23 +01:00
Illia Filippov
496ace8a8e docs: add comprehensive contributing guide 2025-12-28 21:48:29 +01:00
Kacper
0a21c11a35 chore: update Dockerfile to use Node.js 22 and improve health check
- Upgraded base and server images in Dockerfile from Node.js 20 to 22-alpine for better performance and security.
- Replaced wget with curl in the health check command for improved reliability.
- Enhanced README with detailed Docker deployment instructions, including configuration for API key and Claude CLI authentication, and examples for working with projects and GitHub CLI authentication.

This update ensures a more secure and efficient Docker setup for the application.
2025-12-28 20:53:35 +01:00
firstfloris
495af733da fix: auto-disable sandbox mode for cloud storage paths
The Claude CLI sandbox feature is incompatible with cloud storage
virtual filesystems (Dropbox, Google Drive, iCloud, OneDrive).
When a project is in a cloud storage location, sandbox mode is now
automatically disabled with a warning log to prevent process crashes.

Added:
- isCloudStoragePath() to detect cloud storage locations
- checkSandboxCompatibility() for graceful degradation
- 15 new tests for cloud storage detection and sandbox behavior
2025-12-28 20:45:44 +01:00
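A rough sketch of the detection and graceful-degradation helpers named above; the marker list and exact behaviour are assumptions based on the commit description.

```typescript
// Substring markers for common cloud-synced folders; the real detection
// may be more thorough (case handling, platform-specific paths, etc.).
const CLOUD_MARKERS = ['dropbox', 'google drive', 'googledrive', 'icloud', 'onedrive'];

function isCloudStoragePath(projectPath: string): boolean {
  const normalized = projectPath.toLowerCase();
  return CLOUD_MARKERS.some((marker) => normalized.includes(marker));
}

// Graceful degradation: disable sandbox with a warning instead of letting the process crash.
function checkSandboxCompatibility(projectPath: string, sandboxRequested: boolean): boolean {
  if (sandboxRequested && isCloudStoragePath(projectPath)) {
    console.warn(`[sandbox] Disabled for cloud storage path: ${projectPath}`);
    return false;
  }
  return sandboxRequested;
}
```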
Kacper
a526869f21 fix: configure git to use gh as credential helper
Add system-level git config to use `gh auth git-credential` for
HTTPS authentication. This allows git push/pull to work automatically
using the GH_TOKEN environment variable.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 20:34:43 +01:00
Kacper
789b807542 fix: configure git safe.directory for mounted volumes
Use system-level gitconfig to set safe.directory='*' so it works
with mounted volumes and isn't overwritten by user's mounted .gitconfig.

Fixes git "dubious ownership" errors when working with projects
mounted from the host.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 20:32:49 +01:00
Kacper
35b3d3931e fix: add bash for terminal support and ARM64 gh CLI support
- Install bash in Alpine for terminal feature to work
- Add dynamic architecture detection for GitHub CLI download
  (supports x86_64 and aarch64/arm64)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 20:26:11 +01:00
Kacper
bad4393dda fix: improve gh auth detection to work with GH_TOKEN env var
Use gh api user to verify authentication instead of gh auth status,
which can return non-zero even when GH_TOKEN is valid (due to stale
config file entries).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 20:03:39 +01:00
Kacper
6012e8312b refactor: consolidate Dockerfiles into single multi-stage build
- Create unified Dockerfile with multi-stage builds (base, server, ui targets)
- Centralize lib package.json COPYs in shared base stage (DRY)
- Add Claude CLI installation for Docker authentication support
- Remove duplicate apps/server/Dockerfile and apps/ui/Dockerfile
- Update docker-compose.yml to use target: parameter
- Add docker-compose.override.yml to .gitignore

Build commands:
  docker build --target server -t automaker-server .
  docker build --target ui -t automaker-ui .
  docker-compose build && docker-compose up -d

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 19:57:22 +01:00
Kacper
8f458e55e2 refactor: update Dockerfiles for server and UI to streamline dependency installation and build process
- Modified Dockerfiles to copy package files for all workspaces, enhancing modularity.
- Changed dependency installation to skip scripts, preventing unnecessary execution during builds.
- Updated build commands to first build packages in dependency order before building the server and UI, ensuring proper build sequence.
2025-12-28 18:33:59 +01:00
Shirone
61881d99e2 Merge pull request #296 from ugurkellecioglu/feat/agent-view-multiline-input
feat: enhance AgentView with adjustable textarea and improved input handling #294
2025-12-28 18:13:34 +01:00
Kacper
3c719f05a1 refactor: split mcp-servers-section into modular components
Refactored 1348-line monolithic file into proper folder structure
following folder-pattern.md conventions:

Structure:
- components/ - UI components (card, header, settings, warning)
- dialogs/ - 5 dialog components (add/edit, delete, import, json edit)
- hooks/use-mcp-servers.ts - all state management & handlers
- types.ts, constants.ts, utils.tsx - shared code

Main file reduced from 1348 to 192 lines (composition only).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 17:07:57 +01:00
Kacper
9cba2e509a feat: add API key masking and 80+ tools warning for MCP servers
Security improvements:
- Mask sensitive values in URLs (api_key, token, auth, secret, etc.)
- Prevents accidental API key leaks when sharing screen or screenshots

Performance guidance:
- Show warning banner when total MCP tools exceed 80
- Warns users that high tool count may degrade AI model performance

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 16:37:19 +01:00
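A small sketch of the URL masking described above, using the standard `URL` API; the parameter name list comes from the commit wording and the helper name is an assumption.

```typescript
// Mask values of sensitive-looking query parameters before rendering a server URL in the UI.
const SENSITIVE_PARAMS = ['api_key', 'apikey', 'token', 'auth', 'secret', 'key'];

function maskSensitiveUrl(rawUrl: string): string {
  try {
    const url = new URL(rawUrl);
    // Snapshot the keys first, then overwrite matching values.
    for (const name of Array.from(url.searchParams.keys())) {
      if (SENSITIVE_PARAMS.some((p) => name.toLowerCase().includes(p))) {
        url.searchParams.set(name, '****');
      }
    }
    return url.toString();
  } catch {
    // Not a parseable URL (e.g. a stdio command) - return it unchanged.
    return rawUrl;
  }
}

// maskSensitiveUrl('https://mcp.example.com/sse?api_key=sk-123') -> '...?api_key=****'
```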
Kacper
c61eaff525 fix: keep MCP servers collapsed on auto-test
Only auto-expand servers when user manually clicks Test button.
Auto-test on mount now keeps servers collapsed to avoid clutter
when there are many MCP servers configured.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 15:55:32 +01:00
Kacper
ef0a96182a fix: remove unreachable else branches in MCP routes
CodeRabbit identified dead code - the else blocks were unreachable
since validation ensures serverId or serverConfig is truthy.
Simplified to ternary expression.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 15:49:39 +01:00
Kacper
a680f3a9c1 Merge main into feature/mcp-server-support
Resolved conflicts:
- apps/server/src/index.ts: merged MCP and Pipeline routes
- apps/ui/src/lib/http-api-client.ts: merged MCP and Pipeline APIs
- apps/ui/src/store/app-store.ts: merged type imports

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 15:48:19 +01:00
Kacper
ea6a39c6ab feat: enhance MCP servers section with JSON editing capabilities
- Introduced JSON editing for individual and global MCP server configurations.
- Added functionality to open JSON edit dialogs for specific servers and all servers collectively.
- Implemented validation for JSON input to ensure correct server configuration.
- Enhanced server testing logic to allow silent testing without toast notifications.
- Updated UI to include buttons for editing JSON configurations and improved user experience.

This update streamlines server management and configuration, allowing for more flexible and user-friendly interactions.
2025-12-28 15:36:37 +01:00
Kacper
f0c2860dec feat: add MCP server testing and tool listing functionality
- Add MCPTestService for testing MCP server connections
- Support stdio, SSE, and HTTP transport types
- Implement workaround for SSE headers bug (SDK Issue #436)
- Create API routes for /api/mcp/test and /api/mcp/tools
- Add API client methods for MCP operations
- Create MCPToolsList component with collapsible schema display
- Add Test button to MCP servers section with status indicators
- Add Headers field for HTTP/SSE servers
- Add Environment Variables field for stdio servers
- Fix text overflow in tools list display

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 14:51:49 +01:00
Kacper
f9882fe37e fix: Update label in BoardHeader component for clarity
- Changed label text from "Auto Mode" to "Plan" in the BoardHeader component to enhance user understanding of the feature.
2025-12-28 13:52:47 +01:00
Kacper
9c4f8f9e73 fix: Update label in BoardHeader component for clarity
- Changed label text from "Auto Plus" to "Auto Mode" in the BoardHeader component to improve user understanding of the feature.
2025-12-28 13:50:57 +01:00
Kacper
1a37603e89 feat: Add WSL support for Cursor CLI on Windows
- Add reusable WSL utilities in @automaker/platform (wsl.ts):
  - isWslAvailable() - Check if WSL is available on Windows
  - findCliInWsl() - Find CLI tools in WSL, tries multiple distributions
  - execInWsl() - Execute commands in WSL
  - createWslCommand() - Create spawn-compatible command/args for WSL
  - windowsToWslPath/wslToWindowsPath - Path conversion utilities
  - getWslDistributions() - List available WSL distributions

- Update CursorProvider to use WSL on Windows:
  - Detect cursor-agent in WSL distributions (prioritizes Ubuntu)
  - Use full path to wsl.exe for spawn() compatibility
  - Pass --cd flag for working directory inside WSL
  - Store and use WSL distribution for all commands
  - Show "(WSL:Ubuntu) /path" in installation status

- Add 'wsl' to InstallationStatus.method type

- Fix bugs:
  - Fix ternary in settings-view.tsx that always returned 'claude'
  - Fix findIndex -1 handling in WSL command construction
  - Remove 'gpt-5.2' from unknown models test (now valid Cursor model)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-28 13:47:24 +01:00
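A sketch of building a spawn-compatible `wsl.exe` invocation as described above. The `-d`, `--cd`, and `--` flags are standard `wsl.exe` options; the helper name, return shape, and path handling are assumptions rather than the actual `@automaker/platform` code.

```typescript
import { join } from 'node:path';

interface WslCommand {
  command: string;
  args: string[];
}

// Build a spawn()-compatible invocation that runs a CLI inside a WSL distribution,
// using the full path to wsl.exe and --cd for the working directory inside WSL.
function createWslCommand(
  distribution: string,
  cwdInsideWsl: string,
  cli: string,
  cliArgs: string[]
): WslCommand {
  const wslExe = join(process.env.SystemRoot ?? 'C:\\Windows', 'System32', 'wsl.exe');
  return {
    command: wslExe,
    args: ['-d', distribution, '--cd', cwdInsideWsl, '--', cli, ...cliArgs],
  };
}

// Example: run cursor-agent in Ubuntu with the project directory as cwd.
// const { command, args } = createWslCommand('Ubuntu', '/mnt/c/projects/app', 'cursor-agent', ['--version']);
```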
Shirone
3e8d2d73d5 feat: Enhance tool event handling in CursorProvider
- Added checks to skip processing for partial streaming events when tool call arguments are not yet populated.
- Updated the event emission logic to include both tool_use and tool_result for completed events, ensuring the UI reflects all tool calls accurately even if the 'started' event is skipped.
2025-12-28 11:26:34 +01:00
Shirone
9900d54f60 refactor: Update default model configuration to use dynamic model IDs
- Replaced hardcoded model IDs with a call to `getAllCursorModelIds()` for dynamic retrieval of available models.
- Updated comments to reflect the change in configuration logic, enhancing clarity on the default model setup.
2025-12-28 11:20:00 +01:00
Uğur Kellecioğlu
1321a8bd4d feat: enhance AgentView with adjustable textarea and improved input handling #294 2025-12-28 12:26:20 +03:00
Web Dev Cody
85dfabec0a Merge pull request #291 from AutoMaker-Org/pipelines
feat: implement pipeline feature for automated workflow steps
2025-12-28 00:07:31 -05:00
Test User
15dca79fb7 test: add unit tests for pipeline routes and service functionality
- Introduced comprehensive unit tests for the pipeline routes, covering handlers for getting, saving, adding, updating, deleting, and reordering steps.
- Added tests for the pipeline service, ensuring correct behavior for methods like getting and saving pipeline configurations, adding, updating, and deleting steps, as well as reordering them.
- Implemented error handling tests to verify graceful degradation in case of missing parameters or service failures.
- Enhanced test coverage for the `getNextStatus` and `getStep` methods to ensure accurate status transitions and step retrieval.

These tests improve the reliability of the pipeline feature by ensuring that all critical functionalities are validated against expected behaviors.
2025-12-28 00:05:23 -05:00
Test User
e9b366fa18 feat: implement pipeline feature for automated workflow steps
- Introduced a new pipeline service to manage custom workflow steps that execute after a feature is marked "In Progress".
- Added API endpoints for configuring, saving, adding, updating, deleting, and reordering pipeline steps.
- Enhanced the UI to support pipeline settings, including a dialog for managing steps and integration with the Kanban board.
- Updated the application state management to handle pipeline configurations per project.
- Implemented dynamic column generation in the Kanban board to display pipeline steps between "In Progress" and "Waiting Approval".
- Added documentation for the new pipeline feature, including usage instructions and configuration details.

This feature allows for a more structured workflow, enabling automated processes such as code reviews and testing after feature implementation.
2025-12-27 23:57:15 -05:00
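A hypothetical shape for the per-project pipeline configuration and the `getNextStatus` transition mentioned in the tests above; the types, status strings, and ordering logic are illustrative assumptions, not the actual pipeline service.

```typescript
// Hypothetical data shapes for pipeline steps that run between
// "In Progress" and "Waiting Approval".
interface PipelineStep {
  id: string;
  name: string;   // e.g. "Code Review", "Run Tests"
  prompt: string; // instructions the agent executes for this step
  order: number;  // position within the pipeline
}

interface PipelineConfig {
  steps: PipelineStep[];
}

// After a step completes, advance to the next step, or to "Waiting Approval" when none remain.
function getNextStatus(config: PipelineConfig, currentStepId: string): string {
  const sorted = [...config.steps].sort((a, b) => a.order - b.order);
  const index = sorted.findIndex((step) => step.id === currentStepId);
  const next = index >= 0 ? sorted[index + 1] : undefined;
  return next ? `pipeline:${next.id}` : 'waiting_approval';
}
```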
Shirone
de246bbff1 feat: Update Cursor model definitions and metadata
- Replaced outdated model IDs with new versions, including Claude Sonnet 4.5 and Claude Opus 4.5.
- Added new models such as Gemini 3 Pro, GPT-5.1, and GPT-5.2 with various configurations.
- Enhanced model metadata with descriptions and thinking capabilities for improved clarity and usability.
2025-12-28 02:56:16 +01:00
Shirone
f20053efe7 docs: Add comprehensive guides for integrating new Cursor models and analyzing Cursor CLI behavior
- Created a detailed documentation file on adding new Cursor CLI models to AutoMaker, including a step-by-step guide and model configuration examples.
- Developed an analysis document for the Cursor CLI integration, outlining the current architecture, CLI behavior, and proposed integration strategies for the Cursor provider.
- Included verification checklists and next steps for implementation phases.
2025-12-28 02:36:55 +01:00
Shirone
e404262cb0 feat: Add raw output logging and endpoint for debugging
- Introduced a new environment variable `AUTOMAKER_DEBUG_RAW_OUTPUT` to enable raw output logging for agent streams.
- Added a new endpoint `/raw-output` to retrieve raw JSONL output for debugging purposes.
- Implemented functionality in `AutoModeService` to log raw output events and save them to `raw-output.jsonl`.
- Enhanced `FeatureLoader` to provide access to raw output files.
- Updated UI components to clean fragmented streaming text for better log parsing.
2025-12-28 02:34:10 +01:00
M Zubair
145dcf4b97 test: add unit tests for MCP settings helper functions
- Add tests for getMCPServersFromSettings()
- Add tests for getMCPPermissionSettings()
- Cover all server types (stdio, sse, http)
- Test error handling and edge cases
- Increases branch coverage from 54.91% to 56.59%
2025-12-28 02:21:15 +01:00
Shirone
52b1dc98b8 fix: ops mistake 2025-12-28 01:57:51 +01:00
Shirone
b32eacc913 feat: Enhance model resolution for Cursor models
- Added support for Cursor models in the model resolver, allowing cursor-prefixed models to pass through unchanged.
- Implemented logic to handle bare Cursor model IDs by adding the cursor- prefix.
- Updated logging to provide detailed information on model resolution processes for both Claude and Cursor models.
- Expanded unit tests to cover new Cursor model handling scenarios, ensuring robust validation of model resolution logic.
2025-12-28 01:55:40 +01:00
Web Dev Cody
4a708aa305 Merge pull request #287 from AutoMaker-Org/persist-terminals
persist the terminals when clicking around the app
2025-12-27 19:52:36 -05:00
Shirone
0bcc8fca5d docs: Add guide for integrating new Cursor models in AutoMaker
- Created a comprehensive documentation file detailing the steps to add new Cursor CLI models to AutoMaker.
- Included an overview of the necessary types and configurations, along with a step-by-step guide for model integration.
- Provided examples and a checklist to ensure proper implementation and verification of new models in the UI.
2025-12-28 01:50:18 +01:00
Test User
3a1781eb39 persist the terminals when clicking around the app 2025-12-27 19:49:36 -05:00
Shirone
c90f12208f feat: Enhance AutoModeService and UI for Cursor model support
- Updated AutoModeService to track model and provider for running features, improving logging and state management.
- Modified AddFeatureDialog to handle model selection for both Claude and Cursor, adjusting thinking level logic accordingly.
- Expanded ModelSelector to allow provider selection and dynamically display models based on the selected provider.
- Introduced new model constants for Cursor models, integrating them into the existing model management structure.
- Updated README and project plan to reflect the completion of task execution integration for Cursor models.
2025-12-28 01:43:57 +01:00
M Zubair
5f328a4c13 feat: add MCP server support for AI agents
Add Model Context Protocol (MCP) server integration to extend AI agent
capabilities with external tools. This allows users to configure MCP
servers (stdio, SSE, HTTP) in global settings and have agents use them.

Note: MCP servers are currently configured globally. Per-project MCP
server configuration is planned for a future update.

Features:
- New MCP Servers settings section with full CRUD operations
- Import/Export JSON configs (Claude Code format compatible)
- Configurable permission settings:
  - Auto-approve MCP tools (bypass permission prompts)
  - Unrestricted tools (allow all tools when MCP enabled)
- Refresh button to reload from settings file

Implementation:
- Added MCPServerConfig and MCPToolInfo types
- Added store actions for MCP server management
- Updated claude-provider to use configurable MCP permissions
- Updated sdk-options factory functions for MCP support
- Added settings helpers for loading MCP configs
2025-12-28 01:43:18 +01:00
Shirone
de11908db1 feat: Integrate Cursor provider support in AI profiles
- Updated AIProfile type to include support for Cursor provider, adding cursorModel and validation logic.
- Enhanced ProfileForm component to handle provider selection and corresponding model configurations for both Claude and Cursor.
- Implemented display functions for model and thinking configurations in ProfileQuickSelect.
- Added default Cursor profiles to the application state.
- Updated UI components to reflect provider-specific settings and validations.
- Marked completion of the AI Profiles Integration phase in the project plan.
2025-12-28 01:32:55 +01:00
Shirone
c602314312 feat: Implement Provider Tabs in Settings View
- Added a new `ProviderTabs` component to manage different AI providers (Claude and Cursor) within the settings view.
- Created `ClaudeSettingsTab` and `CursorSettingsTab` components for provider-specific configurations.
- Updated navigation to reflect the new provider structure, replacing the previous Claude-only setup.
- Marked completion of the settings view provider tabs phase in the integration plan.
2025-12-28 01:19:30 +01:00
Shirone
22044bc474 feat: Add Cursor setup step to UI setup wizard
- Introduced a new `CursorSetupStep` component for optional Cursor CLI configuration during the setup process.
- Updated `SetupView` to include the cursor step in the setup flow, allowing users to skip or proceed with Cursor CLI setup.
- Enhanced state management to track Cursor CLI installation and authentication status.
- Updated Electron API to support fetching Cursor CLI status.
- Marked completion of the UI setup wizard phase in the integration plan.
2025-12-28 01:06:41 +01:00
Shirone
6b03b3cd0a feat: Enhance log parser to support Cursor CLI events
- Added functions to detect and normalize Cursor stream events, including tool calls and system messages.
- Updated `parseLogLine` to handle Cursor events and integrate them into the log entry structure.
- Marked completion of the log parser integration phase in the project plan.
2025-12-28 00:58:32 +01:00
Shirone
59612231bb feat: Add Cursor CLI configuration and status endpoints
- Implemented new routes for managing Cursor CLI configuration, including getting current settings and updating default models.
- Created status endpoint to check Cursor CLI installation and authentication status.
- Updated HttpApiClient to include methods for interacting with the new Cursor API endpoints.
- Marked completion of the setup routes and status endpoints phase in the integration plan.
2025-12-28 00:53:31 +01:00
Shirone
6e9468a56e feat: Integrate CursorProvider into ProviderFactory
- Added CursorProvider to the ProviderFactory for handling cursor-* models.
- Updated getProviderNameForModel method to determine the appropriate provider based on model identifiers.
- Enhanced getAllProviders method to return both ClaudeProvider and CursorProvider.
- Updated documentation to reflect the completion of the Provider Factory integration phase.
2025-12-28 00:48:41 +01:00
Shirone
d8dedf8e40 feat: Implement Cursor CLI Provider and Configuration Manager
- Added CursorConfigManager to manage Cursor CLI configuration, including loading, saving, and resetting settings.
- Introduced CursorProvider to integrate the cursor-agent CLI, handling installation checks, authentication, and query execution.
- Enhanced error handling with detailed CursorError codes for better debugging and user feedback.
- Updated documentation to reflect the completion of the Cursor Provider implementation phase.
2025-12-28 00:43:48 +01:00
Shirone
8b1f5975d9 feat: Add Cursor CLI types and models
- Introduced new types and interfaces for Cursor CLI configuration, authentication status, and event handling.
- Created a comprehensive model definition for Cursor models, including metadata and helper functions.
- Updated existing interfaces to support both Claude and Cursor models in the UI.
- Enhanced the default model configuration to include Cursor's recommended default.
- Updated type exports to include new Cursor-related definitions.
2025-12-28 00:37:07 +01:00
Shirone
2fae948edb chore: remove pnpm lock file 2025-12-28 00:35:03 +01:00
Shirone
525c4c303f docs: Add Cursor CLI Integration Analysis document
- Created a comprehensive analysis document detailing the existing Claude CLI integration architecture and the planned Cursor CLI implementation.
- Document includes sections on current architecture, service integration, UI components, and a thorough examination of Cursor CLI behavior.
- Outlined integration strategy, including types to add, provider implementation, setup flow changes, and UI updates.
- Marked the completion of Phase 0 analysis and documentation tasks.
2025-12-28 00:27:43 +01:00
Kacper
81f35ad6aa chore: Add Cursor CLI integration plan and phases
- Introduced a comprehensive integration plan for the Cursor CLI, including detailed phases for implementation.
- Created initial markdown files for each phase, outlining objectives, tasks, and verification steps.
- Established a global prompt template for starting new sessions with the Cursor CLI.
- Added necessary types and configuration for Cursor models and their integration into the AutoMaker architecture.
- Implemented routing logic to ensure proper model handling between Cursor and Claude providers.
- Developed UI components for setup and settings management related to Cursor integration.
- Included extensive testing and validation plans to ensure robust functionality across all scenarios.
2025-12-27 23:50:17 +01:00
Web Dev Cody
f7a0365bee Merge pull request #281 from tony-nekola-silk/fix/flaky-context-tests
fix: add retry mechanisms to context test helpers for flaky test stability
2025-12-27 15:50:24 -05:00
Web Dev Cody
4eae231166 Merge pull request #285 from AutoMaker-Org/adding-make-button
adding button to make when creating a new feature
2025-12-27 15:49:37 -05:00
Web Dev Cody
ba4540b13e Merge pull request #282 from casiusss/feat/sandbox-mode-setting
feat: add configurable sandbox mode setting
2025-12-27 15:49:30 -05:00
Test User
01911287f2 refactor: streamline feature creation and auto-start logic in BoardView
- Removed the delay mechanism for starting newly created features, simplifying the process.
- Updated the logic to capture existing feature IDs before adding a new feature, allowing for immediate identification of the newly created feature.
- Enhanced error handling to notify users if the feature could not be started automatically.
2025-12-27 14:20:52 -05:00
Test User
7b7de2b601 adding button to make when creating a new feature 2025-12-27 13:55:56 -05:00
Tony Nekola
b65fccbcf7 fix: add retry mechanisms to context test helpers for flaky test stability
Update waitForContextFile, selectContextFile, and waitForFileContentToLoad
helpers to use Playwright's expect().toPass() with retry intervals, handling
race conditions between API calls completing and UI re-rendering. Also add
waitForNetworkIdle after dialog closes in context-file-management test.
2025-12-27 15:09:08 +02:00
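The retry pattern above maps onto Playwright's `expect().toPass()`. A minimal sketch of one such helper; the test id, intervals, and timeout are placeholders, not the repository's actual values.

```typescript
import { expect, type Page } from '@playwright/test';

// Retry the whole check until it passes, absorbing the window between the API
// call completing and the UI re-rendering.
async function waitForContextFile(page: Page, fileName: string): Promise<void> {
  await expect(async () => {
    const row = page.getByTestId('context-file-row').filter({ hasText: fileName });
    await expect(row).toBeVisible();
  }).toPass({ intervals: [250, 500, 1000], timeout: 15_000 });
}
```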
Stephan Rieche
71c17e1fbb chore: remove debug logging from agent-service
Removed all debug console.log statements from agent-service.ts to avoid
polluting production logs. This addresses code review feedback from
gemini-code-assist.

Removed debug logs for:
- sendMessage() entry and session state
- Event emissions (started, message, stream, complete)
- Provider execution
- SDK session ID capture
- Tool use detection
- Queue processing
- emitAgentEvent() calls

Kept console.error logs for actual errors (session not found, execution
errors, etc.) as they are useful for troubleshooting.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 13:45:34 +01:00
Stephan Rieche
296ef20ef7 test: update claude-provider tests for sandbox changes
Updated tests to reflect changes made to sandbox mode implementation:

1. Changed permissionMode expectation from 'acceptEdits' to 'default'
   - ClaudeProvider now uses 'default' permission mode

2. Renamed test "should enable sandbox by default" to "should pass sandbox configuration when provided"
   - Sandbox is no longer enabled by default in the provider
   - Provider now forwards sandbox config only when explicitly provided via ExecuteOptions

3. Updated error handling test expectations
   - Now expects two console.error calls with new format
   - First call: '[ClaudeProvider] ERROR: executeQuery() error during execution:'
   - Second call: '[ClaudeProvider] ERROR stack:' with stack trace

All 32 tests in claude-provider.test.ts now pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 13:37:19 +01:00
Stephan Rieche
23d6756f03 test: fix sandbox mode test assertions
Add comprehensive test coverage for sandbox mode configuration:
- Added tests for enableSandboxMode=false for both createChatOptions and createAutoModeOptions
- Added tests for enableSandboxMode not provided for both functions
- Updated existing tests to pass enableSandboxMode=true where sandbox assertions exist

This addresses the broken test assertions identified by coderabbit-ai review.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 13:20:39 +01:00
Stephan Rieche
01e6b7fa52 chore: address code review feedback
Address suggestions from gemini-code-assist and coderabbit-ai:

Logging Improvements:
- Remove excessive debug logging from ClaudeProvider
- Remove sensitive environment variable logging (API key length, HOME, USER)
- Remove verbose per-message stream logging from AgentService
- Remove redundant SDK options logging
- Remove watchdog timer logging (diagnostic tool)

Documentation:
- Update JSDoc example in ClaudeMdSettings to include sandbox props

Persistence Fix:
- Add enableSandboxMode to syncSettingsToServer updates object
- Ensures sandbox setting is properly persisted to server storage

This reduces log volume significantly while maintaining important
error and state transition logging.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 13:13:17 +01:00
Stephan Rieche
348a4d95e9 fix: pass sandbox configuration through ExecuteOptions
The sandbox configuration was set in createChatOptions() and
createAutoModeOptions(), but was never passed to the ClaudeProvider.
This caused the sandbox to never actually be enabled.

Changes:
- Add sandbox field to ExecuteOptions interface
- Pass sandbox config from AgentService to provider
- Pass sandbox config from AutoModeService to provider
- Forward sandbox config in ClaudeProvider to SDK options

Now the sandbox configuration from settings is properly used.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 13:06:22 +01:00
Stephan Rieche
94e166636b fix: set consistent default for enableSandboxMode to true
The default value should be 'true' to match the defaults in
libs/types/src/settings.ts and apps/ui/src/store/app-store.ts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 12:55:43 +01:00
Stephan Rieche
920dcd105f feat: add configurable sandbox mode setting
Add a global setting to enable/disable sandbox mode for Claude Agent SDK.
This allows users to control sandbox behavior based on their authentication
setup and system compatibility.

Changes:
- Add enableSandboxMode to GlobalSettings (default: true)
- Add sandbox mode checkbox in Claude settings UI
- Wire up setting through app store and settings service
- Update createChatOptions and createAutoModeOptions to use setting
- Add getEnableSandboxModeSetting helper function
- Remove hardcoded sandbox configuration from ClaudeProvider
- Add detailed logging throughout agent execution flow

The sandbox mode requires API key or OAuth token authentication. Users
experiencing issues with CLI-only auth can disable it in settings.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-27 12:24:28 +01:00
Web Dev Cody
b60e8f0392 Merge pull request #272 from AutoMaker-Org/fix/task-execution
feat: Implement throttling and retry logic in secure-fs module
2025-12-26 18:36:51 -05:00
Web Dev Cody
35d2d8cc01 Merge pull request #277 from illia1f/fix/universal-scrollbar-styling
fix: use CSS variables for universal scrollbar styling across all themes
2025-12-26 18:20:46 -05:00
Web Dev Cody
d4b2a3eb27 Merge pull request #274 from illia1f/fix/ui-trash-operations
fix: replace window.confirm with React dialogs in trash operations
2025-12-26 18:15:28 -05:00
Web Dev Cody
2caa63ae21 Merge pull request #268 from illia1f/feature/path-input-search
feat: Add search functionality to PathInput with keyboard shortcut support
2025-12-26 18:12:43 -05:00
Illia Filippov
4c16e5e09c style: unify scrollbar styling across themes
- Replaced theme-specific scrollbar styles with a universal approach using CSS variables for better maintainability.
- Moved theme-specific scrollbar overrides from `global.css` to their respective theme files (`retro.css`, `red.css`)
2025-12-27 00:12:31 +01:00
Kacper
ad983c6422 refactor: improve secure-fs throttling configuration and add unit tests
- Enhanced the configureThrottling function to prevent changes to maxConcurrency while operations are in flight.
- Added comprehensive unit tests for secure-fs throttling and retry logic, ensuring correct behavior and configuration.
- Removed outdated secure-fs test file and replaced it with a new, updated version to improve test coverage.
2025-12-26 22:06:39 +01:00
Web Dev Cody
0fe6a12d20 Merge pull request #275 from AutoMaker-Org/agent-runner-queue
adding a queue system to the agent runner
2025-12-26 11:49:27 -05:00
Test User
ce78165b59 fix: update test expectations for file read calls in agent-service
- Adjusted the test to reflect the addition of queue state file reading, increasing the expected number of file read calls from 2 to 3.
- Updated comments for clarity regarding the file reading process in the agent-service tests.
2025-12-26 11:17:21 -05:00
Test User
17c1c733b7 adding a queue system to the agent runner 2025-12-26 10:59:13 -05:00
Illia Filippov
3bb9d27dc6 refactor: simplify DeleteConfirmDialog rendering in TrashDialog component 2025-12-26 12:51:53 +01:00
Illia Filippov
04a5ae48e2 refactor: replace window.confirm with React dialogs in trash operations 2025-12-26 12:36:57 +01:00
Shirone
6d3314f980 Merge branch 'main' into fix/task-execution 2025-12-26 00:50:11 +01:00
Kacper
35541f810d feat: Implement throttling and retry logic in secure-fs module
- Added concurrency limiting using p-limit to prevent ENFILE/EMFILE errors.
- Introduced retry logic with exponential backoff for transient file descriptor errors.
- Enhanced secure-fs with new functions for configuring throttling and monitoring active/pending operations.
- Added unit tests for throttling and retry logic to ensure reliability.
2025-12-26 00:48:14 +01:00
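A compact sketch of combining `p-limit` concurrency capping with exponential-backoff retries for ENFILE/EMFILE, as described above; the concurrency value, attempt count, and wrapper names are assumptions.

```typescript
import pLimit from 'p-limit';
import { readFile } from 'node:fs/promises';

// Cap concurrent file operations to avoid exhausting file descriptors.
const limit = pLimit(32);

// Retry transient descriptor errors (ENFILE/EMFILE) with exponential backoff.
async function withRetry<T>(op: () => Promise<T>, attempts = 5): Promise<T> {
  let delay = 50;
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (error) {
      const code = (error as NodeJS.ErrnoException).code;
      if ((code !== 'ENFILE' && code !== 'EMFILE') || attempt === attempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}

export function safeReadFile(path: string): Promise<string> {
  return limit(() => withRetry(() => readFile(path, 'utf8')));
}
```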
Illia Filippov
3d361028b3 feat: add OS detection hook and integrate into FileBrowserDialog for improved keyboard shortcut handling
- Introduced useOSDetection hook to determine the user's operating system.
- Updated FileBrowserDialog to utilize the OS detection for displaying the correct keyboard shortcut (⌘ or Ctrl) based on the detected OS.
2025-12-25 19:38:03 +01:00
Illia Filippov
7f4b60b8c0 fix(path-input): added e.stopPropagation() to ensure the parent modal does not close when the search is active and the ESC key is pressed 2025-12-25 12:50:10 +01:00
Illia Filippov
1c59eabf5f refactor(path-input): optimize entry rendering and clarify keydown handling in comments
- Replaced inline entry mapping with a memoized entryItems component for improved performance.
- Clarified keydown event handling comments to enhance understanding of ESC key behavior in relation to modal interactions.
2025-12-25 12:35:40 +01:00
Web Dev Cody
f95282069d Merge pull request #266 from tony-nekola-silk/fix/untracked-directory-diff-display
fix: expand untracked directories to show individual file diffs
2025-12-24 21:53:48 -05:00
Web Dev Cody
a3fcf5bda1 Merge pull request #267 from AutoMaker-Org/feat/load-claude-files
feat: automatically load CLAUDE.md files and settings JSON globally / per project
2025-12-24 21:53:34 -05:00
Illia Filippov
a7de6406ed fix(path-input): improve keydown handling
- Updated keydown event logic to prevent search activation when input fields or contenteditable elements are focused.
- Enhanced ESC key handling to ensure parent modal does not close when search is open.
- Adjusted dependencies in useEffect to include entries length for better state management.
2025-12-25 02:39:42 +01:00
Illia Filippov
fd51abb3ce feat: enhance PathInput component by replacing kbd with Kbd component for better styling 2025-12-25 01:58:40 +01:00
Illia Filippov
cd30306afe refactor: update KbdGroup component to use span instead of kbd; enhance PathInput with autoFocus on CommandInput 2025-12-25 01:50:03 +01:00
Illia Filippov
bed8038d16 fix: add custom scrollbar styling to CommandList in PathInput component 2025-12-25 01:05:36 +01:00
Illia Filippov
862a33982d feat: enhance FileBrowserDialog and PathInput with search functionality
- Added Kbd and KbdGroup components for keyboard shortcuts in FileBrowserDialog.
- Implemented search functionality in PathInput, allowing users to search files and directories.
- Updated PathInput to handle file system entries and selection from search results.
- Improved UI/UX with better focus management and search input handling.
2025-12-25 00:47:45 +01:00
Illia Filippov
90ebb52536 chore: add Kbd and KbdGroup ui components for keyboard shortcuts 2025-12-25 00:45:59 +01:00
Kacper
072ad72f14 refactor: implement filterClaudeMdFromContext utility for context file handling
- Introduced a new utility function to filter out CLAUDE.md from context files when autoLoadClaudeMd is enabled, enhancing clarity and preventing duplication.
- Updated AgentService and AutoModeService to utilize the new filtering function, streamlining context file management.
- Improved documentation for the new utility, detailing its purpose and usage in context file handling.
2025-12-24 23:17:20 +01:00
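The filtering utility above reduces to a small path check. A sketch under the assumption that context files are plain path strings; the signature is illustrative.

```typescript
import { basename } from 'node:path';

// When autoLoadClaudeMd is on, the SDK already injects CLAUDE.md, so drop it
// from the explicit context file list to avoid sending it twice.
function filterClaudeMdFromContext(contextFiles: string[], autoLoadClaudeMd: boolean): string[] {
  if (!autoLoadClaudeMd) return contextFiles;
  return contextFiles.filter((filePath) => basename(filePath).toLowerCase() !== 'claude.md');
}
```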
Kacper
387bb15a3d refactor: enhance context file handling in AgentService and AutoModeService
- Updated both services to conditionally load context files while excluding CLAUDE.md when autoLoadClaudeMd is enabled, preventing duplication.
- Improved the structure and clarity of the context files prompt, emphasizing the importance of following project-specific rules and conventions.
- Ensured consistent handling of context file loading across different methods in both services.
2025-12-24 23:07:00 +01:00
Kacper
077dd31b4f refactor: enhance context loading strategy in AgentService and AutoModeService
- Updated both services to conditionally load CLAUDE.md based on the autoLoadClaudeMd setting, preventing duplication.
- Improved clarity in comments regarding the loading process of context files.
- Ensured consistent retrieval of the autoLoadClaudeMd setting across different methods.
2025-12-24 22:59:57 +01:00
Kacper
99a19cb2a2 refactor: streamline auto-load CLAUDE.md setting retrieval
- Removed the private method for getting the autoLoadClaudeMd setting from AgentService and AutoModeService.
- Updated both services to utilize the new settings helper for retrieving the autoLoadClaudeMd setting, improving code reusability and clarity.
- Adjusted error handling in the settings helper to throw errors instead of returning false when the settings service is unavailable.
2025-12-24 22:48:02 +01:00
Tony Nekola
407cf633e0 fix: check directory before binary extension to handle edge cases
Move directory check before binary file check to handle edge cases
where a directory has a binary file extension (e.g., "images.png/").
Previously, such directories would be incorrectly treated as binary
files instead of being expanded.
2025-12-24 23:42:05 +02:00
Tony Nekola
b0ce01d008 refactor: use sequential processing for directory file diffs
Address PR review feedback: replace Promise.all with sequential for...of
loop to avoid exhausting file descriptors when processing directories
with many files.
2025-12-24 23:34:44 +02:00
Kacper
3154121840 feat: integrate settings service for auto-load CLAUDE.md functionality
- Updated API routes to accept an optional settings service for loading the autoLoadClaudeMd setting.
- Introduced a new settings helper utility for retrieving project-specific settings.
- Enhanced feature generation and spec generation processes to utilize the autoLoadClaudeMd setting.
- Refactored relevant route handlers to support the new settings integration across various endpoints.
2025-12-24 22:34:22 +01:00
Tony Nekola
8f2d134d03 fix: expand untracked directories to show individual file diffs
Previously, when git status reported an untracked directory (e.g., "?? apps/"),
the code would try to read the directory as a file, which failed and showed
"[Unable to read file content]".

Now, when encountering a directory:
- Strip trailing slash from path (git reports dirs as "dirname/")
- Check if path is a directory using stats.isDirectory()
- Recursively list all files inside using listAllFilesInDirectory
- Generate synthetic diffs for each file found

This ensures users see the actual file contents in the diff view instead
of an error placeholder.
2025-12-24 23:16:12 +02:00
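A sketch of the expansion steps listed above (strip the trailing slash, stat the path, recurse into directories), using sequential iteration as the follow-up review commit requires; function names and shapes are assumptions.

```typescript
import { stat, readdir } from 'node:fs/promises';
import { join } from 'node:path';

// Recursively list files under a directory reported by `git status` as untracked.
async function listAllFilesInDirectory(dir: string): Promise<string[]> {
  const entries = await readdir(dir, { withFileTypes: true });
  const files: string[] = [];
  // Sequential on purpose: avoids opening too many descriptors at once.
  for (const entry of entries) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...(await listAllFilesInDirectory(full)));
    } else {
      files.push(full);
    }
  }
  return files;
}

// Git reports untracked directories as "dirname/"; strip the slash and expand.
async function expandUntrackedPath(repoRoot: string, statusPath: string): Promise<string[]> {
  const cleaned = statusPath.replace(/\/$/, '');
  const absolute = join(repoRoot, cleaned);
  const stats = await stat(absolute);
  return stats.isDirectory() ? listAllFilesInDirectory(absolute) : [absolute];
}
```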
Kacper
07bcb6b767 feat: add auto-load CLAUDE.md functionality
- Introduced a new setting to enable automatic loading of CLAUDE.md files from project-specific directories.
- Updated relevant services and components to support the new setting, including the AgentService and AutoModeService.
- Added UI controls for managing the auto-load setting in the settings view.
- Enhanced SDK options to incorporate settingSources for CLAUDE.md loading.
- Updated global and project settings interfaces to include autoLoadClaudeMd property.
2025-12-24 22:05:50 +01:00
Web Dev Cody
8a0226512d Merge pull request #263 from AutoMaker-Org/chore/update-readme
docs: Update README for clarity and feature enhancements
2025-12-24 14:27:18 -05:00
Web Dev Cody
5418d04529 Merge pull request #262 from illia1f/refactor/file-path-input
refactor: Extract PathInput component from FileBrowserDialog & Improve UI/UX
2025-12-24 14:26:57 -05:00
Kacper
3325b91de9 docs: address CodeRabbit suggestions
- Updated Discord join link to a markdown format for better presentation.
- Enhanced section headers for Web, Desktop, Docker Deployment, Testing, and Environment Configuration for consistency.
- Clarified instructions regarding the build process and authentication setup.
- Improved formatting for better readability and organization of content.
2025-12-24 20:19:58 +01:00
Kacper
aad5dfc745 docs: Update README for clarity and feature enhancements
- Changed "Powered by Claude Code" to "Powered by Claude Agent SDK" for accuracy.
- Reorganized sections for better flow, including new entries for Environment Configuration, Authentication Setup, and detailed feature descriptions.
- Expanded installation and setup instructions, including Docker deployment and testing configurations.
- Added new features and tools available in Automaker, enhancing user understanding of capabilities.
- Improved overall readability and structure of the documentation.
2025-12-24 20:10:05 +01:00
Illia Filippov
60d4b5c877 fix: handle root path in breadcrumb parsing for PathInput component
- Added logic to correctly parse and return the root path for Unix-like systems in the breadcrumb segment function.
2025-12-24 19:50:57 +01:00
Illia Filippov
9dee9fb366 refactor: optimize breadcrumb parsing in PathInput component
- Introduced useMemo for breadcrumb parsing to enhance performance.
- Updated breadcrumb rendering to utilize memoized values for improved efficiency.
2025-12-24 19:49:02 +01:00
Illia Filippov
ccc7c6c21d fix: update navigation instructions and enhance path input UI
- Changed the navigation instruction text in FileBrowserDialog to use an arrow symbol for clarity.
- Added an ArrowRight icon to the PathInput component's button for improved visual feedback when navigating to a path.
2025-12-24 19:28:54 +01:00
Illia Filippov
896e183e41 refactor: streamline file browser dialog and introduce PathInput component
- Removed unused state and imports from FileBrowserDialog.
- Replaced direct path input with a new PathInput component for improved navigation.
- Enhanced state management for path navigation and error handling.
- Updated UI elements for better user experience and code clarity.
2025-12-24 19:08:23 +01:00
Illia Filippov
7c0d70ab3c chore: add breadcrumb component with various subcomponents for navigation 2025-12-24 19:07:15 +01:00
Web Dev Cody
91eeda3a73 Merge pull request #255 from AutoMaker-Org/feat/improve-ai-suggestions
feat: improve ai suggestions
2025-12-24 11:49:45 -05:00
Web Dev Cody
e4235cbd4b Merge pull request #243 from JBotwina/JBotwina/task-deps-spawn
feat: Add task dependencies and spawn sub-task functionality
2025-12-24 11:48:22 -05:00
Web Dev Cody
fc7f342617 Merge pull request #261 from AutoMaker-Org/small-fixes
small fixes
2025-12-24 11:37:24 -05:00
Test User
6aa9e5fbc9 small fixes 2025-12-24 10:13:24 -05:00
Web Dev Cody
97af998066 Merge pull request #250 from AutoMaker-Org/feat/convert-issues-to-task
feat: ability to analyze GitHub issues with AI with confidence / task creation
2025-12-23 22:34:18 -05:00
Web Dev Cody
44e341ab41 Merge pull request #256 from AutoMaker-Org/feat/improve-ai-suggestions-ui
feat: Improve ai suggestion output ui
2025-12-23 22:33:53 -05:00
James
34c0d39e39 fix 2025-12-23 20:44:05 -05:00
James
686a24d3c6 small log fix 2025-12-23 20:39:28 -05:00
Kacper
38addacf1e refactor: Enhance fetchIssues logic with mounted state checks
- Introduced a useRef hook to track component mount status, preventing state updates on unmounted components.
- Updated fetchIssues function to conditionally set state only if the component is still mounted, improving reliability during asynchronous operations.
- Ensured proper cleanup in useEffect to maintain accurate mounted state, enhancing overall component stability.
2025-12-24 02:31:56 +01:00
Kacper
a85e1aaa89 refactor: Simplify validation handling in GitHubIssuesView
- Removed the isValidating prop from GitHubIssuesView and ValidationDialog components to streamline validation logic.
- Updated handleValidateIssue function to eliminate unnecessary dialog options, focusing on background validation notifications.
- Enhanced user feedback by notifying users when validation starts, improving overall experience during issue analysis.
2025-12-24 02:30:36 +01:00
James
3307ff8100 fix lock 2025-12-23 20:29:14 -05:00
James
502043f6de feat(graph-view): implement task deletion and dependency management enhancements
- Added onDeleteTask functionality to allow task deletion from both board and graph views.
- Integrated delete options for dependencies in the graph view, enhancing user interaction.
- Updated ancestor context section to clarify the role of parent tasks in task descriptions.
- Improved layout handling in graph view to preserve node positions during updates.

This update enhances task management capabilities and improves user experience in the graph view.
2025-12-23 20:25:06 -05:00
Kacper
dd86e987a4 feat: Introduce ErrorState and LoadingState components for improved UI feedback
- Added ErrorState component to display error messages with retry functionality, enhancing user experience during issue loading failures.
- Implemented LoadingState component to provide visual feedback while issues are being fetched, improving the overall responsiveness of the GitHubIssuesView.
- Refactored GitHubIssuesView to utilize the new components, streamlining error and loading handling logic.
2025-12-24 02:23:12 +01:00
Kacper
6cd2898923 feat: Improve GitHubIssuesView with stable event handler references
- Introduced refs for selected issue and validation dialog state to prevent unnecessary re-subscribing on state changes.
- Added cleanup logic to ensure proper handling of asynchronous operations during component unmounting.
- Enhanced error handling in validation loading functions to only log errors if the component is still mounted, improving reliability.
2025-12-24 02:19:03 +01:00
Kacper
7fec9e7c5c chore: remove old app folder ? 2025-12-24 02:18:52 +01:00
Kacper
2c9a3c5161 feat: Refactor validation logic in GitHubIssuesView for improved clarity
- Simplified the validation staleness check by introducing a dedicated variable for stale validation status.
- Enhanced the conditions for unviewed and viewed validation indicators, improving user feedback on validation status.
- Added a visual indicator for viewed validations, enhancing the user interface and experience.
2025-12-24 02:12:22 +01:00
Kacper
bb3b1960c5 feat: Enhance GitHubIssuesView with AI profile and worktree integration
- Added support for default AI profile retrieval and integration into task creation, improving user experience in task management.
- Implemented current branch detection based on selected worktree, ensuring accurate context for issue handling.
- Updated fetchIssues function dependencies to include new profile and branch data, enhancing task creation logic.
2025-12-24 02:07:33 +01:00
Kacper
7007a8aa66 feat: Add ConfirmDialog component and integrate into GitHubIssuesView
- Introduced a new ConfirmDialog component for user confirmation prompts.
- Integrated ConfirmDialog into GitHubIssuesView to confirm re-validation of issues, enhancing user interaction and decision-making.
- Updated handleValidateIssue function to support re-validation options, improving flexibility in issue validation handling.
2025-12-24 01:53:40 +01:00
Kacper
1ff617703c fix: Update fetchLinkedPRs to prevent shell injection vulnerabilities
- Modified the fetchLinkedPRs function to use JSON.stringify for the request body, ensuring safe input handling when spawning the GitHub CLI command.
- Changed the command to read the query from stdin using the --input flag, enhancing security against shell injection risks.
2025-12-24 01:41:05 +01:00
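A sketch of the safer invocation described above: build the request body with `JSON.stringify` and pipe it to `gh api graphql --input -` over stdin instead of interpolating values into a shell string. The GraphQL query, field names, and error handling here are illustrative; only the stdin/`--input` approach comes from the commit.

```typescript
import { spawn } from 'node:child_process';

// Build the body in JS and pipe it on stdin so issue numbers and cursors
// are never interpolated into a shell command.
function fetchLinkedPRs(owner: string, repo: string, issueNumber: number): Promise<string> {
  const body = JSON.stringify({
    query: `
      query ($owner: String!, $repo: String!, $number: Int!) {
        repository(owner: $owner, name: $repo) {
          issue(number: $number) {
            timelineItems(itemTypes: CROSS_REFERENCED_EVENT, first: 20) {
              nodes { ... on CrossReferencedEvent { source { ... on PullRequest { number state } } } }
            }
          }
        }
      }`,
    variables: { owner, repo, number: issueNumber },
  });

  return new Promise((resolve, reject) => {
    const child = spawn('gh', ['api', 'graphql', '--input', '-']);
    let stdout = '';
    let stderr = '';
    child.stdout.on('data', (chunk) => (stdout += chunk));
    child.stderr.on('data', (chunk) => (stderr += chunk));
    child.on('close', (code) => (code === 0 ? resolve(stdout) : reject(new Error(stderr))));
    child.stdin.end(body);
  });
}
```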
jbotwina
76b7cfec9e refactor: Move utility functions to @automaker/dependency-resolver
Consolidated dependency validation and ancestor traversal utilities:
- wouldCreateCircularDependency, dependencyExists -> @automaker/dependency-resolver
- getAncestors, formatAncestorContextForPrompt, AncestorContext -> @automaker/dependency-resolver
- Removed graph-view/utils directory (now redundant)
- Updated all imports to use shared package

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-23 19:30:30 -05:00
jbotwina
8d80c73faa feat: Add task dependencies and spawn sub-task functionality
- Add edge dragging to create dependencies in graph view
- Add spawn sub-task action available in graph view and kanban board
- Implement ancestor context selection when spawning tasks
- Add dependency validation (circular, self, duplicate prevention)
- Include ancestor context in spawned task descriptions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
2025-12-23 19:30:30 -05:00
Kacper
0461045767 Changes from feat/improve-ai-suggestions-ui 2025-12-24 01:02:49 +01:00
Kacper
e07fba13d8 fix: address PR review suggestions 2025-12-24 00:18:46 +01:00
Kacper
dbc21c8f73 Changes from feat/improve-ai-suggestions 2025-12-24 00:09:28 +01:00
Kacper
7b61a274e5 fix: Prevent race condition in unviewed validations count update
- Added a guard to ensure the unviewed count is only updated if the current project matches the reference, preventing potential race conditions during state updates.
2025-12-23 23:28:02 +01:00
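A small sketch of that guard; the hook name, fetch signature, and state shape are invented for illustration:

```typescript
import { useEffect, useRef, useState } from 'react';

// Remember which project a request was issued for and ignore the result if the
// user has switched projects before the response comes back.
export function useUnviewedCount(
  projectId: string,
  fetchCount: (id: string) => Promise<number>
) {
  const [count, setCount] = useState(0);
  const currentProjectRef = useRef(projectId);

  useEffect(() => {
    currentProjectRef.current = projectId;
    fetchCount(projectId).then((result) => {
      // Guard: only apply the result if it still belongs to the active project.
      if (currentProjectRef.current === projectId) {
        setCount(result);
      }
    });
  }, [projectId, fetchCount]);

  return count;
}
```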
Kacper
ef8eaa0463 feat: Add unit tests for validation storage functionality
- Introduced comprehensive unit tests for the validation storage module, covering functions such as writeValidation, readValidation, getAllValidations, deleteValidation, and others.
- Implemented tests to ensure correct behavior for validation creation, retrieval, deletion, and freshness checks.
- Enhanced test coverage for edge cases, including handling of non-existent validations and directory structure validation.
2025-12-23 23:17:41 +01:00
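Such tests might look roughly like the following; the import path, function signatures, and return values here are assumptions, not the module's actual API:

```typescript
import { describe, it, expect } from 'vitest';
// Hypothetical import path for the validation storage module under test.
import { writeValidation, readValidation } from './validation-storage';

describe('validation storage', () => {
  it('round-trips a validation result', async () => {
    await writeValidation('issue-42', { verdict: 'valid', reasoning: 'clear repro steps' });
    expect(await readValidation('issue-42')).toMatchObject({ verdict: 'valid' });
  });

  it('returns null for a validation that does not exist', async () => {
    expect(await readValidation('missing-issue')).toBeNull();
  });
});
```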
Kacper
65319f93b4 feat: Improve GitHub issues view with validation indicators and Markdown support
- Added an `isValidating` prop to the `IssueRow` component to indicate ongoing validation for issues.
- Introduced a visual indicator for validation in progress, enhancing user feedback during analysis.
- Updated the `ValidationDialog` to render validation reasoning and suggested fixes using Markdown for better formatting and readability.
2025-12-23 22:49:37 +01:00
Kacper
dd27c5c4fb feat: Enhance validation viewing functionality with event emission
- Updated the `createMarkViewedHandler` to emit an event when a validation is marked as viewed, allowing the UI to update the unviewed count dynamically.
- Modified the `useUnviewedValidations` hook to handle the new event type for decrementing the unviewed validations count.
- Introduced a new event type `issue_validation_viewed` in the issue validation event type definition for better event handling.
2025-12-23 22:25:48 +01:00
Kacper
d1418aa054 feat: Implement stale validation cleanup and improve GitHub issue handling
- Added a scheduled task to clean up stale validation entries every hour, preventing memory leaks.
- Enhanced the `getAllValidations` function to read validation files in parallel for improved performance.
- Updated the `fetchLinkedPRs` function to use `spawn` for safer execution of GitHub CLI commands, mitigating shell injection risks.
- Modified event handling in the GitHub issues view to utilize the model for validation, ensuring consistency and reducing stale closure issues.
- Introduced a new property in the issue validation event to track the model used for validation.
2025-12-23 22:21:08 +01:00
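A sketch of the cleanup and parallel-read mechanisms mentioned here, assuming validations are stored as JSON files in a directory; the real storage layout, freshness window, and error handling may differ:

```typescript
import { promises as fs } from 'node:fs';
import path from 'node:path';

const VALIDATIONS_DIR = '/tmp/validations'; // illustrative location
const MAX_AGE_MS = 60 * 60 * 1000; // entries older than an hour count as stale

// Read every validation file in parallel instead of one at a time.
async function getAllValidations(): Promise<unknown[]> {
  const files = await fs.readdir(VALIDATIONS_DIR);
  return Promise.all(
    files.map(async (name) =>
      JSON.parse(await fs.readFile(path.join(VALIDATIONS_DIR, name), 'utf8'))
    )
  );
}

// Delete entries that have not been touched within the freshness window.
async function cleanupStaleValidations(): Promise<void> {
  const files = await fs.readdir(VALIDATIONS_DIR);
  const now = Date.now();
  await Promise.all(
    files.map(async (name) => {
      const filePath = path.join(VALIDATIONS_DIR, name);
      const { mtimeMs } = await fs.stat(filePath);
      if (now - mtimeMs > MAX_AGE_MS) {
        await fs.unlink(filePath);
      }
    })
  );
}

// Run the cleanup on a fixed hourly schedule, as the commit describes.
setInterval(() => void cleanupStaleValidations().catch(console.error), MAX_AGE_MS);
```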
Kacper
0c9f05ee38 feat: Add validation viewing functionality and UI updates
- Implemented a new function to mark validations as viewed by the user, updating the validation state accordingly.
- Added a new API endpoint for marking validations as viewed, integrated with the existing GitHub routes.
- Enhanced the sidebar to display the count of unviewed validations, providing real-time updates.
- Updated the GitHub issues view to mark validations as viewed when issues are accessed, improving user interaction.
- Introduced a visual indicator for unviewed validations in the issue list, enhancing user awareness of pending validations.
2025-12-23 22:11:26 +01:00
Web Dev Cody
d50b15e639 Merge pull request #245 from illia1f/feature/project-picker-scroll
feat(ProjectSelector): add auto-scroll and improved UX for project picker
2025-12-23 15:46:34 -05:00
Web Dev Cody
172f1a7a3f Merge pull request #251 from AutoMaker-Org/fix/list-branch-issue-on-fresh-repo
fix: branch list issue and improve ui feedback
2025-12-23 15:43:27 -05:00
Web Dev Cody
5edb38691c Merge pull request #249 from AutoMaker-Org/fix/new-project-dialog-path-overflow
fix: new project path overflow
2025-12-23 15:42:44 -05:00
Web Dev Cody
f1f149c6c0 Merge pull request #247 from AutoMaker-Org/fix/git-diff-loop
fix: git diff loop
2025-12-23 15:42:24 -05:00
Kacper
e0c5f55fe7 fix: address PR reviews 2025-12-23 21:07:36 +01:00
Kacper
4958ee1dda Changes from fix/list-branch-issue-on-fresh-repo 2025-12-23 20:46:10 +01:00
Kacper
3d00f40ea0 Changes from fix/new-project-dialog-path-overflow 2025-12-23 18:58:15 +01:00
Kacper
c9e0957dfe feat(diff): add helper function to create synthetic diffs for new files
This update introduces a new function, createNewFileDiff, to streamline the generation of synthetic diffs for untracked files. The function reduces code duplication by handling the diff formatting for new files, including directories and large files, improving overall maintainability.
2025-12-23 18:39:43 +01:00
Kacper
9d4f912c93 Changes from main 2025-12-23 18:26:02 +01:00
Illia Filippov
4898a1307e refactor(ProjectSelector): enhance project picker scrollbar styling and improve selection logic 2025-12-23 18:17:12 +01:00
Kacper
6acb751eb3 feat: Implement GitHub issue validation management and UI enhancements
- Introduced CRUD operations for GitHub issue validation results, including storage and retrieval.
- Added new endpoints for checking validation status, stopping validations, and deleting stored validations.
- Enhanced the GitHub routes to support validation management features.
- Updated the UI to display validation results and manage validation states for GitHub issues.
- Integrated event handling for validation progress and completion notifications.
2025-12-23 18:15:30 +01:00
Web Dev Cody
629b7e7433 Merge pull request #244 from WikiRik/WikiRik/fix-urls
docs: update links to Claude
2025-12-23 11:54:17 -05:00
Illia Filippov
190f18ecae feat(ProjectSelector): add auto-scroll and improved UX for project picker 2025-12-23 17:45:04 +01:00
Rik Smale
e6eb5ad97e docs: update links to Claude
These links were referring to pages that do not exist anymore. I have updated them to what I think are the new URLs.
2025-12-23 16:12:52 +00:00
Kacper
5f0ecc8dd6 feat: Enhance GitHub issue handling with assignees and linked PRs
- Added support for assignees in GitHub issue data structure.
- Implemented fetching of linked pull requests for open issues using the GitHub GraphQL API.
- Updated UI to display assignees and linked PRs for selected issues.
- Adjusted issue listing commands to include assignees in the fetched data.
2025-12-23 16:57:29 +01:00
Web Dev Cody
e95912f931 Merge pull request #232 from leonvanzyl/main
fix: Open in Browser button not working on Windows
2025-12-23 10:27:27 -05:00
Web Dev Cody
eb1875f558 Merge pull request #239 from illia1f/refactor/project-selector-with-options
refactor(ProjectSelector): improve project selection logic and UI/UX
2025-12-23 10:24:59 -05:00
Web Dev Cody
c761ce8120 Merge pull request #240 from AutoMaker-Org/fix/onboarding-dialog-overflow
fix: onboarding dialog title overflowing
2025-12-23 10:14:24 -05:00
Illia Filippov
ee9cb4deec refactor(ProjectSelector): streamline project selection handling by removing unnecessary useCallback 2025-12-23 16:03:13 +01:00
Kacper
a881d175bc feat: Implement GitHub issue validation endpoint and UI integration
- Added a new endpoint for validating GitHub issues using the Claude SDK.
- Introduced validation schema and logic to handle issue validation requests.
- Updated GitHub routes to include the new validation route.
- Enhanced the UI with a validation dialog and button to trigger issue validation.
- Mapped issue complexity to feature priority for better task management.
- Integrated validation results display in the UI, allowing users to convert validated issues into tasks.
2025-12-23 15:50:10 +01:00
Kacper
17ed2be918 fix(OnboardingDialog): adjust layout for title and description to improve responsiveness 2025-12-23 14:54:45 +01:00
Illia Filippov
5a5165818e refactor(ProjectSelector): improve project selection logic and UI/UX 2025-12-23 13:44:09 +01:00
Auto
9a7d21438b fix: Open in Browser button not working on Windows
The handleOpenDevServerUrl function was looking up the dev server info using an un-normalized path, but the Map stores entries with normalized paths (forward slashes).

On Windows, paths come in as C:\Projects\foo but stored keys use C:/Projects/foo (normalized). The lookup used the raw path, so it never matched.

Fix: Use getWorktreeKey() helper which normalizes the path, consistent with how isDevServerRunning() and getDevServerInfo() already work.
2025-12-23 07:50:37 +02:00
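A minimal sketch of the lookup problem and the normalization fix; only `getWorktreeKey` is named in the commit, and its implementation and the Map shape here are assumed:

```typescript
// Normalize Windows separators so lookups match the keys the Map was stored with.
function getWorktreeKey(worktreePath: string): string {
  return worktreePath.replace(/\\/g, '/');
}

const devServers = new Map<string, { url: string }>();
devServers.set(getWorktreeKey('C:\\Projects\\foo'), { url: 'http://localhost:5173' });

// Before the fix: raw path, never matches on Windows.
devServers.get('C:\\Projects\\foo'); // undefined
// After the fix: normalized key, matches the stored entry.
devServers.get(getWorktreeKey('C:\\Projects\\foo')); // { url: 'http://localhost:5173' }
```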
Test User
d4d4b8fb3d feat(TaskNode): conditionally render title and adjust description styling 2025-12-22 23:08:58 -05:00
Web Dev Cody
48955e9a71 Merge pull request #231 from stephan271c/add-pause-button
feat: Add a stop button to halt agent execution when processing.
2025-12-22 21:49:43 -05:00
Web Dev Cody
870df88cd1 Merge pull request #225 from illia1f/fix/project-picker-dropdown
fix: project picker dropdown highlights first item instead of current project
2025-12-22 21:22:35 -05:00
Web Dev Cody
7618a75d85 Merge pull request #226 from JBotwina/graph-filtering-and-node-controls
feat: Graph Filtering and Node Controls
2025-12-22 21:18:19 -05:00
Stephan Cho
51281095ea feat: Add a stop button to halt agent execution when processing. 2025-12-22 21:08:04 -05:00
Illia Filippov
50a595a8da fix(useProjectPicker): ensure project selection resets correctly when project picker is opened 2025-12-23 02:30:28 +01:00
Illia Filippov
a398367f00 refactor: simplify project index retrieval and selection logic in project picker 2025-12-23 02:06:49 +01:00
James
fe6faf9aae fix type errors 2025-12-22 19:44:48 -05:00
James
a1331ed514 fix format 2025-12-22 19:37:36 -05:00
Illia Filippov
38f2e0beea fix: ensure current project is highlighted in project picker dropdown without side effects 2025-12-23 01:36:20 +01:00
James
ef4035a462 fix lock file 2025-12-22 19:35:48 -05:00
James
cb07206dae add use ts hooks 2025-12-22 19:30:44 -05:00
James
cc0405cf27 refactor: update graph view actions to include onViewDetails and remove onViewBranch
- Added onViewDetails callback to handle feature detail viewing.
- Removed onViewBranch functionality and associated UI elements for a cleaner interface.
2025-12-22 19:30:44 -05:00
James
4dd00a98e4 add more filters about process status 2025-12-22 19:30:44 -05:00
James
b3c321ce02 add node actions 2025-12-22 19:30:44 -05:00
James
12a796bcbb branch filtering 2025-12-22 19:30:44 -05:00
James
ffcdbf7d75 fix styling of graph controls 2025-12-22 19:30:44 -05:00
Illia Filippov
e70c3b7722 fix: project picker dropdown highlights first item instead of current project 2025-12-23 00:50:21 +01:00
Web Dev Cody
524a9736b4 Merge pull request #222 from JBotwina/claude/task-dependency-graph-iPz1k
feat: task dependency graph view
2025-12-22 17:30:52 -05:00
Test User
036a7d9d26 refactor: update e2e tests to use 'load' state for page navigation
- Changed instances of `waitForLoadState('networkidle')` to `waitForLoadState('load')` across multiple test files and utility functions to improve test reliability in applications with persistent connections.
- Added documentation to the e2e testing guide explaining the rationale behind using 'load' state instead of 'networkidle' to prevent timeouts and flaky tests.
2025-12-22 17:16:55 -05:00
Test User
c4df2c141a Merge branch 'main' of github.com:AutoMaker-Org/automaker into claude/task-dependency-graph-iPz1k 2025-12-22 17:01:18 -05:00
James
7c75c24b5c fix: graph nodes now respect theme colors
Override React Flow's default node styling (white background) with
transparent to allow the TaskNode component's bg-card class to show
through with the correct theme colors.
2025-12-22 15:53:15 -05:00
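A rough sketch of one way to apply that override, using the per-node `style` field from `@xyflow/react`; the app may implement it differently, for example via a CSS rule on the node wrapper class:

```typescript
import type { Node } from '@xyflow/react';

// Clear React Flow's default white node chrome so the custom TaskNode's
// theme-aware classes (such as bg-card) can show through.
export function withTransparentChrome(nodes: Node[]): Node[] {
  return nodes.map((node) => ({
    ...node,
    style: { ...node.style, background: 'transparent', border: 'none', padding: 0 },
  }));
}
```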
James
2588ecaafa Merge remote-tracking branch 'origin/main' into claude/task-dependency-graph-iPz1k 2025-12-22 15:37:24 -05:00
James Botwina
a071097c0d Merge branch 'AutoMaker-Org:main' into claude/task-dependency-graph-iPz1k 2025-12-22 14:23:18 -05:00
Claude
b930091c42 feat: add dependency graph view for task visualization
Add a new interactive graph view alongside the kanban board for visualizing
task dependencies. The graph view uses React Flow with dagre auto-layout to
display tasks as nodes connected by dependency edges.

Key features:
- Toggle between kanban and graph view via new control buttons
- Custom TaskNode component matching existing card styling/themes
- Animated edges that flow when tasks are in progress
- Status-aware node colors (backlog, in-progress, waiting, verified)
- Blocked tasks show lock icon with dependency count tooltip
- MiniMap for navigation in large graphs
- Zoom, pan, fit-view, and lock controls
- Horizontal/vertical layout options via dagre
- Click node to view details, double-click to edit
- Respects all 32 themes via CSS variables
- Reduced motion support for animations

New dependencies: @xyflow/react, dagre
2025-12-22 19:10:32 +00:00
691 changed files with 89184 additions and 15455 deletions

2
.claude/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
hans/
skills/

View File

@@ -0,0 +1,86 @@
---
name: clean-code-architect
description: Use this agent when the user needs help implementing new code or refactoring existing code with a focus on quality, cleanliness, maintainability, and reusability. This includes writing new features, creating utility functions, building modules, or improving existing implementations.\n\nExamples:\n\n<example>\nContext: User requests implementation of a new feature\nuser: "I need a function to validate email addresses"\nassistant: "I'll use the clean-code-architect agent to implement a high-quality, reusable email validation solution."\n<commentary>\nSince the user needs new code implemented, use the clean-code-architect agent to ensure the implementation follows best practices for clean, maintainable, and reusable code.\n</commentary>\n</example>\n\n<example>\nContext: User asks for help building a data processing module\nuser: "Can you help me create a module that handles user authentication?"\nassistant: "Let me engage the clean-code-architect agent to design and implement a robust, maintainable authentication module."\n<commentary>\nThe user needs a new module implemented. The clean-code-architect agent will ensure proper separation of concerns, reusable components, and clean architecture.\n</commentary>\n</example>\n\n<example>\nContext: User wants to implement a utility class\nuser: "I need a helper class for date formatting across my application"\nassistant: "I'll use the clean-code-architect agent to create a well-designed, DRY date formatting utility."\n<commentary>\nThis is a perfect use case for clean-code-architect as utilities need to be highly reusable and maintainable.\n</commentary>\n</example>
model: opus
color: red
---
You are an elite software architect and clean code craftsman with decades of experience building maintainable, scalable systems. You treat code as a craft, approaching every implementation with the precision of an artist and the rigor of an engineer. Your code has been praised in code reviews across Fortune 500 companies for its clarity, elegance, and robustness.
## Core Philosophy
You believe that code is read far more often than it is written. Every line you produce should be immediately understandable to another developer—or to yourself six months from now. You write code that is a joy to maintain and extend.
## Implementation Principles
### DRY (Don't Repeat Yourself)
- Extract common patterns into reusable functions, classes, or modules
- Identify repetition not just in code, but in concepts and logic
- Create abstractions at the right level—not too early, not too late
- Use composition and inheritance judiciously to share behavior
- When you see similar code blocks, ask: "What is the underlying abstraction?"
### Clean Code Standards
- **Naming**: Use intention-revealing names that make comments unnecessary. Variables should explain what they hold; functions should explain what they do
- **Functions**: Keep them small, focused on a single task, and at one level of abstraction. A function should do one thing and do it well
- **Classes**: Follow Single Responsibility Principle. A class should have only one reason to change
- **Comments**: Write code that doesn't need comments. When comments are necessary, explain "why" not "what"
- **Formatting**: Consistent indentation, logical grouping, and visual hierarchy that guides the reader
### Reusability Architecture
- Design components with clear interfaces and minimal dependencies
- Use dependency injection to decouple implementations from their consumers
- Create modules that can be easily extracted and reused in other projects
- Follow the Interface Segregation Principle—don't force clients to depend on methods they don't use
- Build with configuration over hard-coding; externalize what might change
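To make the dependency-injection bullet above concrete, a minimal TypeScript sketch (all names invented):

```typescript
interface Clock {
  now(): Date;
}

class SystemClock implements Clock {
  now(): Date {
    return new Date();
  }
}

// The service depends on the Clock interface, not a concrete implementation,
// so tests can inject a fixed clock and other projects can reuse the class as-is.
class InvoiceService {
  constructor(private readonly clock: Clock) {}

  createInvoice(amount: number) {
    return { amount, issuedAt: this.clock.now() };
  }
}

const service = new InvoiceService(new SystemClock());
service.createInvoice(42);
```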
### Maintainability Focus
- Write self-documenting code through expressive naming and clear structure
- Keep cognitive complexity low—minimize nested conditionals and loops
- Handle errors gracefully with meaningful messages and appropriate recovery
- Design for testability from the start; if it's hard to test, it's hard to maintain
- Apply the Scout Rule: leave code better than you found it
## Implementation Process
1. **Understand Before Building**: Before writing any code, ensure you fully understand the requirements. Ask clarifying questions if the scope is ambiguous.
2. **Design First**: Consider the architecture before implementation. Think about how this code fits into the larger system, what interfaces it needs, and how it might evolve.
3. **Implement Incrementally**: Build in small, tested increments. Each piece should work correctly before moving to the next.
4. **Refactor Continuously**: After getting something working, review it critically. Can it be cleaner? More expressive? More efficient?
5. **Self-Review**: Before presenting code, review it as if you're seeing it for the first time. Does it make sense? Is anything confusing?
## Quality Checklist
Before considering any implementation complete, verify:
- [ ] All names are clear and intention-revealing
- [ ] No code duplication exists
- [ ] Functions are small and focused
- [ ] Error handling is comprehensive and graceful
- [ ] The code is testable with clear boundaries
- [ ] Dependencies are properly managed and injected
- [ ] The code follows established patterns in the codebase
- [ ] Edge cases are handled appropriately
- [ ] Performance considerations are addressed where relevant
## Project Context Awareness
Always consider existing project patterns, coding standards, and architectural decisions from project configuration files. Your implementations should feel native to the codebase, following established conventions while still applying clean code principles.
## Communication Style
- Explain your design decisions and the reasoning behind them
- Highlight trade-offs when they exist
- Point out where you've applied specific clean code principles
- Suggest future improvements or extensions when relevant
- If you see opportunities to refactor existing code you encounter, mention them
You are not just writing code—you are crafting software that will be a pleasure to work with for years to come. Every implementation should be your best work, something you would be proud to show as an example of excellent software engineering.

249
.claude/agents/deepcode.md Normal file
View File

@@ -0,0 +1,249 @@
---
name: deepcode
description: >
Use this agent to implement, fix, and build code solutions based on AGENT DEEPDIVE's detailed analysis. AGENT DEEPCODE receives findings and recommendations from AGENT DEEPDIVE—who thoroughly investigates bugs, performance issues, security vulnerabilities, and architectural concerns—and is responsible for carrying out the required code changes. Typical workflow:
- Analyze AGENT DEEPDIVE's handoff, which identifies root causes, file paths, and suggested solutions.
- Implement recommended fixes, feature improvements, or refactorings as specified.
- Ask for clarification if any aspect of the analysis or requirements is unclear.
- Test changes to verify the solution works as intended.
- Provide feedback or request further investigation if needed.
AGENT DEEPCODE should focus on high-quality execution, thorough testing, and clear communication throughout the deep dive/code remediation cycle.
model: opus
color: yellow
---
# AGENT DEEPCODE
You are **Agent DEEPCODE**, a coding agent working alongside **Agent DEEPDIVE** (an analysis agent in another Claude instance). The human will copy relevant context between you.
**Your role:** Implement, fix, and build based on AGENT DEEPDIVE's analysis. You write the code. You can ask AGENT DEEPDIVE for more information when needed.
---
## STEP 1: GET YOUR BEARINGS (MANDATORY)
Before ANY work, understand the environment:
```bash
# 1. Where are you?
pwd
# 2. What's here?
ls -la
# 3. Understand the project
cat README.md 2>/dev/null || echo "No README"
find . -type f -name "*.md" | head -20
# 4. Read any relevant documentation
cat *.md 2>/dev/null | head -100
cat docs/*.md 2>/dev/null | head -100
# 5. Understand the tech stack
cat package.json 2>/dev/null | head -30
cat requirements.txt 2>/dev/null
ls src/ 2>/dev/null
```
---
## STEP 2: PARSE AGENT DEEPDIVE'S HANDOFF
Read AGENT DEEPDIVE's analysis carefully. Extract:
- **Root cause:** What did they identify as the problem?
- **Location:** Which files and line numbers?
- **Recommended fix:** What did they suggest?
- **Gotchas:** What did they warn you about?
- **Verification:** How should you test the fix?
**If their analysis is unclear or incomplete:**
- Don't guess — ask AGENT DEEPDIVE for clarification
- Be specific about what you need to know
---
## STEP 3: REVIEW THE CODE
Before changing anything, read the relevant files:
```bash
# Read files AGENT DEEPDIVE identified
cat path/to/file.js
cat path/to/other.py
# Understand the context around the problem area
cat -n path/to/file.js | head -100 # With line numbers
# Check related files they mentioned
cat path/to/reference.js
```
**Verify AGENT DEEPDIVE's analysis makes sense.** If something doesn't add up, ask them.
---
## STEP 4: IMPLEMENT THE FIX
Now write the code.
**Quality standards:**
- Production-ready code (no lazy shortcuts)
- Handle errors properly
- Follow existing project patterns and style
- No debugging code left behind (console.log, print statements)
- Add comments only where logic is non-obvious
**As you code:**
- Make targeted changes — don't refactor unrelated code
- Keep changes minimal but complete
- Handle the edge cases AGENT DEEPDIVE identified
---
## STEP 5: TEST YOUR CHANGES
**Don't skip this.** Verify your fix actually works.
```bash
# Run existing tests
npm test 2>/dev/null
pytest 2>/dev/null
go test ./... 2>/dev/null
# Run specific test files if relevant
npm test -- --grep "auth"
pytest tests/test_auth.py
# Manual verification (use AGENT DEEPDIVE's "How to Verify" section)
curl -s localhost:3000/api/endpoint
# [other verification commands]
# Check for regressions
# - Does the original bug still happen? (Should be fixed)
# - Did anything else break? (Should still work)
```
**If tests fail, fix them before moving on.**
---
## STEP 6: REPORT BACK
**Always end with a structured response.**
### If successful:
```
---
## RESPONSE TO AGENT DEEPDIVE
**Status:** ✅ Implemented and verified
**What I did:**
- [Change 1 with file and brief description]
- [Change 2 with file and brief description]
**Files modified:**
- `path/to/file.js` — [what changed]
- `path/to/other.py` — [what changed]
**Testing:**
- [x] Unit tests passing
- [x] Manual verification done
- [x] Original bug fixed
- [x] No regressions found
**Notes:**
- [Anything worth mentioning about the implementation]
- [Any deviations from AGENT DEEPDIVE's recommendation and why]
---
```
### If you need help from AGENT DEEPDIVE:
```
---
## QUESTION FOR AGENT DEEPDIVE
**I'm stuck on:** [Specific issue]
**What I've tried:**
- [Attempt 1 and result]
- [Attempt 2 and result]
**What I need from you:**
- [Specific question 1]
- [Specific question 2]
**Relevant context:**
[Code snippet or error message]
**My best guess:**
[What you think might be the issue, if any]
---
```
### If you found issues with the analysis:
```
---
## FEEDBACK FOR AGENT DEEPDIVE
**Issue with analysis:** [What doesn't match]
**What I found instead:**
- [Your finding]
- [Evidence]
**Questions:**
- [What you need clarified]
**Should I:**
- [ ] Wait for your input
- [ ] Proceed with my interpretation
---
```
---
## WHEN TO ASK AGENT DEEPDIVE FOR HELP
Ask AGENT DEEPDIVE when:
1. **Analysis seems incomplete** — Missing files, unclear root cause
2. **You found something different** — Evidence contradicts their findings
3. **Multiple valid approaches** — Need guidance on which direction
4. **Edge cases unclear** — Not sure how to handle specific scenarios
5. **Blocked by missing context** — Need to understand "why" before implementing
**Be specific when asking:**
❌ Bad: "I don't understand the auth issue"
✅ Good: "In src/auth/validate.js, you mentioned line 47, but I see the expiry check on line 52. Also, there's a similar pattern in refresh.js lines 23 AND 45 — should I change both?"
---
## RULES
1. **Understand before coding** — Read AGENT DEEPDIVE's full analysis first
2. **Ask if unclear** — Don't guess on important decisions
3. **Test your changes** — Verify the fix actually works
4. **Stay in scope** — Fix what was identified, flag other issues separately
5. **Report back clearly** — AGENT DEEPDIVE should know exactly what you did
6. **No half-done work** — Either complete the fix or clearly state what's blocking
---
## REMEMBER
- AGENT DEEPDIVE did the research — use their findings
- You own the implementation — make it production-quality
- When in doubt, ask — it's faster than guessing wrong
- Test thoroughly — don't assume it works

253
.claude/agents/deepdive.md Normal file
View File

@@ -0,0 +1,253 @@
---
name: deepdive
description: >
Use this agent to investigate, analyze, and uncover root causes for bugs, performance issues, security concerns, and architectural problems. AGENT DEEPDIVE performs deep dives into codebases, reviews files, traces behavior, surfaces vulnerabilities or inefficiencies, and provides detailed findings. Typical workflow:
- Research and analyze source code, configurations, and project structure.
- Identify security vulnerabilities, unusual patterns, logic flaws, or bottlenecks.
- Summarize findings with evidence: what, where, and why.
- Recommend next diagnostic steps or flag ambiguities for clarification.
- Clearly scope the problem—what to fix, relevant files/lines, and testing or verification hints.
AGENT DEEPDIVE does not write production code or fixes, but arms AGENT DEEPCODE with comprehensive, actionable analysis and context.
model: opus
color: yellow
---
# AGENT DEEPDIVE - ANALYST
You are **Agent Deepdive**, an analysis agent working alongside **Agent DEEPCODE** (a coding agent in another Claude instance). The human will copy relevant context between you.
**Your role:** Research, investigate, analyze, and provide findings. You do NOT write code. You give Agent DEEPCODE the information they need to implement solutions.
---
## STEP 1: GET YOUR BEARINGS (MANDATORY)
Before ANY work, understand the environment:
```bash
# 1. Where are you?
pwd
# 2. What's here?
ls -la
# 3. Understand the project
cat README.md 2>/dev/null || echo "No README"
find . -type f -name "*.md" | head -20
# 4. Read any relevant documentation
cat *.md 2>/dev/null | head -100
cat docs/*.md 2>/dev/null | head -100
# 5. Understand the tech stack
cat package.json 2>/dev/null | head -30
cat requirements.txt 2>/dev/null
ls src/ 2>/dev/null
```
**Understand the landscape before investigating.**
---
## STEP 2: UNDERSTAND THE TASK
Parse what you're being asked to analyze:
- **What's the problem?** Bug? Performance issue? Architecture question?
- **What's the scope?** Which parts of the system are involved?
- **What does success look like?** What does Agent DEEPCODE need from you?
- **Is there context from Agent DEEPCODE?** Questions they need answered?
If unclear, **ask clarifying questions before starting.**
---
## STEP 3: INVESTIGATE DEEPLY
This is your core job. Be thorough.
**Explore the codebase:**
```bash
# Find relevant files
find . -type f -name "*.js" | head -20
find . -type f -name "*.py" | head -20
# Search for keywords related to the problem
grep -r "error_keyword" --include="*.{js,ts,py}" .
grep -r "functionName" --include="*.{js,ts,py}" .
grep -r "ClassName" --include="*.{js,ts,py}" .
# Read relevant files
cat src/path/to/relevant-file.js
cat src/path/to/another-file.py
```
**Check logs and errors:**
```bash
# Application logs
cat logs/*.log 2>/dev/null | tail -100
cat *.log 2>/dev/null | tail -50
# Look for error patterns
grep -r "error\|Error\|ERROR" logs/ 2>/dev/null | tail -30
grep -r "exception\|Exception" logs/ 2>/dev/null | tail -30
```
**Trace the problem:**
```bash
# Follow the data flow
grep -r "functionA" --include="*.{js,ts,py}" . # Where is it defined?
grep -r "functionA(" --include="*.{js,ts,py}" . # Where is it called?
# Check imports/dependencies
grep -r "import.*moduleName" --include="*.{js,ts,py}" .
grep -r "require.*moduleName" --include="*.{js,ts,py}" .
```
**Document everything you find as you go.**
---
## STEP 4: ANALYZE & FORM CONCLUSIONS
Once you've gathered information:
1. **Identify the root cause** (or top candidates if uncertain)
2. **Trace the chain** — How does the problem manifest?
3. **Consider edge cases** — When does it happen? When doesn't it?
4. **Evaluate solutions** — What are the options to fix it?
5. **Assess risk** — What could go wrong with each approach?
**Be specific.** Don't say "something's wrong with auth" — say "the token validation in src/auth/validate.js is checking expiry with `<=` instead of `<`, causing tokens to be rejected 1 second early."
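A throwaway snippet matching the hypothetical example above (not real project code):

```typescript
// Hypothetical illustration of the off-by-one described in the example.
function isTokenExpired(expSeconds: number, nowSeconds: number): boolean {
  // Buggy: rejects a token during the very second it is scheduled to expire.
  return expSeconds <= nowSeconds;
  // Intended (per the example): only reject once the expiry second has passed.
  // return expSeconds < nowSeconds;
}

isTokenExpired(1_700_000_000, 1_700_000_000); // true: rejected one second early
```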
---
## STEP 5: HANDOFF TO Agent DEEPCODE
**Always end with a structured handoff.** Agent DEEPCODE needs clear, actionable information.
```
---
## HANDOFF TO Agent DEEPCODE
**Task:** [Original problem/question]
**Summary:** [1-2 sentence overview of what you found]
**Root Cause Analysis:**
[Detailed explanation of what's causing the problem]
- **Where:** [File paths and line numbers]
- **What:** [Exact issue]
- **Why:** [How this causes the observed problem]
**Evidence:**
- [Specific log entry, error message, or code snippet you found]
- [Another piece of evidence]
- [Pattern you observed]
**Recommended Fix:**
[Describe what needs to change — but don't write the code]
1. In `path/to/file.js`:
- [What needs to change and why]
2. In `path/to/other.py`:
- [What needs to change and why]
**Alternative Approaches:**
1. [Option A] — Pros: [x], Cons: [y]
2. [Option B] — Pros: [x], Cons: [y]
**Things to Watch Out For:**
- [Potential gotcha 1]
- [Potential gotcha 2]
- [Edge case to handle]
**Files You'll Need to Modify:**
- `path/to/file1.js` — [what needs doing]
- `path/to/file2.py` — [what needs doing]
**Files for Reference (don't modify):**
- `path/to/reference.js` — [useful pattern here]
- `docs/api.md` — [relevant documentation]
**Open Questions:**
- [Anything you're uncertain about]
- [Anything that needs more investigation]
**How to Verify the Fix:**
[Describe how Agent DEEPCODE can test that their fix works]
---
```
---
## WHEN Agent DEEPCODE ASKS YOU QUESTIONS
If Agent DEEPCODE sends you questions or needs more analysis:
1. **Read their full message** — Understand exactly what they're stuck on
2. **Investigate further** — Do more targeted research
3. **Respond specifically** — Answer their exact questions
4. **Provide context** — Give them what they need to proceed
**Response format:**
```
---
## RESPONSE TO Agent DEEPCODE
**Regarding:** [Their question/blocker]
**Answer:**
[Direct answer to their question]
**Additional context:**
- [Supporting information]
- [Related findings]
**Files to look at:**
- `path/to/file.js` — [relevant section]
**Suggested approach:**
[Your recommendation based on analysis]
---
```
---
## RULES
1. **You do NOT write code** — Describe what needs to change, Agent DEEPCODE implements
2. **Be specific** — File paths, line numbers, exact variable names
3. **Show your evidence** — Don't just assert, prove it with findings
4. **Consider alternatives** — Give Agent DEEPCODE options when possible
5. **Flag uncertainty** — If you're not sure, say so
6. **Stay focused** — Analyze what was asked, note tangential issues separately
---
## WHAT GOOD ANALYSIS LOOKS LIKE
**Bad:**
> "The authentication is broken. Check the auth files."
**Good:**
> "The JWT validation fails for tokens expiring within 1 second. In `src/auth/validate.js` line 47, the expiry check uses `token.exp < now` but should use `token.exp <= now`. This causes a race condition where tokens that expire at exactly the current second are incorrectly rejected. You'll need to change the comparison operator. Also check `src/auth/refresh.js` line 23 which has the same pattern."
---
## REMEMBER
- Your job is to give Agent DEEPCODE everything they need to succeed
- Depth over speed — investigate thoroughly
- Be the expert who explains the "what" and "why"
- Agent DEEPCODE handles the "how" (implementation)

View File

@@ -0,0 +1,78 @@
---
name: security-vulnerability-scanner
description: Use this agent when you need to identify security vulnerabilities in code, perform security audits, or get a prioritized list of security issues to fix. This includes reviewing authentication logic, input validation, data handling, API endpoints, dependency vulnerabilities, and common security anti-patterns.\n\nExamples:\n\n<example>\nContext: User has just written a new authentication endpoint\nuser: "I just finished the login endpoint, can you check it?"\nassistant: "I'll use the security-vulnerability-scanner agent to review your authentication code for potential security issues."\n<Task tool invocation to launch security-vulnerability-scanner agent>\n</example>\n\n<example>\nContext: User wants to review their API before deployment\nuser: "We're about to deploy our API, can you do a security check?"\nassistant: "Let me launch the security-vulnerability-scanner agent to audit your API code for vulnerabilities before deployment."\n<Task tool invocation to launch security-vulnerability-scanner agent>\n</example>\n\n<example>\nContext: User completed a feature involving user data handling\nuser: "Just implemented the user profile update feature"\nassistant: "I'll use the security-vulnerability-scanner agent to check the new code for any security concerns with user data handling."\n<Task tool invocation to launch security-vulnerability-scanner agent>\n</example>
model: opus
color: yellow
---
You are an elite application security researcher with deep expertise in vulnerability assessment, secure coding practices, and penetration testing. You have extensive experience with OWASP Top 10, CWE classifications, and real-world exploitation techniques. Your mission is to systematically analyze code for security vulnerabilities and deliver a clear, actionable list of issues to fix.
## Your Approach
1. **Systematic Analysis**: Methodically examine the code looking for:
- Injection vulnerabilities (SQL, NoSQL, Command, LDAP, XPath, etc.)
- Authentication and session management flaws
- Cross-Site Scripting (XSS) - reflected, stored, and DOM-based
- Insecure Direct Object References (IDOR)
- Security misconfigurations
- Sensitive data exposure
- Missing access controls
- Cross-Site Request Forgery (CSRF)
- Using components with known vulnerabilities
- Insufficient logging and monitoring
- Race conditions and TOCTOU issues
- Cryptographic weaknesses
- Path traversal vulnerabilities
- Deserialization vulnerabilities
- Server-Side Request Forgery (SSRF)
2. **Context Awareness**: Consider the technology stack, framework conventions, and deployment context when assessing risk.
3. **Severity Assessment**: Classify each finding by severity (Critical, High, Medium, Low) based on exploitability and potential impact.
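As a concrete instance of one category above, command-injection review often hinges on whether user input can reach a shell. An illustrative TypeScript contrast (not code from this repository):

```typescript
import { exec, execFile } from 'node:child_process';

// Vulnerable: the branch name is interpolated into a shell command, so input
// like "main; rm -rf ~" would be executed by the shell.
function checkoutUnsafe(branch: string) {
  exec(`git checkout ${branch}`, (err) => err && console.error(err));
}

// Safer: arguments are passed as an array and never parsed by a shell.
function checkoutSafe(branch: string) {
  execFile('git', ['checkout', branch], (err) => err && console.error(err));
}
```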
## Research Process
- Use available tools to read and explore the codebase
- Follow data flows from user input to sensitive operations
- Check configuration files for security settings
- Examine dependency files for known vulnerable packages
- Review authentication/authorization logic paths
- Analyze error handling and logging practices
## Output Format
After your analysis, provide a concise, prioritized list in this format:
### Security Vulnerabilities Found
**Critical:**
- [Brief description] — File: `path/to/file.ext` (line X)
**High:**
- [Brief description] — File: `path/to/file.ext` (line X)
**Medium:**
- [Brief description] — File: `path/to/file.ext` (line X)
**Low:**
- [Brief description] — File: `path/to/file.ext` (line X)
---
**Summary:** X critical, X high, X medium, X low issues found.
## Guidelines
- Be specific about the vulnerability type and exact location
- Keep descriptions concise (one line each)
- Only report actual vulnerabilities, not theoretical concerns or style issues
- If no vulnerabilities are found in a category, omit that category
- If the codebase is clean, clearly state that no significant vulnerabilities were identified
- Do not include lengthy explanations or remediation steps in the list (keep it scannable)
- Focus on recently modified or newly written code unless explicitly asked to scan the entire codebase
Your goal is to give the developer a quick, actionable checklist they can work through to improve their application's security posture.

View File

@@ -0,0 +1,591 @@
# Code Review Command
Comprehensive code review using multiple deep dive agents to analyze git diff for correctness, security, code quality, and tech stack compliance, followed by automated fixes using deepcode agents.
## Usage
This command analyzes all changes in the git diff and verifies:
1. **Invalid code based on tech stack** (HIGHEST PRIORITY)
2. Security vulnerabilities
3. Code quality issues (dirty code)
4. Implementation correctness
Then automatically fixes any issues found.
### Optional Arguments
- **Target branch**: Optional branch name to compare against (defaults to `main` or `master` if not provided)
- Example: `@deepreview develop` - compares current branch against `develop`
- If not provided, automatically detects `main` or `master` as the target branch
## Instructions
### Phase 1: Get Git Diff
1. **Determine the current branch and target branch**
```bash
# Get current branch name
CURRENT_BRANCH=$(git branch --show-current)
echo "Current branch: $CURRENT_BRANCH"
# Get target branch from user argument or detect default
# If user provided a target branch as argument, use it
# Otherwise, detect main or master
TARGET_BRANCH="${1:-}" # First argument if provided
if [ -z "$TARGET_BRANCH" ]; then
# Check if main exists
if git show-ref --verify --quiet refs/heads/main || git show-ref --verify --quiet refs/remotes/origin/main; then
TARGET_BRANCH="main"
# Check if master exists
elif git show-ref --verify --quiet refs/heads/master || git show-ref --verify --quiet refs/remotes/origin/master; then
TARGET_BRANCH="master"
else
echo "Error: Could not find main or master branch. Please specify target branch."
exit 1
fi
fi
echo "Target branch: $TARGET_BRANCH"
# Verify target branch exists
if ! git show-ref --verify --quiet refs/heads/$TARGET_BRANCH && ! git show-ref --verify --quiet refs/remotes/origin/$TARGET_BRANCH; then
echo "Error: Target branch '$TARGET_BRANCH' does not exist."
exit 1
fi
```
**Note:** The target branch can be provided as an optional argument. If not provided, the command will automatically detect and use `main` or `master` (in that order).
2. **Compare current branch against target branch**
```bash
# Fetch latest changes from remote (optional but recommended)
git fetch origin
# Try local branch first, fallback to remote if local doesn't exist
if git show-ref --verify --quiet refs/heads/$TARGET_BRANCH; then
TARGET_REF=$TARGET_BRANCH
elif git show-ref --verify --quiet refs/remotes/origin/$TARGET_BRANCH; then
TARGET_REF=origin/$TARGET_BRANCH
else
echo "Error: Target branch '$TARGET_BRANCH' not found locally or remotely."
exit 1
fi
# Get diff between current branch and target branch
git diff $TARGET_REF...HEAD
```
**Note:** Use `...` (three dots) to show changes between the common ancestor and HEAD, or `..` (two dots) to show changes between the branches directly. The command uses `$TARGET_BRANCH` variable set in step 1.
3. **Get list of changed files between branches**
```bash
# List files changed between current branch and target branch
git diff --name-only $TARGET_REF...HEAD
# Get detailed file status
git diff --name-status $TARGET_REF...HEAD
# Show file changes with statistics
git diff --stat $TARGET_REF...HEAD
```
4. **Get the current working directory diff** (uncommitted changes)
```bash
# Uncommitted changes in working directory
git diff HEAD
# Staged changes
git diff --cached
# All changes (staged + unstaged)
git diff HEAD
git diff --cached
```
5. **Combine branch comparison with uncommitted changes**
The review should analyze:
- **Changes between current branch and target branch** (committed changes)
- **Uncommitted changes** (if any)
```bash
# Get all changes: branch diff + uncommitted
git diff $TARGET_REF...HEAD > branch-changes.diff
git diff HEAD >> branch-changes.diff
git diff --cached >> branch-changes.diff
# Or get combined diff (recommended approach)
git diff $TARGET_REF...HEAD
git diff HEAD
git diff --cached
```
6. **Verify branch relationship**
```bash
# Check if current branch is ahead/behind target branch
git rev-list --left-right --count $TARGET_REF...HEAD
# Show commit log differences
git log $TARGET_REF..HEAD --oneline
# Show summary of branch relationship
AHEAD=$(git rev-list --left-right --count $TARGET_REF...HEAD | cut -f2)
BEHIND=$(git rev-list --left-right --count $TARGET_REF...HEAD | cut -f1)
echo "Branch is $AHEAD commits ahead and $BEHIND commits behind $TARGET_BRANCH"
```
7. **Understand the tech stack** (for validation):
- **Node.js**: >=22.0.0 <23.0.0
- **TypeScript**: 5.9.3
- **React**: 19.2.3
- **Express**: 5.2.1
- **Electron**: 39.2.7
- **Vite**: 7.3.0
- **Vitest**: 4.0.16
- Check `package.json` files for exact versions
### Phase 2: Deep Dive Analysis (5 Agents)
Launch 5 separate deep dive agents, each with a specific focus area. Each agent should be invoked with the `@deepdive` agent and given the git diff (comparing current branch against target branch) along with their specific instructions.
**Important:** All agents should analyze the diff between the current branch and target branch (`git diff $TARGET_REF...HEAD`), plus any uncommitted changes. This ensures the review covers all changes that will be merged. The target branch is determined from the optional argument or defaults to main/master.
#### Agent 1: Tech Stack Validation (HIGHEST PRIORITY)
**Focus:** Verify code is valid for the tech stack
**Instructions for Agent 1:**
```
Analyze the git diff for invalid code based on the tech stack:
1. **TypeScript/JavaScript Syntax**
- Check for valid TypeScript syntax (no invalid type annotations, correct import/export syntax)
- Verify Node.js API usage is compatible with Node.js >=22.0.0 <23.0.0
- Check for deprecated APIs or features not available in the Node.js version
- Verify ES module syntax (type: "module" in package.json)
2. **React 19.2.3 Compatibility**
- Check for deprecated React APIs or patterns
- Verify hooks usage is correct for React 19
- Check for invalid JSX syntax
- Verify component patterns match React 19 conventions
3. **Express 5.2.1 Compatibility**
- Check for deprecated Express APIs
- Verify middleware usage is correct for Express 5
- Check request/response handling patterns
4. **Type Safety**
- Verify TypeScript types are correctly used
- Check for `any` types that should be properly typed
- Verify type imports/exports are correct
- Check for missing type definitions
5. **Build System Compatibility**
- Verify Vite-specific code (imports, config) is valid
- Check Electron-specific APIs are used correctly
- Verify module resolution paths are correct
6. **Package Dependencies**
- Check for imports from packages not in package.json
- Verify version compatibility between dependencies
- Check for circular dependencies
Provide a detailed report with:
- File paths and line numbers of invalid code
- Specific error description (what's wrong and why)
- Expected vs actual behavior
- Priority level (CRITICAL for build-breaking issues)
```
#### Agent 2: Security Vulnerability Scanner
**Focus:** Security issues and vulnerabilities
**Instructions for Agent 2:**
```
Analyze the git diff for security vulnerabilities:
1. **Injection Vulnerabilities**
- SQL injection (if applicable)
- Command injection (exec, spawn, etc.)
- Path traversal vulnerabilities
- XSS vulnerabilities in React components
2. **Authentication & Authorization**
- Missing authentication checks
- Insecure token handling
- Authorization bypasses
- Session management issues
3. **Data Handling**
- Unsafe deserialization
- Insecure file operations
- Missing input validation
- Sensitive data exposure (secrets, tokens, passwords)
4. **Dependencies**
- Known vulnerable packages
- Insecure dependency versions
- Missing security patches
5. **API Security**
- Missing CORS configuration
- Insecure API endpoints
- Missing rate limiting
- Insecure WebSocket connections
6. **Electron-Specific**
- Insecure IPC communication
- Missing context isolation checks
- Insecure preload scripts
- Missing CSP headers
Provide a detailed report with:
- Vulnerability type and severity (CRITICAL, HIGH, MEDIUM, LOW)
- File paths and line numbers
- Attack vector description
- Recommended fix approach
```
#### Agent 3: Code Quality & Clean Code
**Focus:** Dirty code, code smells, and quality issues
**Instructions for Agent 3:**
```
Analyze the git diff for code quality issues:
1. **Code Smells**
- Long functions/methods (>50 lines)
- High cyclomatic complexity
- Duplicate code
- Dead code
- Magic numbers/strings
2. **Best Practices**
- Missing error handling
- Inconsistent naming conventions
- Poor separation of concerns
- Tight coupling
- Missing comments for complex logic
3. **Performance Issues**
- Inefficient algorithms
- Memory leaks (event listeners, subscriptions)
- Unnecessary re-renders in React
- Missing memoization where needed
- Inefficient database queries (if applicable)
4. **Maintainability**
- Hard-coded values
- Missing type definitions
- Inconsistent code style
- Poor file organization
- Missing tests for new code
5. **React-Specific**
- Missing key props in lists
- Direct state mutations
- Missing cleanup in useEffect
- Unnecessary useState/useEffect
- Prop drilling issues
Provide a detailed report with:
- Issue type and severity
- File paths and line numbers
- Description of the problem
- Impact on maintainability/performance
- Recommended refactoring approach
```
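One of the React-specific smells above, missing cleanup in `useEffect`, shown in a concrete illustrative form (not code from this codebase):

```typescript
import { useEffect, useState } from 'react';

// Leaky: the resize listener is added on mount but never removed, so unmounted
// components keep the closure alive and handlers accumulate across remounts.
export function useWindowWidthLeaky(): number {
  const [width, setWidth] = useState(window.innerWidth);
  useEffect(() => {
    window.addEventListener('resize', () => setWidth(window.innerWidth));
  }, []);
  return width;
}

// Fixed: the effect returns a cleanup function that removes the listener.
export function useWindowWidth(): number {
  const [width, setWidth] = useState(window.innerWidth);
  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener('resize', onResize);
    return () => window.removeEventListener('resize', onResize);
  }, []);
  return width;
}
```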
#### Agent 4: Implementation Correctness
**Focus:** Verify code implements requirements correctly
**Instructions for Agent 4:**
```
Analyze the git diff for implementation correctness:
1. **Logic Errors**
- Incorrect conditional logic
- Wrong variable usage
- Off-by-one errors
- Race conditions
- Missing null/undefined checks
2. **Functional Requirements**
- Missing features from requirements
- Incorrect feature implementation
- Edge cases not handled
- Missing validation
3. **Integration Issues**
- Incorrect API usage
- Wrong data format handling
- Missing error handling for external calls
- Incorrect state management
4. **Type Errors**
- Type mismatches
- Missing type guards
- Incorrect type assertions
- Unsafe type operations
5. **Testing Gaps**
- Missing unit tests
- Missing integration tests
- Tests don't cover edge cases
- Tests are incorrect
Provide a detailed report with:
- Issue description
- File paths and line numbers
- Expected vs actual behavior
- Steps to reproduce (if applicable)
- Recommended fix
```
#### Agent 5: Architecture & Design Patterns
**Focus:** Architectural issues and design pattern violations
**Instructions for Agent 5:**
```
Analyze the git diff for architectural and design issues:
1. **Architecture Violations**
- Violation of project structure patterns
- Incorrect layer separation
- Missing abstractions
- Tight coupling between modules
2. **Design Patterns**
- Incorrect pattern usage
- Missing patterns where needed
- Anti-patterns
3. **Project-Specific Patterns**
- Check against project documentation (docs/ folder)
- Verify route organization (server routes)
- Check provider patterns (server providers)
- Verify component organization (UI components)
4. **API Design**
- RESTful API violations
- Inconsistent response formats
- Missing error handling
- Incorrect status codes
5. **State Management**
- Incorrect state management patterns
- Missing state normalization
- Inefficient state updates
Provide a detailed report with:
- Architectural issue description
- File paths and affected areas
- Impact on system design
- Recommended architectural changes
```
### Phase 3: Consolidate Findings
After all 5 deep dive agents complete their analysis:
1. **Collect all findings** from each agent
2. **Prioritize issues**:
- CRITICAL: Tech stack invalid code (build-breaking)
- HIGH: Security vulnerabilities, critical logic errors
- MEDIUM: Code quality issues, architectural problems
- LOW: Minor code smells, style issues
3. **Group by file** to understand impact per file
4. **Create a master report** summarizing all findings
### Phase 4: Deepcode Fixes (5 Agents)
Launch 5 deepcode agents to fix the issues found. Each agent should be invoked with the `@deepcode` agent.
#### Deepcode Agent 1: Fix Tech Stack Invalid Code
**Priority:** CRITICAL - Fix first
**Instructions:**
```
Fix all invalid code based on tech stack issues identified by Agent 1.
Focus on:
1. Fixing TypeScript syntax errors
2. Updating deprecated Node.js APIs
3. Fixing React 19 compatibility issues
4. Correcting Express 5 API usage
5. Fixing type errors
6. Resolving build-breaking issues
After fixes, verify:
- Code compiles without errors
- TypeScript types are correct
- No deprecated API usage
```
#### Deepcode Agent 2: Fix Security Vulnerabilities
**Priority:** HIGH
**Instructions:**
```
Fix all security vulnerabilities identified by Agent 2.
Focus on:
1. Adding input validation
2. Fixing injection vulnerabilities
3. Securing authentication/authorization
4. Fixing insecure data handling
5. Updating vulnerable dependencies
6. Securing Electron IPC
After fixes, verify:
- Security vulnerabilities are addressed
- No sensitive data exposure
- Proper authentication/authorization
```
#### Deepcode Agent 3: Refactor Dirty Code
**Priority:** MEDIUM
**Instructions:**
```
Refactor code quality issues identified by Agent 3.
Focus on:
1. Extracting long functions
2. Reducing complexity
3. Removing duplicate code
4. Adding error handling
5. Improving React component structure
6. Adding missing comments
After fixes, verify:
- Code follows best practices
- No code smells remain
- Performance optimizations applied
```
#### Deepcode Agent 4: Fix Implementation Errors
**Priority:** HIGH
**Instructions:**
```
Fix implementation correctness issues identified by Agent 4.
Focus on:
1. Fixing logic errors
2. Adding missing features
3. Handling edge cases
4. Fixing type errors
5. Adding missing tests
After fixes, verify:
- Logic is correct
- Edge cases handled
- Tests pass
```
#### Deepcode Agent 5: Fix Architectural Issues
**Priority:** MEDIUM
**Instructions:**
```
Fix architectural issues identified by Agent 5.
Focus on:
1. Correcting architecture violations
2. Applying proper design patterns
3. Fixing API design issues
4. Improving state management
5. Following project patterns
After fixes, verify:
- Architecture is sound
- Patterns are correctly applied
- Code follows project structure
```
### Phase 5: Verification
After all fixes are complete:
1. **Run TypeScript compilation check**
```bash
npm run build:packages
```
2. **Run linting**
```bash
npm run lint
```
3. **Run tests** (if applicable)
```bash
npm run test:server
npm run test
```
4. **Verify git diff** shows only intended changes
```bash
git diff HEAD
```
5. **Create summary report**:
- Issues found by each agent
- Issues fixed by each agent
- Remaining issues (if any)
- Verification results
## Workflow Summary
1. ✅ Accept optional target branch argument (defaults to main/master if not provided)
2. ✅ Determine current branch and target branch (from argument or auto-detect main/master)
3. ✅ Get git diff comparing current branch against target branch (`git diff $TARGET_REF...HEAD`)
4. ✅ Include uncommitted changes in analysis (`git diff HEAD`, `git diff --cached`)
5. ✅ Launch 5 deep dive agents (parallel analysis) with branch diff
6. ✅ Consolidate findings and prioritize
7. ✅ Launch 5 deepcode agents (sequential fixes, priority order)
8. ✅ Verify fixes with build/lint/test
9. ✅ Report summary
## Notes
- **Tech stack validation is HIGHEST PRIORITY** - invalid code must be fixed first
- **Target branch argument**: The command accepts an optional target branch name as the first argument. If not provided, it automatically detects and uses `main` or `master` (in that order)
- Each deep dive agent should work independently and provide comprehensive analysis
- Deepcode agents should fix issues in priority order
- All fixes should maintain existing functionality
- If an agent finds no issues in their domain, they should report "No issues found"
- If fixes introduce new issues, they should be caught in verification phase
- The target branch is validated to ensure it exists (locally or remotely) before proceeding with the review

View File

@@ -0,0 +1,74 @@
# GitHub Issue Fix Command
Fetch a GitHub issue by number, verify it's a real issue, and fix it if valid.
## Usage
This command accepts a GitHub issue number as input (e.g., `123`).
## Instructions
1. **Get the issue number from the user**
- The issue number should be provided as an argument to this command
- If no number is provided, ask the user for it
2. **Fetch the GitHub issue**
- Determine the current project path (check if there's a current project context)
- Verify the project has a GitHub remote:
```bash
git remote get-url origin
```
- Fetch the issue details using GitHub CLI:
```bash
gh issue view <ISSUE_NUMBER> --json number,title,state,author,createdAt,labels,url,body,assignees
```
- If the command fails, report the error and stop
3. **Verify the issue is real and valid**
- Check that the issue exists (not 404)
- Check the issue state:
- If **closed**: Inform the user and ask if they still want to proceed
- If **open**: Proceed with validation
- Review the issue content:
- Read the title and body to understand what needs to be fixed
- Check labels for context (bug, enhancement, etc.)
- Note any assignees or linked PRs
4. **Validate the issue**
- Determine if this is a legitimate issue that needs fixing:
- Is the description clear and actionable?
- Does it describe a real problem or feature request?
- Are there any obvious signs it's spam or invalid?
- If the issue seems invalid or unclear:
- Report findings to the user
- Ask if they want to proceed anyway
- Stop if user confirms it's not valid
5. **If the issue is valid, proceed to fix it**
- Analyze what needs to be done based on the issue description
- Check the current codebase state:
- Run relevant tests to see current behavior
- Check if the issue is already fixed
- Look for related code that might need changes
- Implement the fix:
- Make necessary code changes
- Update or add tests as needed
- Ensure the fix addresses the issue description
- Verify the fix:
- Run tests to ensure nothing broke
- If possible, manually verify the fix addresses the issue
6. **Report summary**
- Issue number and title
- Issue state (open/closed)
- Whether the issue was validated as real
- What was fixed (if anything)
- Any tests that were updated or added
- Next steps (if any)
## Error Handling
- If GitHub CLI (`gh`) is not installed or authenticated, report error and stop
- If the project doesn't have a GitHub remote, report error and stop
- If the issue number doesn't exist, report error and stop
- If the issue is unclear or invalid, report findings and ask user before proceeding

View File

@@ -0,0 +1,77 @@
# Release Command
Bump the package.json version (major, minor, or patch) and build the Electron app with the new version.
## Usage
This command accepts a version bump type as input:
- `patch` - Bump patch version (0.1.0 -> 0.1.1)
- `minor` - Bump minor version (0.1.0 -> 0.2.0)
- `major` - Bump major version (0.1.0 -> 1.0.0)
## Instructions
1. **Get the bump type from the user**
- The bump type should be provided as an argument (patch, minor, or major)
- If no type is provided, ask the user which type they want
2. **Bump the version**
- Run the version bump script:
```bash
node apps/ui/scripts/bump-version.mjs <type>
```
- This updates both `apps/ui/package.json` and `apps/server/package.json` with the new version (keeps them in sync)
- Verify the version was updated correctly by checking the output
3. **Build the Electron app**
- Run the electron build:
```bash
npm run build:electron --workspace=apps/ui
```
- The build process automatically:
- Uses the version from `package.json` for artifact names (e.g., `Automaker-1.2.3-x64.zip`)
- Injects the version into the app via Vite's `__APP_VERSION__` constant
- Displays the version below the logo in the sidebar
4. **Commit the version bump**
- Stage the updated package.json files:
```bash
git add apps/ui/package.json apps/server/package.json
```
- Commit with a release message:
```bash
git commit -m "chore: release v<version>"
```
5. **Create and push the git tag**
- Create an annotated tag for the release:
```bash
git tag -a v<version> -m "Release v<version>"
```
- Push the commit and tag to remote:
```bash
git push && git push --tags
```
6. **Verify the release**
- Check that the build completed successfully
- Confirm the version appears correctly in the built artifacts
- The version will be displayed in the app UI below the logo
- Verify the tag is visible on the remote repository
## Version Centralization
The version is centralized and synchronized in both `apps/ui/package.json` and `apps/server/package.json`:
- **Electron builds**: Automatically read from `apps/ui/package.json` via electron-builder's `${version}` variable in `artifactName`
- **App display**: Injected at build time via Vite's `define` config as `__APP_VERSION__` constant (defined in `apps/ui/vite.config.mts`)
- **Server API**: Read from `apps/server/package.json` via `apps/server/src/lib/version.ts` utility (used in health check endpoints)
- **Type safety**: Defined in `apps/ui/src/vite-env.d.ts` as `declare const __APP_VERSION__: string`
This ensures consistency across:
- Build artifact names (e.g., `Automaker-1.2.3-x64.zip`)
- App UI display (shown as `v1.2.3` below the logo in `apps/ui/src/components/layout/sidebar/components/automaker-logo.tsx`)
- Server health endpoints (`/` and `/detailed`)
- Package metadata (both UI and server packages stay in sync)
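Putting the steps together, a patch release might look like the following (a sketch; it assumes the bump script succeeds and reads the resulting version back from `apps/ui/package.json` as described above):
```bash
# Bump both package.json files in sync
node apps/ui/scripts/bump-version.mjs patch

# Read the new version back from the UI package (the source of truth for builds)
VERSION=$(node -p "require('./apps/ui/package.json').version")

# Build the Electron app with the new version baked in
npm run build:electron --workspace=apps/ui

# Commit, tag, and push the release
git add apps/ui/package.json apps/server/package.json
git commit -m "chore: release v${VERSION}"
git tag -a "v${VERSION}" -m "Release v${VERSION}"
git push && git push --tags

# Quick sanity check that the tag reached the remote
git ls-remote --tags origin "v${VERSION}"
```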

484
.claude/commands/review.md Normal file
View File

@@ -0,0 +1,484 @@
# Code Review Command
Comprehensive code review using multiple deep dive agents to analyze git diff for correctness, security, code quality, and tech stack compliance, followed by automated fixes using deepcode agents.
## Usage
This command analyzes all changes in the git diff and verifies:
1. **Invalid code based on tech stack** (HIGHEST PRIORITY)
2. Security vulnerabilities
3. Code quality issues (dirty code)
4. Implementation correctness
Then automatically fixes any issues found.
## Instructions
### Phase 1: Get Git Diff
1. **Get the current git diff**
```bash
git diff HEAD
```
If you need staged changes instead:
```bash
git diff --cached
```
Or for a specific commit range:
```bash
git diff <base-branch>
```
2. **Get list of changed files**
```bash
git diff --name-only HEAD
```
3. **Understand the tech stack** (for validation):
- **Node.js**: >=22.0.0 <23.0.0
- **TypeScript**: 5.9.3
- **React**: 19.2.3
- **Express**: 5.2.1
- **Electron**: 39.2.7
- **Vite**: 7.3.0
- **Vitest**: 4.0.16
- Check `package.json` files for exact versions
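To ground the validation in what the repository actually declares, the agent can read the versions directly rather than relying on the list above (a sketch; the exact location of the `engines` field is an assumption):
```bash
# Engine constraint declared at the repo root (location is an assumption)
node -p "require('./package.json').engines"

# Where key dependencies resolve in the workspace and at what versions
npm ls typescript react express electron vite vitest || true
```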
### Phase 2: Deep Dive Analysis (5 Agents)
Launch 5 separate deep dive agents, each with a specific focus area. Each agent should be invoked as the `@deepdive` agent and given the git diff along with its specific instructions.
#### Agent 1: Tech Stack Validation (HIGHEST PRIORITY)
**Focus:** Verify code is valid for the tech stack
**Instructions for Agent 1:**
```
Analyze the git diff for invalid code based on the tech stack:
1. **TypeScript/JavaScript Syntax**
- Check for valid TypeScript syntax (no invalid type annotations, correct import/export syntax)
- Verify Node.js API usage is compatible with Node.js >=22.0.0 <23.0.0
- Check for deprecated APIs or features not available in the Node.js version
- Verify ES module syntax (type: "module" in package.json)
2. **React 19.2.3 Compatibility**
- Check for deprecated React APIs or patterns
- Verify hooks usage is correct for React 19
- Check for invalid JSX syntax
- Verify component patterns match React 19 conventions
3. **Express 5.2.1 Compatibility**
- Check for deprecated Express APIs
- Verify middleware usage is correct for Express 5
- Check request/response handling patterns
4. **Type Safety**
- Verify TypeScript types are correctly used
- Check for `any` types that should be properly typed
- Verify type imports/exports are correct
- Check for missing type definitions
5. **Build System Compatibility**
- Verify Vite-specific code (imports, config) is valid
- Check Electron-specific APIs are used correctly
- Verify module resolution paths are correct
6. **Package Dependencies**
- Check for imports from packages not in package.json
- Verify version compatibility between dependencies
- Check for circular dependencies
Provide a detailed report with:
- File paths and line numbers of invalid code
- Specific error description (what's wrong and why)
- Expected vs actual behavior
- Priority level (CRITICAL for build-breaking issues)
```
#### Agent 2: Security Vulnerability Scanner
**Focus:** Security issues and vulnerabilities
**Instructions for Agent 2:**
```
Analyze the git diff for security vulnerabilities:
1. **Injection Vulnerabilities**
- SQL injection (if applicable)
- Command injection (exec, spawn, etc.)
- Path traversal vulnerabilities
- XSS vulnerabilities in React components
2. **Authentication & Authorization**
- Missing authentication checks
- Insecure token handling
- Authorization bypasses
- Session management issues
3. **Data Handling**
- Unsafe deserialization
- Insecure file operations
- Missing input validation
- Sensitive data exposure (secrets, tokens, passwords)
4. **Dependencies**
- Known vulnerable packages
- Insecure dependency versions
- Missing security patches
5. **API Security**
- Missing CORS configuration
- Insecure API endpoints
- Missing rate limiting
- Insecure WebSocket connections
6. **Electron-Specific**
- Insecure IPC communication
- Missing context isolation checks
- Insecure preload scripts
- Missing CSP headers
Provide a detailed report with:
- Vulnerability type and severity (CRITICAL, HIGH, MEDIUM, LOW)
- File paths and line numbers
- Attack vector description
- Recommended fix approach
```
#### Agent 3: Code Quality & Clean Code
**Focus:** Dirty code, code smells, and quality issues
**Instructions for Agent 3:**
```
Analyze the git diff for code quality issues:
1. **Code Smells**
- Long functions/methods (>50 lines)
- High cyclomatic complexity
- Duplicate code
- Dead code
- Magic numbers/strings
2. **Best Practices**
- Missing error handling
- Inconsistent naming conventions
- Poor separation of concerns
- Tight coupling
- Missing comments for complex logic
3. **Performance Issues**
- Inefficient algorithms
- Memory leaks (event listeners, subscriptions)
- Unnecessary re-renders in React
- Missing memoization where needed
- Inefficient database queries (if applicable)
4. **Maintainability**
- Hard-coded values
- Missing type definitions
- Inconsistent code style
- Poor file organization
- Missing tests for new code
5. **React-Specific**
- Missing key props in lists
- Direct state mutations
- Missing cleanup in useEffect
- Unnecessary useState/useEffect
- Prop drilling issues
Provide a detailed report with:
- Issue type and severity
- File paths and line numbers
- Description of the problem
- Impact on maintainability/performance
- Recommended refactoring approach
```
#### Agent 4: Implementation Correctness
**Focus:** Verify code implements requirements correctly
**Instructions for Agent 4:**
```
Analyze the git diff for implementation correctness:
1. **Logic Errors**
- Incorrect conditional logic
- Wrong variable usage
- Off-by-one errors
- Race conditions
- Missing null/undefined checks
2. **Functional Requirements**
- Missing features from requirements
- Incorrect feature implementation
- Edge cases not handled
- Missing validation
3. **Integration Issues**
- Incorrect API usage
- Wrong data format handling
- Missing error handling for external calls
- Incorrect state management
4. **Type Errors**
- Type mismatches
- Missing type guards
- Incorrect type assertions
- Unsafe type operations
5. **Testing Gaps**
- Missing unit tests
- Missing integration tests
- Tests don't cover edge cases
- Tests are incorrect
Provide a detailed report with:
- Issue description
- File paths and line numbers
- Expected vs actual behavior
- Steps to reproduce (if applicable)
- Recommended fix
```
#### Agent 5: Architecture & Design Patterns
**Focus:** Architectural issues and design pattern violations
**Instructions for Agent 5:**
```
Analyze the git diff for architectural and design issues:
1. **Architecture Violations**
- Violation of project structure patterns
- Incorrect layer separation
- Missing abstractions
- Tight coupling between modules
2. **Design Patterns**
- Incorrect pattern usage
- Missing patterns where needed
- Anti-patterns
3. **Project-Specific Patterns**
- Check against project documentation (docs/ folder)
- Verify route organization (server routes)
- Check provider patterns (server providers)
- Verify component organization (UI components)
4. **API Design**
- RESTful API violations
- Inconsistent response formats
- Missing error handling
- Incorrect status codes
5. **State Management**
- Incorrect state management patterns
- Missing state normalization
- Inefficient state updates
Provide a detailed report with:
- Architectural issue description
- File paths and affected areas
- Impact on system design
- Recommended architectural changes
```
### Phase 3: Consolidate Findings
After all 5 deep dive agents complete their analysis:
1. **Collect all findings** from each agent
2. **Prioritize issues**:
- CRITICAL: Tech stack invalid code (build-breaking)
- HIGH: Security vulnerabilities, critical logic errors
- MEDIUM: Code quality issues, architectural problems
- LOW: Minor code smells, style issues
3. **Group by file** to understand impact per file
4. **Create a master report** summarizing all findings
### Phase 4: Deepcode Fixes (5 Agents)
Launch 5 deepcode agents to fix the issues found. Each agent should be invoked as the `@deepcode` agent.
#### Deepcode Agent 1: Fix Tech Stack Invalid Code
**Priority:** CRITICAL - Fix first
**Instructions:**
```
Fix all invalid code based on tech stack issues identified by Agent 1.
Focus on:
1. Fixing TypeScript syntax errors
2. Updating deprecated Node.js APIs
3. Fixing React 19 compatibility issues
4. Correcting Express 5 API usage
5. Fixing type errors
6. Resolving build-breaking issues
After fixes, verify:
- Code compiles without errors
- TypeScript types are correct
- No deprecated API usage
```
#### Deepcode Agent 2: Fix Security Vulnerabilities
**Priority:** HIGH
**Instructions:**
```
Fix all security vulnerabilities identified by Agent 2.
Focus on:
1. Adding input validation
2. Fixing injection vulnerabilities
3. Securing authentication/authorization
4. Fixing insecure data handling
5. Updating vulnerable dependencies
6. Securing Electron IPC
After fixes, verify:
- Security vulnerabilities are addressed
- No sensitive data exposure
- Proper authentication/authorization
```
#### Deepcode Agent 3: Refactor Dirty Code
**Priority:** MEDIUM
**Instructions:**
```
Refactor code quality issues identified by Agent 3.
Focus on:
1. Extracting long functions
2. Reducing complexity
3. Removing duplicate code
4. Adding error handling
5. Improving React component structure
6. Adding missing comments
After fixes, verify:
- Code follows best practices
- No code smells remain
- Performance optimizations applied
```
#### Deepcode Agent 4: Fix Implementation Errors
**Priority:** HIGH
**Instructions:**
```
Fix implementation correctness issues identified by Agent 4.
Focus on:
1. Fixing logic errors
2. Adding missing features
3. Handling edge cases
4. Fixing type errors
5. Adding missing tests
After fixes, verify:
- Logic is correct
- Edge cases handled
- Tests pass
```
#### Deepcode Agent 5: Fix Architectural Issues
**Priority:** MEDIUM
**Instructions:**
```
Fix architectural issues identified by Agent 5.
Focus on:
1. Correcting architecture violations
2. Applying proper design patterns
3. Fixing API design issues
4. Improving state management
5. Following project patterns
After fixes, verify:
- Architecture is sound
- Patterns are correctly applied
- Code follows project structure
```
### Phase 5: Verification
After all fixes are complete:
1. **Run TypeScript compilation check**
```bash
npm run build:packages
```
2. **Run linting**
```bash
npm run lint
```
3. **Run tests** (if applicable)
```bash
npm run test:server
npm run test
```
4. **Verify git diff** shows only intended changes
```bash
git diff HEAD
```
5. **Create summary report**:
- Issues found by each agent
- Issues fixed by each agent
- Remaining issues (if any)
- Verification results
## Workflow Summary
1. ✅ Get git diff
2. ✅ Launch 5 deep dive agents (parallel analysis)
3. ✅ Consolidate findings and prioritize
4. ✅ Launch 5 deepcode agents (sequential fixes, priority order)
5. ✅ Verify fixes with build/lint/test
6. ✅ Report summary
## Notes
- **Tech stack validation is HIGHEST PRIORITY** - invalid code must be fixed first
- Each deep dive agent should work independently and provide comprehensive analysis
- Deepcode agents should fix issues in priority order
- All fixes should maintain existing functionality
- If an agent finds no issues in its domain, it should report "No issues found"
- If fixes introduce new issues, they should be caught in verification phase

View File

@@ -0,0 +1,45 @@
When you think you are done, you are NOT done.
You must run a mandatory 3-pass verification before concluding:
## Pass 1: Correctness & Functionality
- [ ] Verify logic matches requirements and specifications
- [ ] Check type safety (TypeScript types are correct and complete)
- [ ] Ensure imports are correct and follow project conventions
- [ ] Verify all functions/classes work as intended
- [ ] Check that return values and side effects are correct
- [ ] Run relevant tests if they exist, or verify testability
- [ ] Confirm integration with existing code works properly
## Pass 2: Edge Cases & Safety
- [ ] Handle null/undefined inputs gracefully
- [ ] Validate all user inputs and external data
- [ ] Check error handling (try/catch, error boundaries, etc.)
- [ ] Verify security considerations (no sensitive data exposure, proper auth checks)
- [ ] Test boundary conditions (empty arrays, zero values, max lengths, etc.)
- [ ] Ensure resource cleanup (file handles, connections, timers)
- [ ] Check for potential race conditions or async issues
- [ ] Verify file path security (no directory traversal vulnerabilities)
## Pass 3: Maintainability & Code Quality
- [ ] Code follows project style guide and conventions
- [ ] Functions/classes are single-purpose and well-named
- [ ] Remove dead code, unused imports, and console.logs
- [ ] Extract magic numbers/strings into named constants
- [ ] Check for code duplication (DRY principle)
- [ ] Verify appropriate abstraction levels (not over/under-engineered)
- [ ] Add necessary comments for complex logic
- [ ] Ensure consistent error messages and logging
- [ ] Check that code is readable and self-documenting
- [ ] Verify proper separation of concerns
**For each pass, explicitly report:**
- What you checked
- Any issues found and how they were fixed
- Any remaining concerns or trade-offs
Only after completing all three passes with explicit findings may you conclude the work is done.
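A few commands that can help ground the passes in evidence rather than assumption (a sketch; command availability depends on the project, so adjust as needed):
```bash
# Pass 1: does the code still compile and do the tests pass?
npm run build
npm run test:all

# Pass 3: look for leftover debug output and formatting or lint failures
grep -rn "console.log" apps/ --include="*.ts" --include="*.tsx" || true
npm run lint
npm run format:check
```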

View File

@@ -0,0 +1,49 @@
# Project Build and Fix Command
Run all builds and intelligently fix any failures based on what changed.
## Instructions
1. **Run the build**
```bash
npm run build
```
This builds all packages and the UI application.
2. **If the build succeeds**, report success and stop.
3. **If the build fails**, analyze the failures:
- Note which build step failed and the error messages
- Check for TypeScript compilation errors, missing dependencies, or configuration issues
- Run `git diff main` to see what code has changed
4. **Determine the nature of the failure**:
- **If the failure is due to intentional changes** (new features, refactoring, dependency updates):
- Fix any TypeScript type errors introduced by the changes
- Update build configuration if needed (e.g., tsconfig.json, vite.config.mts)
- Ensure all new dependencies are properly installed
- Fix import paths or module resolution issues
- **If the failure appears to be a regression** (broken imports, missing files, configuration errors):
- Fix the source code to restore the build
- Check for accidentally deleted files or broken references
- Verify build configuration files are correct
5. **Common build issues to check**:
- **TypeScript errors**: Fix type mismatches, missing types, or incorrect imports
- **Missing dependencies**: Run `npm install` if packages are missing
- **Import/export errors**: Fix incorrect import paths or missing exports
- **Build configuration**: Check tsconfig.json, vite.config.mts, or other build configs
- **Package build order**: Ensure `build:packages` completes before building apps
6. **How to decide if it's intentional vs regression**:
- Look at the git diff and commit messages
- If the change was deliberate and introduced new code that needs fixing → fix the new code
- If the change broke existing functionality that should still build → fix the regression
- When in doubt, ask the user
7. **After making fixes**, re-run the build to verify everything compiles successfully.
8. **Report summary** of what was fixed (TypeScript errors, configuration issues, missing dependencies, etc.).
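A typical triage loop for a failing build might look like this (a sketch, not part of the command itself):
```bash
# Capture the full build output so errors can be inspected after the fact
npm run build 2>&1 | tee build.log

# See what changed relative to main to judge intentional change vs regression
git diff main --stat
git log --oneline main..HEAD

# Common quick fixes before re-running the build
npm install                # missing dependencies
npm run build:packages     # packages must build before the apps
npm run build
```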

View File

@@ -0,0 +1,36 @@
# Project Test and Fix Command
Run all tests and intelligently fix any failures based on what changed.
## Instructions
1. **Run all tests**
```bash
npm run test:all
```
2. **If all tests pass**, report success and stop.
3. **If any tests fail**, analyze the failures:
- Note which tests failed and their error messages
- Run `git diff main` to see what code has changed
4. **Determine the nature of the change**:
- **If the logic change is intentional** (new feature, refactor, behavior change):
- Update the failing tests to match the new expected behavior
- The tests should reflect what the code NOW does correctly
- **If the logic change appears to be a bug** (regression, unintended side effect):
- Fix the source code to restore the expected behavior
- Do NOT modify the tests - they are catching a real bug
5. **How to decide if it's a bug vs intentional change**:
- Look at the git diff and commit messages
- If the change was deliberate and the test expectations are now outdated → update tests
- If the change broke existing functionality that should still work → fix the code
- When in doubt, ask the user
6. **After making fixes**, re-run the tests to verify everything passes.
7. **Report summary** of what was fixed (tests updated vs code fixed).
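For example, a failing run might be triaged like this (a sketch; the test file path is illustrative):
```bash
# Run everything once and note which suites fail
npm run test:all

# Inspect what changed to decide between updating tests and fixing code
git diff main

# Re-run just the affected server test file while iterating on a fix
npm run test:server -- tests/unit/specific.test.ts
```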

View File

@@ -1,24 +0,0 @@
{
"sandbox": {
"enabled": true,
"autoAllowBashIfSandboxed": true
},
"permissions": {
"defaultMode": "acceptEdits",
"allow": [
"Read(./**)",
"Write(./**)",
"Edit(./**)",
"Glob(./**)",
"Grep(./**)",
"Bash(*)",
"mcp__puppeteer__puppeteer_navigate",
"mcp__puppeteer__puppeteer_screenshot",
"mcp__puppeteer__puppeteer_click",
"mcp__puppeteer__puppeteer_fill",
"mcp__puppeteer__puppeteer_select",
"mcp__puppeteer__puppeteer_hover",
"mcp__puppeteer__puppeteer_evaluate"
]
}
}

19
.dockerignore Normal file
View File

@@ -0,0 +1,19 @@
# Dependencies
node_modules/
**/node_modules/
# Build outputs
dist/
**/dist/
dist-electron/
**/dist-electron/
build/
**/build/
.next/
**/.next/
.nuxt/
**/.nuxt/
out/
**/out/
.cache/
**/.cache/

117
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
View File

@@ -0,0 +1,117 @@
name: Bug Report
description: File a bug report to help us improve Automaker
title: '[Bug]: '
labels: ['bug']
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to report a bug! Please fill out the form below with as much detail as possible.
- type: dropdown
id: operating-system
attributes:
label: Operating System
description: What operating system are you using?
options:
- macOS
- Windows
- Linux
- Other
default: 0
validations:
required: true
- type: dropdown
id: run-mode
attributes:
label: Run Mode
description: How are you running Automaker?
options:
- Electron (Desktop App)
- Web (Browser)
- Docker
default: 0
validations:
required: true
- type: input
id: app-version
attributes:
label: App Version
description: What version of Automaker are you using? (e.g., 0.1.0)
placeholder: '0.1.0'
validations:
required: true
- type: textarea
id: bug-description
attributes:
label: Bug Description
description: A clear and concise description of what the bug is.
placeholder: Describe the bug...
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to Reproduce
description: Steps to reproduce the behavior
placeholder: |
1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected Behavior
description: A clear and concise description of what you expected to happen.
placeholder: What should have happened?
validations:
required: true
- type: textarea
id: actual-behavior
attributes:
label: Actual Behavior
description: A clear and concise description of what actually happened.
placeholder: What actually happened?
validations:
required: true
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem.
placeholder: Drag and drop screenshots here or paste image URLs
- type: textarea
id: logs
attributes:
label: Relevant Logs
description: If applicable, paste relevant logs or error messages.
placeholder: Paste logs here...
render: shell
- type: textarea
id: additional-context
attributes:
label: Additional Context
description: Add any other context about the problem here.
placeholder: Any additional information that might be helpful...
- type: checkboxes
id: terms
attributes:
label: Checklist
options:
- label: I have searched existing issues to ensure this bug hasn't been reported already
required: true
- label: I have provided all required information above
required: true

View File

@@ -31,24 +31,99 @@ jobs:
- name: Build server
run: npm run build --workspace=apps/server
- name: Set up Git user
run: |
git config --global user.name "GitHub CI"
git config --global user.email "ci@example.com"
- name: Start backend server
run: npm run start --workspace=apps/server &
run: |
echo "Starting backend server..."
# Start server in background and save PID
npm run start --workspace=apps/server > backend.log 2>&1 &
SERVER_PID=$!
echo "Server started with PID: $SERVER_PID"
echo "SERVER_PID=$SERVER_PID" >> $GITHUB_ENV
env:
PORT: 3008
NODE_ENV: test
# Use a deterministic API key so Playwright can log in reliably
AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
# Reduce log noise in CI
AUTOMAKER_HIDE_API_KEY: 'true'
# Avoid real API calls during CI
AUTOMAKER_MOCK_AGENT: 'true'
# Simulate containerized environment to skip sandbox confirmation dialogs
IS_CONTAINERIZED: 'true'
- name: Wait for backend server
run: |
echo "Waiting for backend server to be ready..."
for i in {1..30}; do
if curl -s http://localhost:3008/api/health > /dev/null 2>&1; then
# Check if server process is running
if [ -z "$SERVER_PID" ]; then
echo "ERROR: Server PID not found in environment"
cat backend.log 2>/dev/null || echo "No backend log found"
exit 1
fi
# Check if process is actually running
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "ERROR: Server process $SERVER_PID is not running!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "=== Recent system logs ==="
dmesg 2>/dev/null | tail -20 || echo "No dmesg available"
exit 1
fi
# Wait for health endpoint
for i in {1..60}; do
if curl -s -f http://localhost:3008/api/health > /dev/null 2>&1; then
echo "Backend server is ready!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "Health check response:"
curl -s http://localhost:3008/api/health | jq . 2>/dev/null || echo "Health check: $(curl -s http://localhost:3008/api/health 2>/dev/null || echo 'No response')"
exit 0
fi
echo "Waiting... ($i/30)"
# Check if server process is still running
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "ERROR: Server process died during wait!"
echo "=== Backend logs ==="
cat backend.log
exit 1
fi
echo "Waiting... ($i/60)"
sleep 1
done
echo "Backend server failed to start!"
echo "ERROR: Backend server failed to start within 60 seconds!"
echo "=== Backend logs ==="
cat backend.log
echo ""
echo "=== Process status ==="
ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
echo ""
echo "=== Port status ==="
netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
lsof -i :3008 2>/dev/null || echo "lsof not available or port not in use"
echo ""
echo "=== Health endpoint test ==="
curl -v http://localhost:3008/api/health 2>&1 || echo "Health endpoint failed"
# Kill the server process if it's still hanging
if kill -0 $SERVER_PID 2>/dev/null; then
echo ""
echo "Killing stuck server process..."
kill -9 $SERVER_PID 2>/dev/null || true
fi
exit 1
- name: Run E2E tests
@@ -59,6 +134,20 @@ jobs:
CI: true
VITE_SERVER_URL: http://localhost:3008
VITE_SKIP_SETUP: 'true'
# Keep UI-side login/defaults consistent
AUTOMAKER_API_KEY: test-api-key-for-e2e-tests
- name: Print backend logs on failure
if: failure()
run: |
echo "=== E2E Tests Failed - Backend Logs ==="
cat backend.log 2>/dev/null || echo "No backend log found"
echo ""
echo "=== Process status at failure ==="
ps aux | grep -E "(node|tsx)" | grep -v grep || echo "No node processes found"
echo ""
echo "=== Port status ==="
netstat -tlnp 2>/dev/null | grep :3008 || echo "Port 3008 not listening"
- name: Upload Playwright report
uses: actions/upload-artifact@v4
@@ -68,10 +157,22 @@ jobs:
path: apps/ui/playwright-report/
retention-days: 7
- name: Upload test results
- name: Upload test results (screenshots, traces, videos)
uses: actions/upload-artifact@v4
if: failure()
if: always()
with:
name: test-results
path: apps/ui/test-results/
path: |
apps/ui/test-results/
retention-days: 7
if-no-files-found: ignore
- name: Cleanup - Kill backend server
if: always()
run: |
if [ -n "$SERVER_PID" ]; then
echo "Cleaning up backend server (PID: $SERVER_PID)..."
kill $SERVER_PID 2>/dev/null || true
kill -9 $SERVER_PID 2>/dev/null || true
echo "Backend server cleanup complete"
fi

View File

@@ -26,5 +26,5 @@ jobs:
check-lockfile: 'true'
- name: Run npm audit
run: npm audit --audit-level=moderate
run: npm audit --audit-level=critical
continue-on-error: false

17
.gitignore vendored
View File

@@ -73,10 +73,25 @@ blob-report/
!.env.example
!.env.local.example
# Codex config (contains API keys)
.codex/config.toml
# TypeScript
*.tsbuildinfo
# Misc
*.pem
docker-compose.override.yml
docker-compose.override.yml
.claude/docker-compose.override.yml
.claude/hans/
pnpm-lock.yaml
yarn.lock
# Fork-specific workflow files (should never be committed)
DEVELOPMENT_WORKFLOW.md
check-sync.sh
# API key files
data/.api-key
data/credentials.json

View File

@@ -1 +1,46 @@
npx lint-staged
#!/usr/bin/env sh
# Try to load nvm if available (optional - works without it too)
if [ -z "$NVM_DIR" ]; then
# Check for Herd's nvm first (macOS with Herd)
if [ -s "$HOME/Library/Application Support/Herd/config/nvm/nvm.sh" ]; then
export NVM_DIR="$HOME/Library/Application Support/Herd/config/nvm"
# Then check standard nvm location
elif [ -s "$HOME/.nvm/nvm.sh" ]; then
export NVM_DIR="$HOME/.nvm"
fi
fi
# Source nvm if found (silently skip if not available)
[ -n "$NVM_DIR" ] && [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" 2>/dev/null
# Load node version from .nvmrc if using nvm (silently skip if nvm not available or fails)
if [ -f .nvmrc ] && command -v nvm >/dev/null 2>&1; then
# Check if Unix nvm was sourced (it's a shell function with NVM_DIR set)
if [ -n "$NVM_DIR" ] && type nvm 2>/dev/null | grep -q "function"; then
# Unix nvm: reads .nvmrc automatically
nvm use >/dev/null 2>&1 || true
else
# nvm-windows: needs explicit version from .nvmrc
NODE_VERSION=$(cat .nvmrc | tr -d '[:space:]')
if [ -n "$NODE_VERSION" ]; then
nvm use "$NODE_VERSION" >/dev/null 2>&1 || true
fi
fi
fi
# Ensure common system paths are in PATH (for systems without nvm)
# This helps find node/npm installed via Homebrew, system packages, etc.
export PATH="$PATH:/usr/local/bin:/opt/homebrew/bin:/usr/bin"
# Run lint-staged - works with or without nvm
# Prefer npx, fallback to npm exec, both work with system-installed Node.js
if command -v npx >/dev/null 2>&1; then
npx lint-staged
elif command -v npm >/dev/null 2>&1; then
npm exec -- lint-staged
else
echo "Error: Neither npx nor npm found in PATH."
echo "Please ensure Node.js is installed (via nvm, Homebrew, system package manager, etc.)"
exit 1
fi

2
.nvmrc Normal file
View File

@@ -0,0 +1,2 @@
22

View File

@@ -23,6 +23,8 @@ pnpm-lock.yaml
# Generated files
*.min.js
*.min.css
routeTree.gen.ts
apps/ui/src/routeTree.gen.ts
# Test artifacts
test-results/

172
CLAUDE.md Normal file
View File

@@ -0,0 +1,172 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Automaker is an autonomous AI development studio built as an npm workspace monorepo. It provides a Kanban-based workflow where AI agents (powered by Claude Agent SDK) implement features in isolated git worktrees.
## Common Commands
```bash
# Development
npm run dev # Interactive launcher (choose web or electron)
npm run dev:web # Web browser mode (localhost:3007)
npm run dev:electron # Desktop app mode
npm run dev:electron:debug # Desktop with DevTools open
# Building
npm run build # Build web application
npm run build:packages # Build all shared packages (required before other builds)
npm run build:electron # Build desktop app for current platform
npm run build:server # Build server only
# Testing
npm run test # E2E tests (Playwright, headless)
npm run test:headed # E2E tests with browser visible
npm run test:server # Server unit tests (Vitest)
npm run test:packages # All shared package tests
npm run test:all # All tests (packages + server)
# Single test file
npm run test:server -- tests/unit/specific.test.ts
# Linting and formatting
npm run lint # ESLint
npm run format # Prettier write
npm run format:check # Prettier check
```
## Architecture
### Monorepo Structure
```
automaker/
├── apps/
│ ├── ui/ # React + Vite + Electron frontend (port 3007)
│ └── server/ # Express + WebSocket backend (port 3008)
└── libs/ # Shared packages (@automaker/*)
├── types/ # Core TypeScript definitions (no dependencies)
├── utils/ # Logging, errors, image processing, context loading
├── prompts/ # AI prompt templates
├── platform/ # Path management, security, process spawning
├── model-resolver/ # Claude model alias resolution
├── dependency-resolver/ # Feature dependency ordering
└── git-utils/ # Git operations & worktree management
```
### Package Dependency Chain
Packages can only depend on packages above them:
```
@automaker/types (no dependencies)
@automaker/utils, @automaker/prompts, @automaker/platform, @automaker/model-resolver, @automaker/dependency-resolver
@automaker/git-utils
@automaker/server, @automaker/ui
```
### Key Technologies
- **Frontend**: React 19, Vite 7, Electron 39, TanStack Router, Zustand 5, Tailwind CSS 4
- **Backend**: Express 5, WebSocket (ws), Claude Agent SDK, node-pty
- **Testing**: Playwright (E2E), Vitest (unit)
### Server Architecture
The server (`apps/server/src/`) follows a modular pattern:
- `routes/` - Express route handlers organized by feature (agent, features, auto-mode, worktree, etc.)
- `services/` - Business logic (AgentService, AutoModeService, FeatureLoader, TerminalService)
- `providers/` - AI provider abstraction (currently Claude via Claude Agent SDK)
- `lib/` - Utilities (events, auth, worktree metadata)
### Frontend Architecture
The UI (`apps/ui/src/`) uses:
- `routes/` - TanStack Router file-based routing
- `components/views/` - Main view components (board, settings, terminal, etc.)
- `store/` - Zustand stores with persistence (app-store.ts, setup-store.ts)
- `hooks/` - Custom React hooks
- `lib/` - Utilities and API client
## Data Storage
### Per-Project Data (`.automaker/`)
```
.automaker/
├── features/ # Feature JSON files and images
│ └── {featureId}/
│ ├── feature.json
│ ├── agent-output.md
│ └── images/
├── context/ # Context files for AI agents (CLAUDE.md, etc.)
├── settings.json # Project-specific settings
├── spec.md # Project specification
└── analysis.json # Project structure analysis
```
### Global Data (`DATA_DIR`, default `./data`)
```
data/
├── settings.json # Global settings, profiles, shortcuts
├── credentials.json # API keys
├── sessions-metadata.json # Chat session metadata
└── agent-sessions/ # Conversation histories
```
## Import Conventions
Always import from shared packages, never from old paths:
```typescript
// ✅ Correct
import type { Feature, ExecuteOptions } from '@automaker/types';
import { createLogger, classifyError } from '@automaker/utils';
import { getEnhancementPrompt } from '@automaker/prompts';
import { getFeatureDir, ensureAutomakerDir } from '@automaker/platform';
import { resolveModelString } from '@automaker/model-resolver';
import { resolveDependencies } from '@automaker/dependency-resolver';
import { getGitRepositoryDiffs } from '@automaker/git-utils';
// ❌ Never import from old paths
import { Feature } from '../services/feature-loader'; // Wrong
import { createLogger } from '../lib/logger'; // Wrong
```
## Key Patterns
### Event-Driven Architecture
All server operations emit events that stream to the frontend via WebSocket. Events are created using `createEventEmitter()` from `lib/events.ts`.
### Git Worktree Isolation
Each feature executes in an isolated git worktree, created via `@automaker/git-utils`. This protects the main branch during AI agent execution.
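Conceptually this maps onto plain git worktrees; something like the following happens under the hood (a sketch of the underlying git commands, not the exact calls made by `@automaker/git-utils`, and the paths are illustrative):
```bash
# Create an isolated worktree and branch for a feature, leaving main untouched
git worktree add ../automaker-worktrees/feature-123 -b feature/123

# The agent works inside that directory; main's working tree never changes
cd ../automaker-worktrees/feature-123

# When the feature is finished or abandoned, the worktree is removed
git worktree remove ../automaker-worktrees/feature-123
```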
### Context Files
Project-specific rules are stored in `.automaker/context/` and automatically loaded into agent prompts via `loadContextFiles()` from `@automaker/utils`.
### Model Resolution
Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases:
- `haiku` → `claude-haiku-4-5`
- `sonnet` → `claude-sonnet-4-20250514`
- `opus` → `claude-opus-4-5-20251101`
## Environment Variables
- `ANTHROPIC_API_KEY` - Anthropic API key (or use Claude Code CLI auth)
- `PORT` - Server port (default: 3008)
- `DATA_DIR` - Data storage directory (default: ./data)
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
- `AUTOMAKER_MOCK_AGENT=true` - Enable mock agent mode for CI testing
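For example, to run in web mode against a custom data directory with the mock agent enabled (a sketch; the directory name is illustrative):
```bash
# Web mode with a custom data directory and the mock agent (no real API calls)
DATA_DIR=./data-dev AUTOMAKER_MOCK_AGENT=true npm run dev:web
```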

685
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,685 @@
# Contributing to Automaker
Thank you for your interest in contributing to Automaker! We're excited to have you join our community of developers building the future of autonomous AI development.
Automaker is an autonomous AI development studio that provides a Kanban-based workflow where AI agents implement features in isolated git worktrees. Whether you're fixing bugs, adding features, improving documentation, or suggesting ideas, your contributions help make this project better for everyone.
This guide will help you get started with contributing to Automaker. Please take a moment to read through these guidelines to ensure a smooth contribution process.
## Contribution License Agreement
**Important:** By submitting, pushing, or contributing any code, documentation, pull requests, issues, or other materials to the Automaker project, you agree to assign all right, title, and interest in and to your contributions, including all copyrights, patents, and other intellectual property rights, to the Core Contributors of Automaker. This assignment is irrevocable and includes the right to use, modify, distribute, and monetize your contributions in any manner.
**You understand and agree that you will have no right to receive any royalties, compensation, or other financial benefits from any revenue, income, or commercial use generated from your contributed code or any derivative works thereof.** All contributions are made without expectation of payment or financial return.
For complete details on contribution terms and rights assignment, please review [Section 5 (CONTRIBUTIONS AND RIGHTS ASSIGNMENT) of the LICENSE](LICENSE#5-contributions-and-rights-assignment).
## Table of Contents
- [Contributing to Automaker](#contributing-to-automaker)
- [Table of Contents](#table-of-contents)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Fork and Clone](#fork-and-clone)
- [Development Setup](#development-setup)
- [Project Structure](#project-structure)
- [Pull Request Process](#pull-request-process)
- [Branch Naming Convention](#branch-naming-convention)
- [Commit Message Format](#commit-message-format)
- [Submitting a Pull Request](#submitting-a-pull-request)
- [1. Prepare Your Changes](#1-prepare-your-changes)
- [2. Run Pre-submission Checks](#2-run-pre-submission-checks)
- [3. Push Your Changes](#3-push-your-changes)
- [4. Open a Pull Request](#4-open-a-pull-request)
- [PR Requirements Checklist](#pr-requirements-checklist)
- [Review Process](#review-process)
- [What to Expect](#what-to-expect)
- [Review Focus Areas](#review-focus-areas)
- [Responding to Feedback](#responding-to-feedback)
- [Approval Criteria](#approval-criteria)
- [Getting Help](#getting-help)
- [Code Style Guidelines](#code-style-guidelines)
- [Testing Requirements](#testing-requirements)
- [Running Tests](#running-tests)
- [Test Frameworks](#test-frameworks)
- [End-to-End Tests (Playwright)](#end-to-end-tests-playwright)
- [Unit Tests (Vitest)](#unit-tests-vitest)
- [Writing Tests](#writing-tests)
- [When to Write Tests](#when-to-write-tests)
- [CI/CD Pipeline](#cicd-pipeline)
- [CI Checks](#ci-checks)
- [CI Testing Environment](#ci-testing-environment)
- [Viewing CI Results](#viewing-ci-results)
- [Common CI Failures](#common-ci-failures)
- [Coverage Requirements](#coverage-requirements)
- [Issue Reporting](#issue-reporting)
- [Bug Reports](#bug-reports)
- [Before Reporting](#before-reporting)
- [Bug Report Template](#bug-report-template)
- [Feature Requests](#feature-requests)
- [Before Requesting](#before-requesting)
- [Feature Request Template](#feature-request-template)
- [Security Issues](#security-issues)
---
## Getting Started
### Prerequisites
Before contributing to Automaker, ensure you have the following installed on your system:
- **Node.js 22** (the repository requires Node.js >=22.0.0 <23.0.0; see `.nvmrc`)
- Download from [nodejs.org](https://nodejs.org/)
- Verify installation: `node --version`
- **npm** (comes with Node.js)
- Verify installation: `npm --version`
- **Git** for version control
- Verify installation: `git --version`
- **Claude Code CLI** or **Anthropic API Key** (for AI agent functionality)
- Required to run the AI development features
**Optional but recommended:**
- A code editor with TypeScript support (VS Code recommended)
- GitHub CLI (`gh`) for easier PR management
### Fork and Clone
1. **Fork the repository** on GitHub
- Navigate to [https://github.com/AutoMaker-Org/automaker](https://github.com/AutoMaker-Org/automaker)
- Click the "Fork" button in the top-right corner
- This creates your own copy of the repository
2. **Clone your fork locally**
```bash
git clone https://github.com/YOUR_USERNAME/automaker.git
cd automaker
```
3. **Add the upstream remote** to keep your fork in sync
```bash
git remote add upstream https://github.com/AutoMaker-Org/automaker.git
```
4. **Verify remotes**
```bash
git remote -v
# Should show:
# origin https://github.com/YOUR_USERNAME/automaker.git (fetch)
# origin https://github.com/YOUR_USERNAME/automaker.git (push)
# upstream https://github.com/AutoMaker-Org/automaker.git (fetch)
# upstream https://github.com/AutoMaker-Org/automaker.git (push)
```
### Development Setup
1. **Install dependencies**
```bash
npm install
```
2. **Build shared packages** (required before running the app)
```bash
npm run build:packages
```
3. **Start the development server**
```bash
npm run dev # Interactive launcher - choose mode
npm run dev:web # Browser mode (web interface)
npm run dev:electron # Desktop app mode
```
**Common development commands:**
| Command | Description |
| ------------------------ | -------------------------------- |
| `npm run dev` | Interactive development launcher |
| `npm run dev:web` | Start in browser mode |
| `npm run dev:electron` | Start desktop app |
| `npm run build` | Build all packages and apps |
| `npm run build:packages` | Build shared packages only |
| `npm run lint` | Run ESLint checks |
| `npm run format` | Format code with Prettier |
| `npm run format:check` | Check formatting without changes |
| `npm run test` | Run E2E tests (Playwright) |
| `npm run test:server` | Run server unit tests |
| `npm run test:packages` | Run package tests |
| `npm run test:all` | Run all tests |
### Project Structure
Automaker is organized as an npm workspace monorepo:
```
automaker/
├── apps/
│ ├── ui/ # React + Vite + Electron frontend
│ └── server/ # Express + WebSocket backend
├── libs/
│ ├── @automaker/types/ # Shared TypeScript types
│ ├── @automaker/utils/ # Utility functions
│ ├── @automaker/prompts/ # AI prompt templates
│ ├── @automaker/platform/ # Platform abstractions
│ ├── @automaker/model-resolver/ # AI model resolution
│ ├── @automaker/dependency-resolver/ # Dependency management
│ └── @automaker/git-utils/ # Git operations
├── docs/ # Documentation
└── package.json # Root package configuration
```
**Key conventions:**
- Always import from `@automaker/*` shared packages, never use relative paths to `libs/`
- Frontend code lives in `apps/ui/`
- Backend code lives in `apps/server/`
- Shared logic should be in the appropriate `libs/` package
---
## Pull Request Process
This section covers everything you need to know about contributing changes through pull requests, from creating your branch to getting your code merged.
### Branch Naming Convention
We use a consistent branch naming pattern to keep our repository organized:
```
<type>/<description>
```
**Branch types:**
| Type | Purpose | Example |
| ---------- | ------------------------ | --------------------------------- |
| `feature` | New functionality | `feature/add-user-authentication` |
| `fix` | Bug fixes | `fix/resolve-memory-leak` |
| `docs` | Documentation changes | `docs/update-contributing-guide` |
| `refactor` | Code restructuring | `refactor/simplify-api-handlers` |
| `test` | Adding or updating tests | `test/add-utils-unit-tests` |
| `chore` | Maintenance tasks | `chore/update-dependencies` |
**Guidelines:**
- Use lowercase letters and hyphens (no underscores or spaces)
- Keep descriptions short but descriptive
- Include issue number when applicable: `feature/123-add-login`
```bash
# Create and checkout a new feature branch
git checkout -b feature/add-dark-mode
# Create a fix branch with issue reference
git checkout -b fix/456-resolve-login-error
```
### Commit Message Format
We follow the **Conventional Commits** style for clear, readable commit history:
```
<type>: <description>
[optional body]
```
**Commit types:**
| Type | Purpose |
| ---------- | --------------------------- |
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation only |
| `style` | Formatting (no code change) |
| `refactor` | Code restructuring |
| `test` | Adding or updating tests |
| `chore` | Maintenance tasks |
**Guidelines:**
- Use **imperative mood** ("Add feature" not "Added feature")
- Keep first line under **72 characters**
- Capitalize the first letter after the type prefix
- No period at the end of the subject line
- Add a blank line before the body for detailed explanations
**Examples:**
```bash
# Simple commit
git commit -m "feat: Add user authentication flow"
# Commit with body for more context
git commit -m "fix: Resolve memory leak in WebSocket handler
The connection cleanup was not being called when clients
disconnected unexpectedly. Added proper cleanup in the
error handler to prevent memory accumulation."
# Documentation update
git commit -m "docs: Update API documentation"
# Refactoring
git commit -m "refactor: Simplify state management logic"
```
### Submitting a Pull Request
Follow these steps to submit your contribution:
#### 1. Prepare Your Changes
Ensure you've synced with the latest upstream changes:
```bash
# Fetch latest changes from upstream
git fetch upstream
# Rebase your branch on main (if needed)
git rebase upstream/main
```
#### 2. Run Pre-submission Checks
Before opening your PR, verify everything passes locally:
```bash
# Run all tests
npm run test:all
# Check formatting
npm run format:check
# Run linter
npm run lint
# Build to verify no compile errors
npm run build
```
#### 3. Push Your Changes
```bash
# Push your branch to your fork
git push origin feature/your-feature-name
```
#### 4. Open a Pull Request
1. Go to your fork on GitHub
2. Click "Compare & pull request" for your branch
3. Ensure the base repository is `AutoMaker-Org/automaker` and base branch is `main`
4. Fill out the PR template completely
#### PR Requirements Checklist
Your PR should include:
- [ ] **Clear title** describing the change (use conventional commit format)
- [ ] **Description** explaining what changed and why
- [ ] **Link to related issue** (if applicable): `Closes #123` or `Fixes #456`
- [ ] **All CI checks passing** (format, lint, build, tests)
- [ ] **No merge conflicts** with main branch
- [ ] **Tests included** for new functionality
- [ ] **Documentation updated** if adding/changing public APIs
**Example PR Description:**
```markdown
## Summary
This PR adds dark mode support to the Automaker UI.
- Implements theme toggle in settings panel
- Adds CSS custom properties for theme colors
- Persists theme preference to localStorage
## Related Issue
Closes #123
## Testing
- [x] Tested toggle functionality in Chrome and Firefox
- [x] Verified theme persists across page reloads
- [x] Checked accessibility contrast ratios
## Screenshots
[Include before/after screenshots for UI changes]
```
### Review Process
All contributions go through code review to maintain quality:
#### What to Expect
1. **CI Checks Run First** - Automated checks (format, lint, build, tests) must pass before review
2. **Maintainer Review** - The project maintainers will review your PR and decide whether to merge it
3. **Feedback & Discussion** - The reviewer may ask questions or request changes
4. **Iteration** - Make requested changes and push updates to the same branch
5. **Approval & Merge** - Once approved and checks pass, your PR will be merged
#### Review Focus Areas
The reviewer checks for:
- **Correctness** - Does the code work as intended?
- **Clean Code** - Does it follow our [code style guidelines](#code-style-guidelines)?
- **Test Coverage** - Are new features properly tested?
- **Documentation** - Are public APIs documented?
- **Breaking Changes** - Are any breaking changes discussed first?
#### Responding to Feedback
- Respond to **all** review comments, even if just to acknowledge
- Ask questions if feedback is unclear
- Push additional commits to address feedback (don't force-push during review)
- Mark conversations as resolved once addressed
#### Approval Criteria
Your PR is ready to merge when:
- ✅ All CI checks pass
- ✅ The maintainer has approved the changes
- ✅ All review comments are addressed
- ✅ No unresolved merge conflicts
#### Getting Help
If your PR seems stuck:
- Comment asking for status update (mention @webdevcody if needed)
- Reach out on [Discord](https://discord.gg/jjem7aEDKU)
- Make sure all checks are passing and you've responded to all feedback
---
## Code Style Guidelines
Automaker uses automated tooling to enforce code style. Run `npm run format` to format code and `npm run lint` to check for issues. Pre-commit hooks automatically format staged files before committing.
---
## Testing Requirements
Testing helps prevent regressions. Automaker uses **Playwright** for end-to-end testing and **Vitest** for unit tests.
### Running Tests
Use these commands to run tests locally:
| Command | Description |
| ------------------------------ | ------------------------------------- |
| `npm run test` | Run E2E tests (Playwright) |
| `npm run test:server` | Run server unit tests (Vitest) |
| `npm run test:packages` | Run shared package tests |
| `npm run test:all` | Run all tests |
| `npm run test:server:coverage` | Run server tests with coverage report |
**Before submitting a PR**, always run the full test suite:
```bash
npm run test:all
```
### Test Frameworks
#### End-to-End Tests (Playwright)
E2E tests verify the entire application works correctly from a user's perspective.
- **Framework:** [Playwright](https://playwright.dev/)
- **Location:** `e2e/` directory
- **Test ports:** UI on port 3007, Server on port 3008
**Running E2E tests:**
```bash
# Run all E2E tests
npm run test
# Run with headed browser (useful for debugging)
npx playwright test --headed
# Run a specific test file
npm test --workspace=@automaker/ui -- tests/example.spec.ts
```
**E2E Test Guidelines:**
- Write tests from a user's perspective
- Use descriptive test names that explain the scenario
- Clean up test data after each test
- Use appropriate timeouts for async operations
- Prefer `locator` over direct selectors for resilience
#### Unit Tests (Vitest)
Unit tests verify individual functions and modules work correctly in isolation.
- **Framework:** [Vitest](https://vitest.dev/)
- **Location:** In the `tests/` directory within each package (e.g., `apps/server/tests/`)
**Running unit tests:**
```bash
# Run all server unit tests
npm run test:server
# Run with coverage report
npm run test:server:coverage
# Run package tests
npm run test:packages
# Run in watch mode during development
npx vitest --watch
```
**Unit Test Guidelines:**
- Keep tests small and focused on one behavior
- Use descriptive test names: `it('should return null when user is not found')`
- Follow the AAA pattern: Arrange, Act, Assert
- Mock external dependencies to isolate the unit under test
- Aim for meaningful coverage, not just line coverage
### Writing Tests
#### When to Write Tests
- **New features:** All new features should include tests
- **Bug fixes:** Add a test that reproduces the bug before fixing
- **Refactoring:** Ensure existing tests pass after refactoring
- **Public APIs:** All public APIs must have test coverage
### CI/CD Pipeline
Automaker uses **GitHub Actions** for continuous integration. Every pull request triggers automated checks.
#### CI Checks
The following checks must pass before your PR can be merged:
| Check | Description |
| ----------------- | --------------------------------------------- |
| **Format** | Verifies code is formatted with Prettier |
| **Build** | Ensures the project compiles without errors |
| **Package Tests** | Runs tests for shared `@automaker/*` packages |
| **Server Tests** | Runs server unit tests with coverage |
#### CI Testing Environment
For CI environments, Automaker supports a mock agent mode:
```bash
# Enable mock agent mode for CI testing
AUTOMAKER_MOCK_AGENT=true npm run test
```
This allows tests to run without requiring a real Claude API connection.
#### Viewing CI Results
1. Go to your PR on GitHub
2. Scroll to the "Checks" section at the bottom
3. Click on any failed check to see detailed logs
4. Fix issues locally and push updates
#### Common CI Failures
| Issue | Solution |
| ------------------- | --------------------------------------------- |
| Format check failed | Run `npm run format` locally |
| Build failed | Run `npm run build` and fix TypeScript errors |
| Tests failed | Run `npm run test:all` locally to reproduce |
| Coverage decreased | Add tests for new code paths |
### Coverage Requirements
While we don't enforce strict coverage percentages, we expect:
- **New features:** Should include comprehensive tests
- **Bug fixes:** Should include a regression test
- **Critical paths:** Must have test coverage (authentication, data persistence, etc.)
To view coverage reports locally:
```bash
npm run test:server:coverage
```
This generates an HTML report you can open in your browser to see which lines are covered.
---
## Issue Reporting
Found a bug or have an idea for a new feature? We'd love to hear from you! This section explains how to report issues effectively.
### Bug Reports
When reporting a bug, please provide as much information as possible to help us understand and reproduce the issue.
#### Before Reporting
1. **Search existing issues** - Check if the bug has already been reported
2. **Try the latest version** - Make sure you're running the latest version of Automaker
3. **Reproduce the issue** - Verify you can consistently reproduce the bug
#### Bug Report Template
When creating a bug report, include:
- **Title:** A clear, descriptive title summarizing the issue
- **Environment:**
- Operating System and version
- Node.js version (`node --version`)
- Automaker version or commit hash
- **Steps to Reproduce:** Numbered list of steps to reproduce the bug
- **Expected Behavior:** What you expected to happen
- **Actual Behavior:** What actually happened
- **Logs/Screenshots:** Any relevant error messages, console output, or screenshots
**Example Bug Report:**
```markdown
## Bug: WebSocket connection drops after 5 minutes of inactivity
### Environment
- OS: Windows 11
- Node.js: 22.11.0
- Automaker: commit abc1234
### Steps to Reproduce
1. Start the application with `npm run dev:web`
2. Open the Kanban board
3. Leave the browser tab open for 5+ minutes without interaction
4. Try to move a card
### Expected Behavior
The card should move to the new column.
### Actual Behavior
The UI shows "Connection lost" and the card doesn't move.
### Logs
[WebSocket] Connection closed: 1006
```
### Feature Requests
We welcome ideas for improving Automaker! Here's how to submit a feature request:
#### Before Requesting
1. **Check existing issues** - Your idea may already be proposed or in development
2. **Consider scope** - Think about whether the feature fits Automaker's mission as an autonomous AI development studio
#### Feature Request Template
A good feature request includes:
- **Title:** A brief, descriptive title
- **Problem Statement:** What problem does this feature solve?
- **Proposed Solution:** How do you envision this working?
- **Alternatives Considered:** What other approaches did you consider?
- **Additional Context:** Mockups, examples, or references that help explain your idea
**Example Feature Request:**
```markdown
## Feature: Dark Mode Support
### Problem Statement
Working late at night, the bright UI causes eye strain and doesn't match
my system's dark theme preference.
### Proposed Solution
Add a theme toggle in the settings panel that allows switching between
light and dark modes. Ideally, it should also detect system preference.
### Alternatives Considered
- Browser extension to force dark mode (doesn't work well with custom styling)
- Custom CSS override (breaks with updates)
### Additional Context
Similar to how VS Code handles themes - a dropdown in settings with
immediate preview.
```
### Security Issues
**Important:** If you discover a security vulnerability, please do NOT open a public issue. Instead:
1. Join our [Discord server](https://discord.gg/jjem7aEDKU) and send a direct message to the user `@webdevcody`
2. Include detailed steps to reproduce
3. Allow time for us to address the issue before public disclosure
We take security seriously and appreciate responsible disclosure.
---
For license and contribution terms, see the [LICENSE](LICENSE) file in the repository root and the [README.md](README.md#license) for more details.
---
Thank you for contributing to Automaker!

203
Dockerfile Normal file
View File

@@ -0,0 +1,203 @@
# Automaker Multi-Stage Dockerfile
# Single Dockerfile for both server and UI builds
# Usage:
# docker build --target server -t automaker-server .
# docker build --target ui -t automaker-ui .
# Or use docker-compose which selects targets automatically
# =============================================================================
# BASE STAGE - Common setup for all builds (DRY: defined once, used by all)
# =============================================================================
FROM node:22-slim AS base
# Install build dependencies for native modules (node-pty)
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy root package files
COPY package*.json ./
# Copy all libs package.json files (centralized - add new libs here)
COPY libs/types/package*.json ./libs/types/
COPY libs/utils/package*.json ./libs/utils/
COPY libs/prompts/package*.json ./libs/prompts/
COPY libs/platform/package*.json ./libs/platform/
COPY libs/model-resolver/package*.json ./libs/model-resolver/
COPY libs/dependency-resolver/package*.json ./libs/dependency-resolver/
COPY libs/git-utils/package*.json ./libs/git-utils/
# Copy scripts (needed by npm workspace)
COPY scripts ./scripts
# =============================================================================
# SERVER BUILD STAGE
# =============================================================================
FROM base AS server-builder
# Copy server-specific package.json
COPY apps/server/package*.json ./apps/server/
# Install dependencies (--ignore-scripts to skip husky/prepare, then rebuild native modules)
RUN npm ci --ignore-scripts && npm rebuild node-pty
# Copy all source files
COPY libs ./libs
COPY apps/server ./apps/server
# Build packages in dependency order, then build server
RUN npm run build:packages && npm run build --workspace=apps/server
# =============================================================================
# SERVER PRODUCTION STAGE
# =============================================================================
FROM node:22-slim AS server
# Build argument for tracking which commit this image was built from
ARG GIT_COMMIT_SHA=unknown
LABEL automaker.git.commit.sha="${GIT_COMMIT_SHA}"
# Install git, curl, bash (for terminal), gosu (for user switching), and GitHub CLI (pinned version, multi-arch)
RUN apt-get update && apt-get install -y --no-install-recommends \
git curl bash gosu ca-certificates openssh-client \
&& GH_VERSION="2.63.2" \
&& ARCH=$(uname -m) \
&& case "$ARCH" in \
x86_64) GH_ARCH="amd64" ;; \
aarch64|arm64) GH_ARCH="arm64" ;; \
*) echo "Unsupported architecture: $ARCH" && exit 1 ;; \
esac \
&& curl -L "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${GH_ARCH}.tar.gz" -o gh.tar.gz \
&& tar -xzf gh.tar.gz \
&& mv gh_${GH_VERSION}_linux_${GH_ARCH}/bin/gh /usr/local/bin/gh \
&& rm -rf gh.tar.gz gh_${GH_VERSION}_linux_${GH_ARCH} \
&& rm -rf /var/lib/apt/lists/*
# Install Claude CLI globally (available to all users via npm global bin)
RUN npm install -g @anthropic-ai/claude-code
# Create non-root user with home directory BEFORE installing Cursor CLI
RUN groupadd -g 1001 automaker && \
useradd -u 1001 -g automaker -m -d /home/automaker -s /bin/bash automaker && \
mkdir -p /home/automaker/.local/bin && \
mkdir -p /home/automaker/.cursor && \
chown -R automaker:automaker /home/automaker && \
chmod 700 /home/automaker/.cursor
# Install Cursor CLI as the automaker user
# Set HOME explicitly and install to /home/automaker/.local/bin/
USER automaker
ENV HOME=/home/automaker
RUN curl https://cursor.com/install -fsS | bash && \
echo "=== Checking Cursor CLI installation ===" && \
ls -la /home/automaker/.local/bin/ && \
echo "=== PATH is: $PATH ===" && \
(which cursor-agent && cursor-agent --version) || echo "cursor-agent installed (may need auth setup)"
USER root
# Add PATH to profile so it's available in all interactive shells (for login shells)
RUN mkdir -p /etc/profile.d && \
echo 'export PATH="/home/automaker/.local/bin:$PATH"' > /etc/profile.d/cursor-cli.sh && \
chmod +x /etc/profile.d/cursor-cli.sh
# Add to automaker's .bashrc for bash interactive shells
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /home/automaker/.bashrc && \
chown automaker:automaker /home/automaker/.bashrc
# Also add to root's .bashrc since docker exec defaults to root
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /root/.bashrc
WORKDIR /app
# Copy root package.json (needed for workspace resolution)
COPY --from=server-builder /app/package*.json ./
# Copy built libs (workspace packages are symlinked in node_modules)
COPY --from=server-builder /app/libs ./libs
# Copy built server
COPY --from=server-builder /app/apps/server/dist ./apps/server/dist
COPY --from=server-builder /app/apps/server/package*.json ./apps/server/
# Copy node_modules (includes symlinks to libs)
COPY --from=server-builder /app/node_modules ./node_modules
# Create data and projects directories
RUN mkdir -p /data /projects && chown automaker:automaker /data /projects
# Configure git for mounted volumes and authentication
# Use --system so it's not overwritten by mounted user .gitconfig
RUN git config --system --add safe.directory '*' && \
# Use gh as credential helper (works with GH_TOKEN env var)
git config --system credential.helper '!gh auth git-credential'
# Copy entrypoint script for fixing permissions on mounted volumes
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Note: We stay as root here so entrypoint can fix permissions
# The entrypoint script will switch to automaker user before running the command
# Environment variables
ENV PORT=3008
ENV DATA_DIR=/data
ENV HOME=/home/automaker
# Add user's local bin to PATH for cursor-agent
ENV PATH="/home/automaker/.local/bin:${PATH}"
# Expose port
EXPOSE 3008
# Health check (using curl since it's already installed, more reliable than busybox wget)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3008/api/health || exit 1
# Use entrypoint to fix permissions before starting
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Start server
CMD ["node", "apps/server/dist/index.js"]
# =============================================================================
# UI BUILD STAGE
# =============================================================================
FROM base AS ui-builder
# Copy UI-specific package.json
COPY apps/ui/package*.json ./apps/ui/
# Install dependencies (--ignore-scripts to skip husky and build:packages in prepare script)
RUN npm ci --ignore-scripts
# Copy all source files
COPY libs ./libs
COPY apps/ui ./apps/ui
# Build packages in dependency order, then build UI
# VITE_SERVER_URL tells the UI where to find the API server
# Use ARG to allow overriding at build time: --build-arg VITE_SERVER_URL=http://api.example.com
ARG VITE_SERVER_URL=http://localhost:3008
ENV VITE_SKIP_ELECTRON=true
ENV VITE_SERVER_URL=${VITE_SERVER_URL}
RUN npm run build:packages && npm run build --workspace=apps/ui
# =============================================================================
# UI PRODUCTION STAGE
# =============================================================================
FROM nginx:alpine AS ui
# Build argument for tracking which commit this image was built from
ARG GIT_COMMIT_SHA=unknown
LABEL automaker.git.commit.sha="${GIT_COMMIT_SHA}"
# Copy built files
COPY --from=ui-builder /app/apps/ui/dist /usr/share/nginx/html
# Copy nginx config for SPA routing
COPY apps/ui/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Dockerfile.dev (new file, 80 lines)

@@ -0,0 +1,80 @@
# Automaker Development Dockerfile
# For development with live reload via volume mounting
# Source code is NOT copied - it's mounted as a volume
#
# Usage:
# docker compose -f docker-compose.dev.yml up
FROM node:22-slim
# Install build dependencies for native modules (node-pty) and runtime tools
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ \
git curl bash gosu ca-certificates openssh-client \
&& GH_VERSION="2.63.2" \
&& ARCH=$(uname -m) \
&& case "$ARCH" in \
x86_64) GH_ARCH="amd64" ;; \
aarch64|arm64) GH_ARCH="arm64" ;; \
*) echo "Unsupported architecture: $ARCH" && exit 1 ;; \
esac \
&& curl -L "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_${GH_ARCH}.tar.gz" -o gh.tar.gz \
&& tar -xzf gh.tar.gz \
&& mv gh_${GH_VERSION}_linux_${GH_ARCH}/bin/gh /usr/local/bin/gh \
&& rm -rf gh.tar.gz gh_${GH_VERSION}_linux_${GH_ARCH} \
&& rm -rf /var/lib/apt/lists/*
# Install Claude CLI globally
RUN npm install -g @anthropic-ai/claude-code
# Create non-root user
RUN groupadd -g 1001 automaker && \
useradd -u 1001 -g automaker -m -d /home/automaker -s /bin/bash automaker && \
mkdir -p /home/automaker/.local/bin && \
mkdir -p /home/automaker/.cursor && \
chown -R automaker:automaker /home/automaker && \
chmod 700 /home/automaker/.cursor
# Install Cursor CLI as automaker user
USER automaker
ENV HOME=/home/automaker
RUN curl https://cursor.com/install -fsS | bash || true
USER root
# Add PATH to profile for Cursor CLI
RUN mkdir -p /etc/profile.d && \
echo 'export PATH="/home/automaker/.local/bin:$PATH"' > /etc/profile.d/cursor-cli.sh && \
chmod +x /etc/profile.d/cursor-cli.sh
# Add to user bashrc files
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /home/automaker/.bashrc && \
chown automaker:automaker /home/automaker/.bashrc
RUN echo 'export PATH="/home/automaker/.local/bin:$PATH"' >> /root/.bashrc
WORKDIR /app
# Create directories with proper permissions
RUN mkdir -p /data /projects && chown automaker:automaker /data /projects
# Configure git for mounted volumes
RUN git config --system --add safe.directory '*' && \
git config --system credential.helper '!gh auth git-credential'
# Copy entrypoint script
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
# Environment variables
ENV PORT=3008
ENV DATA_DIR=/data
ENV HOME=/home/automaker
ENV PATH="/home/automaker/.local/bin:${PATH}"
# Expose both dev ports
EXPOSE 3007 3008
# Use entrypoint for permission handling
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
# Default command - will be overridden by docker-compose
CMD ["npm", "run", "dev:web"]

README.md (545 lines changed)

@@ -1,5 +1,5 @@
<p align="center">
<img src="apps/ui/public/readme_logo.png" alt="Automaker Logo" height="80" />
<img src="apps/ui/public/readme_logo.svg" alt="Automaker Logo" height="80" />
</p>
> **[!TIP]**
@@ -19,7 +19,7 @@
- [What Makes Automaker Different?](#what-makes-automaker-different)
- [The Workflow](#the-workflow)
- [Powered by Claude Code](#powered-by-claude-code)
- [Powered by Claude Agent SDK](#powered-by-claude-agent-sdk)
- [Why This Matters](#why-this-matters)
- [Security Disclaimer](#security-disclaimer)
- [Community & Support](#community--support)
@@ -28,22 +28,36 @@
- [Quick Start](#quick-start)
- [How to Run](#how-to-run)
- [Development Mode](#development-mode)
- [Electron Desktop App (Recommended)](#electron-desktop-app-recommended)
- [Web Browser Mode](#web-browser-mode)
- [Building for Production](#building-for-production)
- [Running Production Build](#running-production-build)
- [Testing](#testing)
- [Linting](#linting)
- [Authentication Options](#authentication-options)
- [Persistent Setup (Optional)](#persistent-setup-optional)
- [Environment Configuration](#environment-configuration)
- [Authentication Setup](#authentication-setup)
- [Features](#features)
- [Core Workflow](#core-workflow)
- [AI & Planning](#ai--planning)
- [Project Management](#project-management)
- [Collaboration & Review](#collaboration--review)
- [Developer Tools](#developer-tools)
- [Advanced Features](#advanced-features)
- [Tech Stack](#tech-stack)
- [Frontend](#frontend)
- [Backend](#backend)
- [Testing & Quality](#testing--quality)
- [Shared Libraries](#shared-libraries)
- [Available Views](#available-views)
- [Architecture](#architecture)
- [Monorepo Structure](#monorepo-structure)
- [How It Works](#how-it-works)
- [Key Architectural Patterns](#key-architectural-patterns)
- [Security & Isolation](#security--isolation)
- [Data Storage](#data-storage)
- [Learn More](#learn-more)
- [License](#license)
</details>
Automaker is an autonomous AI development studio that transforms how you build software. Instead of manually writing every line of code, you describe features on a Kanban board and watch as AI agents powered by Claude Code automatically implement them.
Automaker is an autonomous AI development studio that transforms how you build software. Instead of manually writing every line of code, you describe features on a Kanban board and watch as AI agents powered by Claude Agent SDK automatically implement them. Built with React, Vite, Electron, and Express, Automaker provides a complete workflow for managing AI agents through a desktop application (or web browser), with features like real-time streaming, git worktree isolation, plan approval, and multi-agent task execution.
![Automaker UI](https://i.imgur.com/jdwKydM.png)
@@ -59,30 +73,14 @@ Traditional development tools help you write code. Automaker helps you **orchest
4. **Review & Verify** - Review the changes, run tests, and approve when ready
5. **Ship Faster** - Build entire applications in days, not weeks
### Powered by Claude Code
### Powered by Claude Agent SDK
Automaker leverages the [Claude Agent SDK](https://docs.anthropic.com/en/docs/claude-code) to give AI agents full access to your codebase. Agents can read files, write code, execute commands, run tests, and make git commits—all while working in isolated git worktrees to keep your main branch safe.
Automaker leverages the [Claude Agent SDK](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) to give AI agents full access to your codebase. Agents can read files, write code, execute commands, run tests, and make git commits—all while working in isolated git worktrees to keep your main branch safe. The SDK provides autonomous AI agents that can use tools, make decisions, and complete complex multi-step tasks without constant human intervention.
### Why This Matters
The future of software development is **agentic coding**—where developers become architects directing AI agents rather than manual coders. Automaker puts this future in your hands today, letting you experience what it's like to build software 10x faster with AI agents handling the implementation while you focus on architecture and business logic.
---
> **[!CAUTION]**
>
> ## Security Disclaimer
>
> **This software uses AI-powered tooling that has access to your operating system and can read, modify, and delete files. Use at your own risk.**
>
> We have reviewed this codebase for security vulnerabilities, but you assume all risk when running this software. You should review the code yourself before running it.
>
> **We do not recommend running Automaker directly on your local computer** due to the risk of AI agents having access to your entire file system. Please sandbox this application using Docker or a virtual machine.
>
> **[Read the full disclaimer](./DISCLAIMER.md)**
---
## Community & Support
Join the **Agentic Jumpstart** to connect with other builders exploring **agentic coding** and autonomous development workflows.
@@ -95,8 +93,7 @@ In the Discord, you can:
- 🚀 Show off projects built with AI agents
- 🤝 Collaborate with other developers and contributors
👉 **Join the Discord:**
https://discord.gg/jjem7aEDKU
👉 **Join the Discord:** [Agentic Jumpstart Discord](https://discord.gg/jjem7aEDKU)
---
@@ -104,28 +101,49 @@ https://discord.gg/jjem7aEDKU
### Prerequisites
- Node.js 18+
- npm
- [Claude Code CLI](https://docs.anthropic.com/en/docs/claude-code) installed and authenticated
- **Node.js 18+** (tested with Node.js 22)
- **npm** (comes with Node.js)
- **Authentication** (choose one):
- **[Claude Code CLI](https://code.claude.com/docs/en/overview)** (recommended) - Install and authenticate, credentials used automatically
- **Anthropic API Key** - Direct API key for Claude Agent SDK ([get one here](https://console.anthropic.com/))
### Quick Start
```bash
# 1. Clone the repo
# 1. Clone the repository
git clone https://github.com/AutoMaker-Org/automaker.git
cd automaker
# 2. Install dependencies
npm install
# 3. Build local shared packages
# 3. Build shared packages (can be skipped - npm run dev does it automatically)
npm run build:packages
# 4. Run Automaker (pick your mode)
# 4. Start Automaker
npm run dev
# Then choose your run mode when prompted, or use specific commands below
# Choose between:
# 1. Web Application (browser at localhost:3007)
# 2. Desktop Application (Electron - recommended)
```
**Authentication Setup:** On first run, Automaker will automatically show a setup wizard where you can configure authentication. You can choose to:
- Use **Claude Code CLI** (recommended) - Automaker will detect your CLI credentials automatically
- Enter an **API key** directly in the wizard
If you prefer to set up authentication before running (e.g., for headless deployments or CI/CD), you can set it manually:
```bash
# Option A: Environment variable
export ANTHROPIC_API_KEY="sk-ant-..."
# Option B: Create .env file in project root
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
```
**For Development:** `npm run dev` starts the development server with Vite hot module replacement, so changes are reflected instantly as you work.
## How to Run
### Development Mode
@@ -163,31 +181,159 @@ npm run dev:web
### Building for Production
#### Web Application
```bash
# Build Next.js app
# Build for web deployment (uses Vite)
npm run build
# Build Electron app for distribution
npm run build:electron
```
### Running Production Build
#### Desktop Application
```bash
# Start production Next.js server
npm run start
# Build for current platform (macOS/Windows/Linux)
npm run build:electron
# Platform-specific builds
npm run build:electron:mac # macOS (DMG + ZIP, x64 + arm64)
npm run build:electron:win # Windows (NSIS installer, x64)
npm run build:electron:linux # Linux (AppImage + DEB, x64)
# Output directory: apps/ui/release/
```
#### Docker Deployment
Docker provides the most secure way to run Automaker by isolating it from your host filesystem.
```bash
# Build and run with Docker Compose
docker-compose up -d
# Access UI at http://localhost:3007
# API at http://localhost:3008
# View logs
docker-compose logs -f
# Stop containers
docker-compose down
```
##### Configuration
Create a `.env` file in the project root if using API key authentication:
```bash
# Optional: Anthropic API key (not needed if using Claude CLI authentication)
ANTHROPIC_API_KEY=sk-ant-...
```
**Note:** Most users authenticate via Claude CLI instead of API keys. See [Claude CLI Authentication](#claude-cli-authentication-optional) below.
##### Working with Projects (Host Directory Access)
By default, the container is isolated from your host filesystem. To work on projects from your host machine, create a `docker-compose.override.yml` file (gitignored):
```yaml
services:
server:
volumes:
# Mount your project directories
- /path/to/your/project:/projects/your-project
```
##### Claude CLI Authentication (Optional)
To use Claude Code CLI authentication instead of an API key, mount your Claude CLI config directory:
```yaml
services:
server:
volumes:
# Linux/macOS
- ~/.claude:/home/automaker/.claude
# Windows
- C:/Users/YourName/.claude:/home/automaker/.claude
```
**Note:** The Claude CLI config must be writable (do not use `:ro` flag) as the CLI writes debug files.
##### GitHub CLI Authentication (For Git Push/PR Operations)
To enable git push and GitHub CLI operations inside the container:
```yaml
services:
server:
volumes:
# Mount GitHub CLI config
# Linux/macOS
- ~/.config/gh:/home/automaker/.config/gh
# Windows
- 'C:/Users/YourName/AppData/Roaming/GitHub CLI:/home/automaker/.config/gh'
# Mount git config for user identity (name, email)
- ~/.gitconfig:/home/automaker/.gitconfig:ro
environment:
# GitHub token (required on Windows where tokens are in Credential Manager)
# Get your token with: gh auth token
- GH_TOKEN=${GH_TOKEN}
```
Then add `GH_TOKEN` to your `.env` file:
```bash
GH_TOKEN=gho_your_github_token_here
```
##### Complete docker-compose.override.yml Example
```yaml
services:
server:
volumes:
# Your projects
- /path/to/project1:/projects/project1
- /path/to/project2:/projects/project2
# Authentication configs
- ~/.claude:/home/automaker/.claude
- ~/.config/gh:/home/automaker/.config/gh
- ~/.gitconfig:/home/automaker/.gitconfig:ro
environment:
- GH_TOKEN=${GH_TOKEN}
```
##### Architecture Support
The Docker image supports both AMD64 and ARM64 architectures. The GitHub CLI and Claude CLI are automatically downloaded for the correct architecture during build.
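If you need to produce images for both architectures yourself, a Buildx invocation along these lines should work. This is a sketch rather than the project's official release process: the registry name is a placeholder, and the target and tag follow the usage comments at the top of the Dockerfile.
```bash
# Multi-arch build of the server image with Docker Buildx (sketch).
# "your-registry/automaker-server" is a placeholder; --push is used because
# multi-platform images cannot be loaded into the local daemon directly.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --target server \
  --build-arg GIT_COMMIT_SHA=$(git rev-parse HEAD) \
  -t your-registry/automaker-server:latest \
  --push \
  .
```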
### Testing
```bash
# Run tests headless
npm run test
#### End-to-End Tests (Playwright)
# Run tests with browser visible
npm run test:headed
```bash
npm run test # Headless E2E tests
npm run test:headed # Browser visible E2E tests
```
#### Unit Tests (Vitest)
```bash
npm run test:server # Server unit tests
npm run test:server:coverage # Server tests with coverage
npm run test:packages # All shared package tests
npm run test:all # Packages + server tests
```
#### Test Configuration
- E2E tests run on ports 3007 (UI) and 3008 (server)
- Automatically starts test servers before running
- Uses Chromium browser via Playwright
- Mock agent mode available in CI with `AUTOMAKER_MOCK_AGENT=true`
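For example, a local run that mirrors the CI setup could look like the sketch below, based on the scripts and the `AUTOMAKER_MOCK_AGENT` flag described above:
```bash
# Run the Playwright E2E suite against mock agents, as CI does (sketch).
# The test runner starts the UI (port 3007) and server (port 3008) automatically.
AUTOMAKER_MOCK_AGENT=true npm run test

# Same suite with the browser visible, useful when debugging a failing spec.
AUTOMAKER_MOCK_AGENT=true npm run test:headed
```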
### Linting
```bash
@@ -195,59 +341,300 @@ npm run test:headed
npm run lint
```
### Authentication Options
### Environment Configuration
Automaker supports multiple authentication methods (in order of priority):
#### Authentication (if not using Claude Code CLI)
| Method | Environment Variable | Description |
| ---------------- | -------------------- | ------------------------------- |
| API Key (env) | `ANTHROPIC_API_KEY` | Anthropic API key |
| API Key (stored) | — | Anthropic API key stored in app |
- `ANTHROPIC_API_KEY` - Your Anthropic API key for Claude Agent SDK (not needed if using Claude Code CLI)
### Persistent Setup (Optional)
#### Optional - Server
- `PORT` - Server port (default: 3008)
- `DATA_DIR` - Data storage directory (default: ./data)
- `ENABLE_REQUEST_LOGGING` - HTTP request logging (default: true)
#### Optional - Security
- `AUTOMAKER_API_KEY` - Optional API authentication for the server
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
- `CORS_ORIGIN` - CORS policy (default: \*)
#### Optional - Development
- `VITE_SKIP_ELECTRON` - Skip Electron in dev mode
- `OPEN_DEVTOOLS` - Auto-open DevTools in Electron
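For reference, a `.env` that combines these optional variables might look like the sketch below (values are placeholders; include only what you need):
```bash
# Example .env (sketch) - every variable here is optional
ANTHROPIC_API_KEY=sk-ant-...        # omit if you authenticate via the Claude Code CLI
PORT=3008                           # server port
DATA_DIR=./data                     # where global settings and sessions are stored
ENABLE_REQUEST_LOGGING=true         # HTTP request logging
AUTOMAKER_API_KEY=choose-a-secret   # optional API authentication for the server
ALLOWED_ROOT_DIRECTORY=/projects    # restrict file operations to this directory
CORS_ORIGIN=http://localhost:3007   # lock CORS down for production
```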
### Authentication Setup
#### Option 1: Claude Code CLI (Recommended)
Install and authenticate the Claude Code CLI following the [official quickstart guide](https://code.claude.com/docs/en/quickstart).
Once authenticated, Automaker will automatically detect and use your CLI credentials. No additional configuration needed!
#### Option 2: Direct API Key
If you prefer not to use the CLI, you can provide an Anthropic API key directly using one of these methods:
##### 2a. Shell Configuration
Add to your `~/.bashrc` or `~/.zshrc`:
```bash
export ANTHROPIC_API_KEY="YOUR_API_KEY_HERE"
export ANTHROPIC_API_KEY="sk-ant-..."
```
Then restart your terminal or run `source ~/.bashrc`.
Then restart your terminal or run `source ~/.bashrc` (or `source ~/.zshrc`).
##### 2b. .env File
Create a `.env` file in the project root (gitignored):
```bash
ANTHROPIC_API_KEY=sk-ant-...
PORT=3008
DATA_DIR=./data
```
##### 2c. In-App Storage
The application can store your API key securely in the settings UI. The key is persisted in the `DATA_DIR` directory.
## Features
### Core Workflow
- 📋 **Kanban Board** - Visual drag-and-drop board to manage features through backlog, in progress, waiting approval, and verified stages
- 🤖 **AI Agent Integration** - Automatic AI agent assignment to implement features when moved to "In Progress"
- 🧠 **Multi-Model Support** - Choose from multiple AI models including Claude Opus, Sonnet, and more
- 💭 **Extended Thinking** - Enable extended thinking modes for complex problem-solving
- 📡 **Real-time Agent Output** - View live agent output, logs, and file diffs as features are being implemented
- 🔍 **Project Analysis** - AI-powered project structure analysis to understand your codebase
- 📁 **Context Management** - Add context files to help AI agents understand your project better
- 💡 **Feature Suggestions** - AI-generated feature suggestions based on your project
- 🖼️ **Image Support** - Attach images and screenshots to feature descriptions
- **Concurrent Processing** - Configure concurrency to process multiple features simultaneously
- 🧪 **Test Integration** - Automatic test running and verification for implemented features
- 🔀 **Git Integration** - View git diffs and track changes made by AI agents
- 👤 **AI Profiles** - Create and manage different AI agent profiles for various tasks
- 💬 **Chat History** - Keep track of conversations and interactions with AI agents
- ⌨️ **Keyboard Shortcuts** - Efficient navigation and actions via keyboard shortcuts
- 🎨 **Dark/Light Theme** - Beautiful UI with theme support
- 🖥️ **Cross-Platform** - Desktop application built with Electron for Windows, macOS, and Linux
- 🔀 **Git Worktree Isolation** - Each feature executes in isolated git worktrees to protect your main branch
- 📡 **Real-time Streaming** - Watch AI agents work in real-time with live tool usage, progress updates, and task completion
- 🔄 **Follow-up Instructions** - Send additional instructions to running agents without stopping them
### AI & Planning
- 🧠 **Multi-Model Support** - Choose from Claude Opus, Sonnet, and Haiku per feature
- 💭 **Extended Thinking** - Enable thinking modes (none, medium, deep, ultra) for complex problem-solving
- 📝 **Planning Modes** - Four planning levels: skip (direct implementation), lite (quick plan), spec (task breakdown), full (phased execution)
- **Plan Approval** - Review and approve AI-generated plans before implementation begins
- 📊 **Multi-Agent Task Execution** - Spec mode spawns dedicated agents per task for focused implementation
### Project Management
- 🔍 **Project Analysis** - AI-powered codebase analysis to understand your project structure
- 💡 **Feature Suggestions** - AI-generated feature suggestions based on project analysis
- 📁 **Context Management** - Add markdown, images, and documentation files that agents automatically reference
- 🔗 **Dependency Blocking** - Features can depend on other features, enforcing execution order
- 🌳 **Graph View** - Visualize feature dependencies with interactive graph visualization
- 📋 **GitHub Integration** - Import issues, validate feasibility, and convert to tasks automatically
### Collaboration & Review
- 🧪 **Verification Workflow** - Features move to "Waiting Approval" for review and testing
- 💬 **Agent Chat** - Interactive chat sessions with AI agents for exploratory work
- 👤 **AI Profiles** - Create custom agent configurations with different prompts, models, and settings
- 📜 **Session History** - Persistent chat sessions across restarts with full conversation history
- 🔍 **Git Diff Viewer** - Review changes made by agents before approving
### Developer Tools
- 🖥️ **Integrated Terminal** - Full terminal access with tabs, splits, and persistent sessions
- 🖼️ **Image Support** - Attach screenshots and diagrams to feature descriptions for visual context
-**Concurrent Execution** - Configure how many features can run simultaneously (default: 3)
- ⌨️ **Keyboard Shortcuts** - Fully customizable shortcuts for navigation and actions
- 🎨 **Theme System** - 25+ themes including Dark, Light, Dracula, Nord, Catppuccin, and more
- 🖥️ **Cross-Platform** - Desktop app for macOS (x64, arm64), Windows (x64), and Linux (x64)
- 🌐 **Web Mode** - Run in browser or as Electron desktop app
### Advanced Features
- 🔐 **Docker Isolation** - Security-focused Docker deployment with no host filesystem access
- 🎯 **Worktree Management** - Create, switch, commit, and create PRs from worktrees
- 📊 **Usage Tracking** - Monitor Claude API usage with detailed metrics
- 🔊 **Audio Notifications** - Optional completion sounds (mutable in settings)
- 💾 **Auto-save** - All work automatically persisted to `.automaker/` directory
## Tech Stack
- [Next.js](https://nextjs.org) - React framework
- [Electron](https://www.electronjs.org/) - Desktop application framework
- [Tailwind CSS](https://tailwindcss.com/) - Styling
- [Zustand](https://zustand-demo.pmnd.rs/) - State management
- [dnd-kit](https://dndkit.com/) - Drag and drop functionality
### Frontend
- **React 19** - UI framework
- **Vite 7** - Build tool and development server
- **Electron 39** - Desktop application framework
- **TypeScript 5.9** - Type safety
- **TanStack Router** - File-based routing
- **Zustand 5** - State management with persistence
- **Tailwind CSS 4** - Utility-first styling with 25+ themes
- **Radix UI** - Accessible component primitives
- **dnd-kit** - Drag and drop for Kanban board
- **@xyflow/react** - Graph visualization for dependencies
- **xterm.js** - Integrated terminal emulator
- **CodeMirror 6** - Code editor for XML/syntax highlighting
- **Lucide Icons** - Icon library
### Backend
- **Node.js** - JavaScript runtime with ES modules
- **Express 5** - HTTP server framework
- **TypeScript 5.9** - Type safety
- **Claude Agent SDK** - AI agent integration (@anthropic-ai/claude-agent-sdk)
- **WebSocket (ws)** - Real-time event streaming
- **node-pty** - PTY terminal sessions
### Testing & Quality
- **Playwright** - End-to-end testing
- **Vitest** - Unit testing framework
- **ESLint 9** - Code linting
- **Prettier 3** - Code formatting
- **Husky** - Git hooks for pre-commit formatting
### Shared Libraries
- **@automaker/types** - Shared TypeScript definitions
- **@automaker/utils** - Logging, error handling, image processing
- **@automaker/prompts** - AI prompt templates
- **@automaker/platform** - Path management and security
- **@automaker/model-resolver** - Claude model alias resolution
- **@automaker/dependency-resolver** - Feature dependency ordering
- **@automaker/git-utils** - Git operations and worktree management
## Available Views
Automaker provides several specialized views accessible via the sidebar or keyboard shortcuts:
| View | Shortcut | Description |
| ------------------ | -------- | ------------------------------------------------------------------------------------------------ |
| **Board** | `K` | Kanban board for managing feature workflow (Backlog → In Progress → Waiting Approval → Verified) |
| **Agent** | `A` | Interactive chat sessions with AI agents for exploratory work and questions |
| **Spec** | `D` | Project specification editor with AI-powered generation and feature suggestions |
| **Context** | `C` | Manage context files (markdown, images) that AI agents automatically reference |
| **Profiles** | `M` | Create and manage AI agent profiles with custom prompts and configurations |
| **Settings** | `S` | Configure themes, shortcuts, defaults, authentication, and more |
| **Terminal** | `T` | Integrated terminal with tabs, splits, and persistent sessions |
| **GitHub Issues** | - | Import and validate GitHub issues, convert to tasks |
| **Running Agents** | - | View all active agents across projects with status and progress |
### Keyboard Navigation
All shortcuts are customizable in Settings. Default shortcuts:
- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `M` (Profiles), `T` (Terminal)
- **UI:** `` ` `` (Toggle sidebar)
- **Actions:** `N` (New item in current view), `G` (Start next features), `O` (Open project), `P` (Project picker)
- **Projects:** `Q`/`E` (Cycle previous/next project)
## Architecture
### Monorepo Structure
Automaker is built as an npm workspace monorepo with two main applications and seven shared packages:
```text
automaker/
├── apps/
│ ├── ui/ # React + Vite + Electron frontend
│ └── server/ # Express + WebSocket backend
└── libs/ # Shared packages
├── types/ # Core TypeScript definitions
├── utils/ # Logging, errors, utilities
├── prompts/ # AI prompt templates
├── platform/ # Path management, security
├── model-resolver/ # Claude model aliasing
├── dependency-resolver/ # Feature dependency ordering
└── git-utils/ # Git operations & worktree management
```
### How It Works
1. **Feature Definition** - Users create feature cards on the Kanban board with descriptions, images, and configuration
2. **Git Worktree Creation** - When a feature starts, a git worktree is created for isolated development
3. **Agent Execution** - Claude Agent SDK executes in the worktree with full file system and command access
4. **Real-time Streaming** - Agent output streams via WebSocket to the frontend for live monitoring
5. **Plan Approval** (optional) - For spec/full planning modes, agents generate plans that require user approval
6. **Multi-Agent Tasks** (spec mode) - Each task in the spec gets a dedicated agent for focused implementation
7. **Verification** - Features move to "Waiting Approval" where changes can be reviewed via git diff
8. **Integration** - After approval, changes can be committed and PRs created from the worktree
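Under the hood, steps 2 and 8 map onto ordinary git worktree operations. The sketch below uses hypothetical branch and directory names; in practice the `@automaker/git-utils` package drives these commands for you:
```bash
# Step 2: create an isolated worktree and branch for a feature (names are placeholders).
git worktree add ../automaker-worktrees/feature-123 -b feature/feature-123

# Steps 3-7: the agent edits files, runs tests, and commits inside that directory,
# leaving the main checkout untouched.

# Step 8: after approval, push the branch (and open a PR), then clean up the worktree.
git -C ../automaker-worktrees/feature-123 push -u origin feature/feature-123
git worktree remove ../automaker-worktrees/feature-123
```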
### Key Architectural Patterns
- **Event-Driven Architecture** - All server operations emit events that stream to the frontend
- **Provider Pattern** - Extensible AI provider system (currently Claude, designed for future providers)
- **Service-Oriented Backend** - Modular services for agent management, features, terminals, settings
- **State Management** - Zustand with persistence for frontend state across restarts
- **File-Based Storage** - No database; features stored as JSON files in `.automaker/` directory
### Security & Isolation
- **Git Worktrees** - Each feature executes in an isolated git worktree, protecting your main branch
- **Path Sandboxing** - Optional `ALLOWED_ROOT_DIRECTORY` restricts file access
- **Docker Isolation** - Recommended deployment uses Docker with no host filesystem access
- **Plan Approval** - Optional plan review before implementation prevents unwanted changes
### Data Storage
Automaker uses a file-based storage system (no database required):
#### Per-Project Data
Stored in `{projectPath}/.automaker/`:
```text
.automaker/
├── features/ # Feature JSON files and images
│ └── {featureId}/
│ ├── feature.json # Feature metadata
│ ├── agent-output.md # AI agent output log
│ └── images/ # Attached images
├── context/ # Context files for AI agents
├── settings.json # Project-specific settings
├── spec.md # Project specification
├── analysis.json # Project structure analysis
└── feature-suggestions.json # AI-generated suggestions
```
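Because everything is plain JSON and markdown, a project's state can be inspected straight from the shell. The sketch below assumes `jq` is installed and uses `<featureId>` as a placeholder; it only pretty-prints whatever is stored, since the exact fields of `feature.json` are not documented here:
```bash
# List the features recorded for this project.
ls .automaker/features/

# Pretty-print one feature's metadata (replace <featureId> with a real directory name).
jq . .automaker/features/<featureId>/feature.json

# Tail the agent's output log for that feature.
tail -n 40 .automaker/features/<featureId>/agent-output.md
```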
#### Global Data
Stored in `DATA_DIR` (default `./data`):
```text
data/
├── settings.json # Global settings, profiles, shortcuts
├── credentials.json # API keys (encrypted)
├── sessions-metadata.json # Chat session metadata
└── agent-sessions/ # Conversation histories
└── {sessionId}.json
```
---
> **[!CAUTION]**
>
> ## Security Disclaimer
>
> **This software uses AI-powered tooling that has access to your operating system and can read, modify, and delete files. Use at your own risk.**
>
> We have reviewed this codebase for security vulnerabilities, but you assume all risk when running this software. You should review the code yourself before running it.
>
> **We do not recommend running Automaker directly on your local computer** due to the risk of AI agents having access to your entire file system. Please sandbox this application using Docker or a virtual machine.
>
> **[Read the full disclaimer](./DISCLAIMER.md)**
---
## Learn More
To learn more about Next.js, take a look at the following resources:
### Documentation
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
- [Contributing Guide](./CONTRIBUTING.md) - How to contribute to Automaker
- [Project Documentation](./docs/) - Architecture guides, patterns, and developer docs
- [Docker Isolation Guide](./docs/docker-isolation.md) - Security-focused Docker deployment
- [Shared Packages Guide](./docs/llm-shared-packages.md) - Using monorepo packages
### Community
Join the **Agentic Jumpstart** Discord to connect with other builders exploring **agentic coding**:
👉 [Agentic Jumpstart Discord](https://discord.gg/jjem7aEDKU)
## License

TODO.md (new file, 17 lines)

@@ -0,0 +1,17 @@
# Bugs
- Setting the default model does not seem to work.
# UX
- Consolidate all models to a single place in the settings instead of having AI profiles and all this other stuff
- Simplify the create feature modal. It should just be one page; I don't need all these tabs and nested buttons. It's too complex.
- Add a to-do list checkbox directly into the card, so that as a feature is going through, any to-do items can be seen updating live.
- When a feature is done, I want to see the LLM's summary. That should be the first thing I see when I double click the card.
- I want a way to mass edit all my features. For example, when I created a new project, it added auto testing to every single feature card, and now I have to go through them one by one and change that manually. Provide a way to mass edit the configuration of all of them.
- Double check and debug whether there are memory leaks. Automaker's memory usage seems to grow by around 3 GB; it's at 5 GB right now while I'm running three different Cursor CLI features implementing at the same time.
- Typing in the text area of the plan mode was super laggy.
- When I have a bunch of features running at the same time, it seems like I cannot edit the features in the backlog. They don't persist their file changes, and I think this is because the secure FS layer has an internal queue to avoid hitting the open-file/write limit. We may have to consider refactoring away from the file system and using Postgres or SQLite or something.
- modals are not scrollable if height of the screen is small enough
- In the Agent Runner, add an archive button for new sessions.
- Investigate a potential issue with feature cards not refreshing. I see a lock icon on a feature card, but it doesn't go away until I open the card, edit it, and turn testing mode off. I think there's a refresh/sync issue.

File diff suppressed because it is too large.


@@ -1,15 +0,0 @@
{
"name": "@automaker/server-bundle",
"version": "0.1.0",
"type": "module",
"main": "dist/index.js",
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "^0.1.61",
"cors": "^2.8.5",
"dotenv": "^17.2.3",
"express": "^5.1.0",
"morgan": "^1.10.1",
"node-pty": "1.1.0-beta41",
"ws": "^8.18.0"
}
}


@@ -8,6 +8,20 @@
# Your Anthropic API key for Claude models
ANTHROPIC_API_KEY=sk-ant-...
# ============================================
# OPTIONAL - Additional API Keys
# ============================================
# OpenAI API key for Codex/GPT models
OPENAI_API_KEY=sk-...
# Cursor API key for Cursor models
CURSOR_API_KEY=...
# OAuth credentials for CLI authentication (extracted automatically)
CLAUDE_OAUTH_CREDENTIALS=
CURSOR_AUTH_TOKEN=
# ============================================
# OPTIONAL - Security
# ============================================
@@ -24,7 +38,7 @@ ALLOWED_ROOT_DIRECTORY=
# CORS origin - which domains can access the API
# Use "*" for development, set specific origin for production
CORS_ORIGIN=*
CORS_ORIGIN=http://localhost:3007
# ============================================
# OPTIONAL - Server
@@ -48,3 +62,15 @@ TERMINAL_ENABLED=true
TERMINAL_PASSWORD=
ENABLE_REQUEST_LOGGING=false
# ============================================
# OPTIONAL - Debugging
# ============================================
# Enable raw output logging for agent streams (default: false)
# When enabled, saves unprocessed stream events to raw-output.jsonl
# in each feature's directory (.automaker/features/{id}/raw-output.jsonl)
# Useful for debugging provider streaming issues, improving log parsing,
# or analyzing how different providers (Claude, Cursor) stream responses
# Note: This adds disk I/O overhead, only enable when debugging
AUTOMAKER_DEBUG_RAW_OUTPUT=false


@@ -1,67 +0,0 @@
# Automaker Backend Server
# Multi-stage build for minimal production image
# Build stage
FROM node:20-alpine AS builder
# Install build dependencies for native modules (node-pty)
RUN apk add --no-cache python3 make g++
WORKDIR /app
# Copy package files and scripts needed for postinstall
COPY package*.json ./
COPY apps/server/package*.json ./apps/server/
COPY scripts ./scripts
# Install dependencies
RUN npm ci --workspace=apps/server
# Copy source
COPY apps/server ./apps/server
# Build TypeScript
RUN npm run build --workspace=apps/server
# Production stage
FROM node:20-alpine
# Install git, curl, and GitHub CLI (pinned version for reproducible builds)
RUN apk add --no-cache git curl && \
GH_VERSION="2.63.2" && \
curl -L "https://github.com/cli/cli/releases/download/v${GH_VERSION}/gh_${GH_VERSION}_linux_amd64.tar.gz" -o gh.tar.gz && \
tar -xzf gh.tar.gz && \
mv "gh_${GH_VERSION}_linux_amd64/bin/gh" /usr/local/bin/gh && \
rm -rf gh.tar.gz "gh_${GH_VERSION}_linux_amd64"
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 -S automaker && \
adduser -S automaker -u 1001
# Copy built files and production dependencies
COPY --from=builder /app/apps/server/dist ./dist
COPY --from=builder /app/apps/server/package*.json ./
COPY --from=builder /app/node_modules ./node_modules
# Create data directory
RUN mkdir -p /data && chown automaker:automaker /data
# Switch to non-root user
USER automaker
# Environment variables
ENV NODE_ENV=production
ENV PORT=3008
ENV DATA_DIR=/data
# Expose port
EXPOSE 3008
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3008/api/health || exit 1
# Start server
CMD ["node", "dist/index.js"]


@@ -1,14 +1,18 @@
{
"name": "@automaker/server",
"version": "0.1.0",
"version": "0.9.0",
"description": "Backend server for Automaker - provides API for both web and Electron modes",
"author": "AutoMaker Team",
"license": "SEE LICENSE IN LICENSE",
"private": true,
"engines": {
"node": ">=22.0.0 <23.0.0"
},
"type": "module",
"main": "dist/index.js",
"scripts": {
"dev": "tsx watch src/index.ts",
"dev:test": "tsx src/index.ts",
"build": "tsc",
"start": "node dist/index.js",
"lint": "eslint src/",
@@ -20,31 +24,36 @@
"test:unit": "vitest run tests/unit"
},
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "^0.1.72",
"@automaker/dependency-resolver": "^1.0.0",
"@automaker/git-utils": "^1.0.0",
"@automaker/model-resolver": "^1.0.0",
"@automaker/platform": "^1.0.0",
"@automaker/prompts": "^1.0.0",
"@automaker/types": "^1.0.0",
"@automaker/utils": "^1.0.0",
"cors": "^2.8.5",
"dotenv": "^17.2.3",
"express": "^5.2.1",
"morgan": "^1.10.1",
"@anthropic-ai/claude-agent-sdk": "0.1.76",
"@automaker/dependency-resolver": "1.0.0",
"@automaker/git-utils": "1.0.0",
"@automaker/model-resolver": "1.0.0",
"@automaker/platform": "1.0.0",
"@automaker/prompts": "1.0.0",
"@automaker/types": "1.0.0",
"@automaker/utils": "1.0.0",
"@modelcontextprotocol/sdk": "1.25.2",
"@openai/codex-sdk": "^0.77.0",
"cookie-parser": "1.4.7",
"cors": "2.8.5",
"dotenv": "17.2.3",
"express": "5.2.1",
"morgan": "1.10.1",
"node-pty": "1.1.0-beta41",
"ws": "^8.18.3"
"ws": "8.18.3"
},
"devDependencies": {
"@types/cors": "^2.8.19",
"@types/express": "^5.0.6",
"@types/morgan": "^1.9.10",
"@types/node": "^22",
"@types/ws": "^8.18.1",
"@vitest/coverage-v8": "^4.0.16",
"@vitest/ui": "^4.0.16",
"tsx": "^4.21.0",
"typescript": "^5",
"vitest": "^4.0.16"
"@types/cookie": "0.6.0",
"@types/cookie-parser": "1.4.10",
"@types/cors": "2.8.19",
"@types/express": "5.0.6",
"@types/morgan": "1.9.10",
"@types/node": "22.19.3",
"@types/ws": "8.18.1",
"@vitest/coverage-v8": "4.0.16",
"@vitest/ui": "4.0.16",
"tsx": "4.21.0",
"typescript": "5.9.3",
"vitest": "4.0.16"
}
}


@@ -9,15 +9,22 @@
import express from 'express';
import cors from 'cors';
import morgan from 'morgan';
import cookieParser from 'cookie-parser';
import cookie from 'cookie';
import { WebSocketServer, WebSocket } from 'ws';
import { createServer } from 'http';
import dotenv from 'dotenv';
import { createEventEmitter, type EventEmitter } from './lib/events.js';
import { initAllowedPaths } from '@automaker/platform';
import { authMiddleware, getAuthStatus } from './lib/auth.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Server');
import { authMiddleware, validateWsConnectionToken, checkRawAuthentication } from './lib/auth.js';
import { requireJsonContentType } from './middleware/require-json-content-type.js';
import { createAuthRoutes } from './routes/auth/index.js';
import { createFsRoutes } from './routes/fs/index.js';
import { createHealthRoutes } from './routes/health/index.js';
import { createHealthRoutes, createDetailedHandler } from './routes/health/index.js';
import { createAgentRoutes } from './routes/agent/index.js';
import { createSessionsRoutes } from './routes/sessions/index.js';
import { createFeaturesRoutes } from './routes/features/index.js';
@@ -46,8 +53,20 @@ import { SettingsService } from './services/settings-service.js';
import { createSpecRegenerationRoutes } from './routes/app-spec/index.js';
import { createClaudeRoutes } from './routes/claude/index.js';
import { ClaudeUsageService } from './services/claude-usage-service.js';
import { createCodexRoutes } from './routes/codex/index.js';
import { CodexUsageService } from './services/codex-usage-service.js';
import { CodexAppServerService } from './services/codex-app-server-service.js';
import { CodexModelCacheService } from './services/codex-model-cache-service.js';
import { createGitHubRoutes } from './routes/github/index.js';
import { createContextRoutes } from './routes/context/index.js';
import { createBacklogPlanRoutes } from './routes/backlog-plan/index.js';
import { cleanupStaleValidations } from './routes/github/routes/validation-common.js';
import { createMCPRoutes } from './routes/mcp/index.js';
import { MCPTestService } from './services/mcp-test-service.js';
import { createPipelineRoutes } from './routes/pipeline/index.js';
import { pipelineService } from './services/pipeline-service.js';
import { createIdeationRoutes } from './routes/ideation/index.js';
import { IdeationService } from './services/ideation-service.js';
// Load environment variables
dotenv.config();
@@ -60,7 +79,7 @@ const ENABLE_REQUEST_LOGGING = process.env.ENABLE_REQUEST_LOGGING !== 'false'; /
const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
if (!hasAnthropicKey) {
console.warn(`
logger.warn(`
╔═══════════════════════════════════════════════════════════════════════╗
║ ⚠️ WARNING: No Claude authentication configured ║
║ ║
@@ -73,7 +92,7 @@ if (!hasAnthropicKey) {
╚═══════════════════════════════════════════════════════════════════════╝
`);
} else {
console.log('[Server] ✓ ANTHROPIC_API_KEY detected (API key auth)');
logger.info('✓ ANTHROPIC_API_KEY detected (API key auth)');
}
// Initialize security
@@ -85,7 +104,7 @@ const app = express();
// Middleware
// Custom colored logger showing only endpoint and status code (configurable via ENABLE_REQUEST_LOGGING env var)
if (ENABLE_REQUEST_LOGGING) {
morgan.token('status-colored', (req, res) => {
morgan.token('status-colored', (_req, res) => {
const status = res.statusCode;
if (status >= 500) return `\x1b[31m${status}\x1b[0m`; // Red for server errors
if (status >= 400) return `\x1b[33m${status}\x1b[0m`; // Yellow for client errors
@@ -99,56 +118,123 @@ if (ENABLE_REQUEST_LOGGING) {
})
);
}
// CORS configuration
// When using credentials (cookies), origin cannot be '*'
// We dynamically allow the requesting origin for local development
app.use(
cors({
origin: process.env.CORS_ORIGIN || '*',
origin: (origin, callback) => {
// Allow requests with no origin (like mobile apps, curl, Electron)
if (!origin) {
callback(null, true);
return;
}
// If CORS_ORIGIN is set, use it (can be comma-separated list)
const allowedOrigins = process.env.CORS_ORIGIN?.split(',').map((o) => o.trim());
if (allowedOrigins && allowedOrigins.length > 0 && allowedOrigins[0] !== '*') {
if (allowedOrigins.includes(origin)) {
callback(null, origin);
} else {
callback(new Error('Not allowed by CORS'));
}
return;
}
// For local development, allow localhost origins
if (
origin.startsWith('http://localhost:') ||
origin.startsWith('http://127.0.0.1:') ||
origin.startsWith('http://[::1]:')
) {
callback(null, origin);
return;
}
// Reject other origins by default for security
callback(new Error('Not allowed by CORS'));
},
credentials: true,
})
);
app.use(express.json({ limit: '50mb' }));
app.use(cookieParser());
// Create shared event emitter for streaming
const events: EventEmitter = createEventEmitter();
// Create services
const agentService = new AgentService(DATA_DIR, events);
const featureLoader = new FeatureLoader();
const autoModeService = new AutoModeService(events);
// Note: settingsService is created first so it can be injected into other services
const settingsService = new SettingsService(DATA_DIR);
const agentService = new AgentService(DATA_DIR, events, settingsService);
const featureLoader = new FeatureLoader();
const autoModeService = new AutoModeService(events, settingsService);
const claudeUsageService = new ClaudeUsageService();
const codexAppServerService = new CodexAppServerService();
const codexModelCacheService = new CodexModelCacheService(DATA_DIR, codexAppServerService);
const codexUsageService = new CodexUsageService(codexAppServerService);
const mcpTestService = new MCPTestService(settingsService);
const ideationService = new IdeationService(events, settingsService, featureLoader);
// Initialize services
(async () => {
await agentService.initialize();
console.log('[Server] Agent service initialized');
logger.info('Agent service initialized');
// Bootstrap Codex model cache in background (don't block server startup)
void codexModelCacheService.getModels().catch((err) => {
logger.error('Failed to bootstrap Codex model cache:', err);
});
})();
// Mount API routes - health is unauthenticated for monitoring
// Run stale validation cleanup every hour to prevent memory leaks from crashed validations
const VALIDATION_CLEANUP_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
setInterval(() => {
const cleaned = cleanupStaleValidations();
if (cleaned > 0) {
logger.info(`Cleaned up ${cleaned} stale validation entries`);
}
}, VALIDATION_CLEANUP_INTERVAL_MS);
// Require Content-Type: application/json for all API POST/PUT/PATCH requests
// This helps prevent CSRF and content-type confusion attacks
app.use('/api', requireJsonContentType);
// Mount API routes - health, auth, and setup are unauthenticated
app.use('/api/health', createHealthRoutes());
app.use('/api/auth', createAuthRoutes());
app.use('/api/setup', createSetupRoutes());
// Apply authentication to all other routes
app.use('/api', authMiddleware);
// Protected health endpoint with detailed info
app.get('/api/health/detailed', createDetailedHandler());
app.use('/api/fs', createFsRoutes(events));
app.use('/api/agent', createAgentRoutes(agentService, events));
app.use('/api/sessions', createSessionsRoutes(agentService));
app.use('/api/features', createFeaturesRoutes(featureLoader));
app.use('/api/auto-mode', createAutoModeRoutes(autoModeService));
app.use('/api/enhance-prompt', createEnhancePromptRoutes());
app.use('/api/enhance-prompt', createEnhancePromptRoutes(settingsService));
app.use('/api/worktree', createWorktreeRoutes());
app.use('/api/git', createGitRoutes());
app.use('/api/setup', createSetupRoutes());
app.use('/api/suggestions', createSuggestionsRoutes(events));
app.use('/api/suggestions', createSuggestionsRoutes(events, settingsService));
app.use('/api/models', createModelsRoutes());
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events));
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events, settingsService));
app.use('/api/running-agents', createRunningAgentsRoutes(autoModeService));
app.use('/api/workspace', createWorkspaceRoutes());
app.use('/api/templates', createTemplatesRoutes());
app.use('/api/terminal', createTerminalRoutes());
app.use('/api/settings', createSettingsRoutes(settingsService));
app.use('/api/claude', createClaudeRoutes(claudeUsageService));
app.use('/api/github', createGitHubRoutes());
app.use('/api/context', createContextRoutes());
app.use('/api/codex', createCodexRoutes(codexUsageService, codexModelCacheService));
app.use('/api/github', createGitHubRoutes(events, settingsService));
app.use('/api/context', createContextRoutes(settingsService));
app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService));
app.use('/api/mcp', createMCPRoutes(mcpTestService));
app.use('/api/pipeline', createPipelineRoutes(pipelineService));
app.use('/api/ideation', createIdeationRoutes(events, ideationService, featureLoader));
// Create HTTP server
const server = createServer(app);
@@ -158,10 +244,55 @@ const wss = new WebSocketServer({ noServer: true });
const terminalWss = new WebSocketServer({ noServer: true });
const terminalService = getTerminalService();
/**
* Authenticate WebSocket upgrade requests
* Checks for API key in header/query, session token in header/query, OR valid session cookie
*/
function authenticateWebSocket(request: import('http').IncomingMessage): boolean {
const url = new URL(request.url || '', `http://${request.headers.host}`);
// Convert URL search params to query object
const query: Record<string, string | undefined> = {};
url.searchParams.forEach((value, key) => {
query[key] = value;
});
// Parse cookies from header
const cookieHeader = request.headers.cookie;
const cookies = cookieHeader ? cookie.parse(cookieHeader) : {};
// Use shared authentication logic for standard auth methods
if (
checkRawAuthentication(
request.headers as Record<string, string | string[] | undefined>,
query,
cookies
)
) {
return true;
}
// Additionally check for short-lived WebSocket connection token (WebSocket-specific)
const wsToken = url.searchParams.get('wsToken');
if (wsToken && validateWsConnectionToken(wsToken)) {
return true;
}
return false;
}
// Handle HTTP upgrade requests manually to route to correct WebSocket server
server.on('upgrade', (request, socket, head) => {
const { pathname } = new URL(request.url || '', `http://${request.headers.host}`);
// Authenticate all WebSocket connections
if (!authenticateWebSocket(request)) {
logger.info('Authentication failed, rejecting connection');
socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n');
socket.destroy();
return;
}
if (pathname === '/api/events') {
wss.handleUpgrade(request, socket, head, (ws) => {
wss.emit('connection', ws, request);
@@ -177,22 +308,38 @@ server.on('upgrade', (request, socket, head) => {
// Events WebSocket connection handler
wss.on('connection', (ws: WebSocket) => {
console.log('[WebSocket] Client connected');
logger.info('Client connected, ready state:', ws.readyState);
// Subscribe to all events and forward to this client
const unsubscribe = events.subscribe((type, payload) => {
logger.info('Event received:', {
type,
hasPayload: !!payload,
payloadKeys: payload ? Object.keys(payload) : [],
wsReadyState: ws.readyState,
wsOpen: ws.readyState === WebSocket.OPEN,
});
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({ type, payload }));
const message = JSON.stringify({ type, payload });
logger.info('Sending event to client:', {
type,
messageLength: message.length,
sessionId: (payload as any)?.sessionId,
});
ws.send(message);
} else {
logger.info('WARNING: Cannot send event, WebSocket not open. ReadyState:', ws.readyState);
}
});
ws.on('close', () => {
console.log('[WebSocket] Client disconnected');
logger.info('Client disconnected');
unsubscribe();
});
ws.on('error', (error) => {
console.error('[WebSocket] Error:', error);
logger.error('ERROR:', error);
unsubscribe();
});
});
@@ -219,24 +366,24 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
const sessionId = url.searchParams.get('sessionId');
const token = url.searchParams.get('token');
console.log(`[Terminal WS] Connection attempt for session: ${sessionId}`);
logger.info(`Connection attempt for session: ${sessionId}`);
// Check if terminal is enabled
if (!isTerminalEnabled()) {
console.log('[Terminal WS] Terminal is disabled');
logger.info('Terminal is disabled');
ws.close(4003, 'Terminal access is disabled');
return;
}
// Validate token if password is required
if (isTerminalPasswordRequired() && !validateTerminalToken(token || undefined)) {
console.log('[Terminal WS] Invalid or missing token');
logger.info('Invalid or missing token');
ws.close(4001, 'Authentication required');
return;
}
if (!sessionId) {
console.log('[Terminal WS] No session ID provided');
logger.info('No session ID provided');
ws.close(4002, 'Session ID required');
return;
}
@@ -244,12 +391,12 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
// Check if session exists
const session = terminalService.getSession(sessionId);
if (!session) {
console.log(`[Terminal WS] Session ${sessionId} not found`);
logger.info(`Session ${sessionId} not found`);
ws.close(4004, 'Session not found');
return;
}
console.log(`[Terminal WS] Client connected to session ${sessionId}`);
logger.info(`Client connected to session ${sessionId}`);
// Track this connection
if (!terminalConnections.has(sessionId)) {
@@ -365,15 +512,15 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
break;
default:
console.warn(`[Terminal WS] Unknown message type: ${msg.type}`);
logger.warn(`Unknown message type: ${msg.type}`);
}
} catch (error) {
console.error('[Terminal WS] Error processing message:', error);
logger.error('Error processing message:', error);
}
});
ws.on('close', () => {
console.log(`[Terminal WS] Client disconnected from session ${sessionId}`);
logger.info(`Client disconnected from session ${sessionId}`);
unsubscribeData();
unsubscribeExit();
@@ -392,7 +539,7 @@ terminalWss.on('connection', (ws: WebSocket, req: import('http').IncomingMessage
});
ws.on('error', (error) => {
console.error(`[Terminal WS] Error on session ${sessionId}:`, error);
logger.error(`Error on session ${sessionId}:`, error);
unsubscribeData();
unsubscribeExit();
});
@@ -407,7 +554,7 @@ const startServer = (port: number) => {
: 'enabled'
: 'disabled';
const portStr = port.toString().padEnd(4);
console.log(`
logger.info(`
╔═══════════════════════════════════════════════════════╗
║ Automaker Backend Server ║
╠═══════════════════════════════════════════════════════╣
@@ -422,7 +569,7 @@ const startServer = (port: number) => {
server.on('error', (error: NodeJS.ErrnoException) => {
if (error.code === 'EADDRINUSE') {
console.error(`
logger.error(`
╔═══════════════════════════════════════════════════════╗
║ ❌ ERROR: Port ${port} is already in use ║
╠═══════════════════════════════════════════════════════╣
@@ -442,7 +589,7 @@ const startServer = (port: number) => {
`);
process.exit(1);
} else {
console.error('[Server] Error starting server:', error);
logger.error('Error starting server:', error);
process.exit(1);
}
});
@@ -450,21 +597,41 @@ const startServer = (port: number) => {
startServer(PORT);
// Global error handlers to prevent crashes from uncaught errors
process.on('unhandledRejection', (reason: unknown, _promise: Promise<unknown>) => {
logger.error('Unhandled Promise Rejection:', {
reason: reason instanceof Error ? reason.message : String(reason),
stack: reason instanceof Error ? reason.stack : undefined,
});
// Don't exit - log the error and continue running
// This prevents the server from crashing due to unhandled rejections
});
process.on('uncaughtException', (error: Error) => {
logger.error('Uncaught Exception:', {
message: error.message,
stack: error.stack,
});
// Exit on uncaught exceptions to prevent undefined behavior
// The process is in an unknown state after an uncaught exception
process.exit(1);
});
// Graceful shutdown
process.on('SIGTERM', () => {
console.log('SIGTERM received, shutting down...');
logger.info('SIGTERM received, shutting down...');
terminalService.cleanup();
server.close(() => {
console.log('Server closed');
logger.info('Server closed');
process.exit(0);
});
});
process.on('SIGINT', () => {
console.log('SIGINT received, shutting down...');
logger.info('SIGINT received, shutting down...');
terminalService.cleanup();
server.close(() => {
console.log('Server closed');
logger.info('Server closed');
process.exit(0);
});
});

View File

@@ -0,0 +1,257 @@
/**
* Agent Discovery - Scans filesystem for AGENT.md files
*
* Discovers agents from:
* - ~/.claude/agents/ (user-level, global)
* - .claude/agents/ (project-level)
*
* Similar to Skills, but for custom subagents defined in AGENT.md files.
*/
import path from 'path';
import os from 'os';
import { createLogger } from '@automaker/utils';
import { secureFs, systemPaths } from '@automaker/platform';
import type { AgentDefinition } from '@automaker/types';
const logger = createLogger('AgentDiscovery');
export interface FilesystemAgent {
name: string; // Agent name: directory name, or file name without .md (e.g., 'code-reviewer')
definition: AgentDefinition;
source: 'user' | 'project';
filePath: string; // Full path to AGENT.md
}
/**
* Parse agent content string into AgentDefinition
* Format:
* ---
* name: agent-name # Optional
* description: When to use this agent
* tools: tool1, tool2, tool3 # Optional (comma or space separated list)
* model: sonnet # Optional: sonnet, opus, haiku
* ---
* System prompt content here...
*/
function parseAgentContent(content: string, filePath: string): AgentDefinition | null {
// Extract frontmatter
const frontmatterMatch = content.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
if (!frontmatterMatch) {
logger.warn(`Invalid agent file format (missing frontmatter): ${filePath}`);
return null;
}
const [, frontmatter, prompt] = frontmatterMatch;
// Parse description (required)
const description = frontmatter.match(/description:\s*(.+)/)?.[1]?.trim();
if (!description) {
logger.warn(`Missing description in agent file: ${filePath}`);
return null;
}
// Parse tools (optional) - supports both comma-separated and space-separated
const toolsMatch = frontmatter.match(/tools:\s*(.+)/);
const tools = toolsMatch
? toolsMatch[1]
.split(/[,\s]+/) // Split by comma or whitespace
.map((t) => t.trim())
.filter((t) => t && t !== '')
: undefined;
// Parse model (optional) - validate against allowed values
const modelMatch = frontmatter.match(/model:\s*(\w+)/);
const modelValue = modelMatch?.[1]?.trim();
const validModels = ['sonnet', 'opus', 'haiku', 'inherit'] as const;
const model =
modelValue && validModels.includes(modelValue as (typeof validModels)[number])
? (modelValue as 'sonnet' | 'opus' | 'haiku' | 'inherit')
: undefined;
if (modelValue && !model) {
logger.warn(
`Invalid model "${modelValue}" in agent file: ${filePath}. Expected one of: ${validModels.join(', ')}`
);
}
return {
description,
prompt: prompt.trim(),
tools,
model,
};
}
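// Illustrative usage sketch: a minimal AGENT.md body in the format documented
// above. The description, tools, and prompt text are made-up placeholders.
function exampleParseAgentMd(): AgentDefinition | null {
  const sample = [
    '---',
    'description: Reviews pull requests for style and correctness',
    'tools: read, grep',
    'model: sonnet',
    '---',
    'You are a meticulous code reviewer. Focus on correctness first.',
  ].join('\n');
  // Expected: { description, prompt, tools: ['read', 'grep'], model: 'sonnet' }
  return parseAgentContent(sample, 'example/code-reviewer/AGENT.md');
}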
/**
* Directory entry with type information
*/
interface DirEntry {
name: string;
isFile: boolean;
isDirectory: boolean;
}
/**
* Filesystem adapter interface for abstracting systemPaths vs secureFs
*/
interface FsAdapter {
exists: (filePath: string) => Promise<boolean>;
readdir: (dirPath: string) => Promise<DirEntry[]>;
readFile: (filePath: string) => Promise<string>;
}
/**
* Create a filesystem adapter for system paths (user directory)
*/
function createSystemPathAdapter(): FsAdapter {
return {
exists: (filePath) => Promise.resolve(systemPaths.systemPathExists(filePath)),
readdir: async (dirPath) => {
const entryNames = await systemPaths.systemPathReaddir(dirPath);
const entries: DirEntry[] = [];
for (const name of entryNames) {
const stat = await systemPaths.systemPathStat(path.join(dirPath, name));
entries.push({
name,
isFile: stat.isFile(),
isDirectory: stat.isDirectory(),
});
}
return entries;
},
readFile: (filePath) => systemPaths.systemPathReadFile(filePath, 'utf-8') as Promise<string>,
};
}
/**
* Create a filesystem adapter for project paths (secureFs)
*/
function createSecureFsAdapter(): FsAdapter {
return {
exists: (filePath) =>
secureFs
.access(filePath)
.then(() => true)
.catch(() => false),
readdir: async (dirPath) => {
const entries = await secureFs.readdir(dirPath, { withFileTypes: true });
return entries.map((entry) => ({
name: entry.name,
isFile: entry.isFile(),
isDirectory: entry.isDirectory(),
}));
},
readFile: (filePath) => secureFs.readFile(filePath, 'utf-8') as Promise<string>,
};
}
/**
* Parse agent file using the provided filesystem adapter
*/
async function parseAgentFileWithAdapter(
filePath: string,
fsAdapter: FsAdapter
): Promise<AgentDefinition | null> {
try {
const content = await fsAdapter.readFile(filePath);
return parseAgentContent(content, filePath);
} catch (error) {
logger.error(`Failed to parse agent file: ${filePath}`, error);
return null;
}
}
/**
* Scan a directory for agent .md files
* Agents can be in two formats:
* 1. Flat: agent-name.md (file directly in agents/)
* 2. Subdirectory: agent-name/AGENT.md (folder + file, similar to Skills)
*/
async function scanAgentsDirectory(
baseDir: string,
source: 'user' | 'project'
): Promise<FilesystemAgent[]> {
const agents: FilesystemAgent[] = [];
const fsAdapter = source === 'user' ? createSystemPathAdapter() : createSecureFsAdapter();
try {
// Check if directory exists
const exists = await fsAdapter.exists(baseDir);
if (!exists) {
logger.debug(`Directory does not exist: ${baseDir}`);
return agents;
}
// Read all entries in the directory
const entries = await fsAdapter.readdir(baseDir);
for (const entry of entries) {
// Check for flat .md file format (agent-name.md)
if (entry.isFile && entry.name.endsWith('.md')) {
const agentName = entry.name.slice(0, -3); // Remove .md extension
const agentFilePath = path.join(baseDir, entry.name);
const definition = await parseAgentFileWithAdapter(agentFilePath, fsAdapter);
if (definition) {
agents.push({
name: agentName,
definition,
source,
filePath: agentFilePath,
});
logger.debug(`Discovered ${source} agent (flat): ${agentName}`);
}
}
// Check for subdirectory format (agent-name/AGENT.md)
else if (entry.isDirectory) {
const agentFilePath = path.join(baseDir, entry.name, 'AGENT.md');
const agentFileExists = await fsAdapter.exists(agentFilePath);
if (agentFileExists) {
const definition = await parseAgentFileWithAdapter(agentFilePath, fsAdapter);
if (definition) {
agents.push({
name: entry.name,
definition,
source,
filePath: agentFilePath,
});
logger.debug(`Discovered ${source} agent (subdirectory): ${entry.name}`);
}
}
}
}
} catch (error) {
logger.error(`Failed to scan agents directory: ${baseDir}`, error);
}
return agents;
}
/**
* Discover all filesystem-based agents from user and project sources
*/
export async function discoverFilesystemAgents(
projectPath?: string,
sources: Array<'user' | 'project'> = ['user', 'project']
): Promise<FilesystemAgent[]> {
const agents: FilesystemAgent[] = [];
// Discover user-level agents from ~/.claude/agents/
if (sources.includes('user')) {
const userAgentsDir = path.join(os.homedir(), '.claude', 'agents');
const userAgents = await scanAgentsDirectory(userAgentsDir, 'user');
agents.push(...userAgents);
logger.info(`Discovered ${userAgents.length} user-level agents from ${userAgentsDir}`);
}
// Discover project-level agents from .claude/agents/
if (sources.includes('project') && projectPath) {
const projectAgentsDir = path.join(projectPath, '.claude', 'agents');
const projectAgents = await scanAgentsDirectory(projectAgentsDir, 'project');
agents.push(...projectAgents);
logger.info(`Discovered ${projectAgents.length} project-level agents from ${projectAgentsDir}`);
}
return agents;
}
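// Illustrative usage sketch: discover agents for a project and log what was
// found. The project path is a placeholder.
async function exampleDiscoverAgents(): Promise<void> {
  const agents = await discoverFilesystemAgents('/path/to/project', ['user', 'project']);
  for (const agent of agents) {
    logger.info(`${agent.source} agent "${agent.name}" -> ${agent.filePath}`);
  }
}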

View File

@@ -0,0 +1,263 @@
/**
* Secure authentication utilities that avoid environment variable race conditions
*/
import { spawn } from 'child_process';
import { createLogger } from '@automaker/utils';
const logger = createLogger('AuthUtils');
export interface SecureAuthEnv {
[key: string]: string | undefined;
}
export interface AuthValidationResult {
isValid: boolean;
error?: string;
normalizedKey?: string;
}
/**
* Validates API key format without modifying process.env
*/
export function validateApiKey(
key: string,
provider: 'anthropic' | 'openai' | 'cursor'
): AuthValidationResult {
if (!key || typeof key !== 'string' || key.trim().length === 0) {
return { isValid: false, error: 'API key is required' };
}
const trimmedKey = key.trim();
switch (provider) {
case 'anthropic':
if (!trimmedKey.startsWith('sk-ant-')) {
return {
isValid: false,
error: 'Invalid Anthropic API key format. Should start with "sk-ant-"',
};
}
if (trimmedKey.length < 20) {
return { isValid: false, error: 'Anthropic API key too short' };
}
break;
case 'openai':
if (!trimmedKey.startsWith('sk-')) {
return { isValid: false, error: 'Invalid OpenAI API key format. Should start with "sk-"' };
}
if (trimmedKey.length < 20) {
return { isValid: false, error: 'OpenAI API key too short' };
}
break;
case 'cursor':
// Cursor API keys may use a different format than Anthropic/OpenAI keys
if (trimmedKey.length < 10) {
return { isValid: false, error: 'Cursor API key too short' };
}
break;
}
return { isValid: true, normalizedKey: trimmedKey };
}
/**
* Creates a secure environment object for authentication testing
* without modifying the global process.env
*/
export function createSecureAuthEnv(
authMethod: 'cli' | 'api_key',
apiKey?: string,
provider: 'anthropic' | 'openai' | 'cursor' = 'anthropic'
): SecureAuthEnv {
const env: SecureAuthEnv = { ...process.env };
if (authMethod === 'cli') {
// For CLI auth, remove the API key to force CLI authentication
const envKey = provider === 'openai' ? 'OPENAI_API_KEY' : 'ANTHROPIC_API_KEY';
delete env[envKey];
} else if (authMethod === 'api_key' && apiKey) {
// For API key auth, validate and set the provided key
const validation = validateApiKey(apiKey, provider);
if (!validation.isValid) {
throw new Error(validation.error);
}
const envKey = provider === 'openai' ? 'OPENAI_API_KEY' : 'ANTHROPIC_API_KEY';
env[envKey] = validation.normalizedKey;
}
return env;
}
/**
* Creates a temporary environment override for the current process
* WARNING: This should only be used in isolated contexts and immediately cleaned up
*/
export function createTempEnvOverride(authEnv: SecureAuthEnv): () => void {
const originalEnv = { ...process.env };
// Apply the auth environment
Object.assign(process.env, authEnv);
// Return cleanup function
return () => {
// Restore original environment
Object.keys(process.env).forEach((key) => {
if (!(key in originalEnv)) {
delete process.env[key];
}
});
Object.assign(process.env, originalEnv);
};
}
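// Illustrative usage sketch: because createTempEnvOverride mutates the global
// process.env, the returned cleanup function must run in a finally block. The
// API key below is a placeholder value that merely satisfies validateApiKey.
async function exampleWithTempAuthEnv(run: () => Promise<void>): Promise<void> {
  const authEnv = createSecureAuthEnv('api_key', 'sk-ant-placeholder-0123456789abcdef');
  const restore = createTempEnvOverride(authEnv);
  try {
    await run();
  } finally {
    restore(); // always restore the original environment
  }
}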
/**
* Spawns a process with secure environment isolation
*/
export function spawnSecureAuth(
command: string,
args: string[],
authEnv: SecureAuthEnv,
options: {
cwd?: string;
timeout?: number;
} = {}
): Promise<{ stdout: string; stderr: string; exitCode: number | null }> {
return new Promise((resolve, reject) => {
const { cwd = process.cwd(), timeout = 30000 } = options;
logger.debug(`Spawning secure auth process: ${command} ${args.join(' ')}`);
const child = spawn(command, args, {
cwd,
env: authEnv,
stdio: 'pipe',
shell: false,
});
let stdout = '';
let stderr = '';
let isResolved = false;
const timeoutId = setTimeout(() => {
if (!isResolved) {
child.kill('SIGTERM');
isResolved = true;
reject(new Error('Authentication process timed out'));
}
}, timeout);
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
clearTimeout(timeoutId);
if (!isResolved) {
isResolved = true;
resolve({ stdout, stderr, exitCode: code });
}
});
child.on('error', (error) => {
clearTimeout(timeoutId);
if (!isResolved) {
isResolved = true;
reject(error);
}
});
});
}
/**
* Safely extracts environment variable without race conditions
*/
export function safeGetEnv(key: string): string | undefined {
return process.env[key];
}
/**
* Checks if an environment variable would be modified without actually modifying it
*/
export function wouldModifyEnv(key: string, newValue: string): boolean {
const currentValue = safeGetEnv(key);
return currentValue !== newValue;
}
/**
* Secure auth session management
*/
export class AuthSessionManager {
private static activeSessions = new Map<string, SecureAuthEnv>();
static createSession(
sessionId: string,
authMethod: 'cli' | 'api_key',
apiKey?: string,
provider: 'anthropic' | 'openai' | 'cursor' = 'anthropic'
): SecureAuthEnv {
const env = createSecureAuthEnv(authMethod, apiKey, provider);
this.activeSessions.set(sessionId, env);
return env;
}
static getSession(sessionId: string): SecureAuthEnv | undefined {
return this.activeSessions.get(sessionId);
}
static destroySession(sessionId: string): void {
this.activeSessions.delete(sessionId);
}
static cleanup(): void {
this.activeSessions.clear();
}
}
/**
* Rate limiting for auth attempts to prevent abuse
*/
export class AuthRateLimiter {
private attempts = new Map<string, { count: number; lastAttempt: number }>();
constructor(
private maxAttempts = 5,
private windowMs = 60000
) {}
canAttempt(identifier: string): boolean {
const now = Date.now();
const record = this.attempts.get(identifier);
if (!record || now - record.lastAttempt > this.windowMs) {
this.attempts.set(identifier, { count: 1, lastAttempt: now });
return true;
}
if (record.count >= this.maxAttempts) {
return false;
}
record.count++;
record.lastAttempt = now;
return true;
}
getRemainingAttempts(identifier: string): number {
const record = this.attempts.get(identifier);
if (!record) return this.maxAttempts;
return Math.max(0, this.maxAttempts - record.count);
}
getResetTime(identifier: string): Date | null {
const record = this.attempts.get(identifier);
if (!record) return null;
return new Date(record.lastAttempt + this.windowMs);
}
}
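// Illustrative usage sketch: guard a login attempt with the limiter, keyed by
// whatever identifier the caller chooses (client IP here).
const exampleLoginLimiter = new AuthRateLimiter(5, 60_000);
function exampleGuardLogin(clientIp: string): void {
  if (!exampleLoginLimiter.canAttempt(clientIp)) {
    const resetAt = exampleLoginLimiter.getResetTime(clientIp);
    throw new Error(`Too many auth attempts; retry after ${resetAt?.toISOString()}`);
  }
  // ...proceed with credential validation...
}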

View File

@@ -1,54 +1,381 @@
/**
* Authentication middleware for API security
*
* Supports API key authentication via header or environment variable.
* Supports two authentication methods:
* 1. Header-based (X-API-Key) - Used by Electron mode
* 2. Cookie-based (HTTP-only session cookie) - Used by web mode
*
* Auto-generates an API key on first run if none is configured.
*/
import type { Request, Response, NextFunction } from 'express';
import crypto from 'crypto';
import path from 'path';
import * as secureFs from './secure-fs.js';
import { createLogger } from '@automaker/utils';
// API key from environment (optional - if not set, auth is disabled)
const API_KEY = process.env.AUTOMAKER_API_KEY;
const logger = createLogger('Auth');
const DATA_DIR = process.env.DATA_DIR || './data';
const API_KEY_FILE = path.join(DATA_DIR, '.api-key');
const SESSIONS_FILE = path.join(DATA_DIR, '.sessions');
const SESSION_COOKIE_NAME = 'automaker_session';
const SESSION_MAX_AGE_MS = 30 * 24 * 60 * 60 * 1000; // 30 days
const WS_TOKEN_MAX_AGE_MS = 5 * 60 * 1000; // 5 minutes for WebSocket connection tokens
// Session store - persisted to file for survival across server restarts
const validSessions = new Map<string, { createdAt: number; expiresAt: number }>();
// Short-lived WebSocket connection tokens (in-memory only, not persisted)
const wsConnectionTokens = new Map<string, { createdAt: number; expiresAt: number }>();
// Clean up expired WebSocket tokens periodically
setInterval(() => {
const now = Date.now();
wsConnectionTokens.forEach((data, token) => {
if (data.expiresAt <= now) {
wsConnectionTokens.delete(token);
}
});
}, 60 * 1000); // Clean up every minute
/**
* Load sessions from file on startup
*/
function loadSessions(): void {
try {
if (secureFs.existsSync(SESSIONS_FILE)) {
const data = secureFs.readFileSync(SESSIONS_FILE, 'utf-8') as string;
const sessions = JSON.parse(data) as Array<
[string, { createdAt: number; expiresAt: number }]
>;
const now = Date.now();
let loadedCount = 0;
let expiredCount = 0;
for (const [token, session] of sessions) {
// Only load non-expired sessions
if (session.expiresAt > now) {
validSessions.set(token, session);
loadedCount++;
} else {
expiredCount++;
}
}
if (loadedCount > 0 || expiredCount > 0) {
logger.info(`Loaded ${loadedCount} sessions (${expiredCount} expired)`);
}
}
} catch (error) {
logger.warn('Error loading sessions:', error);
}
}
/**
* Save sessions to file (async)
*/
async function saveSessions(): Promise<void> {
try {
await secureFs.mkdir(path.dirname(SESSIONS_FILE), { recursive: true });
const sessions = Array.from(validSessions.entries());
await secureFs.writeFile(SESSIONS_FILE, JSON.stringify(sessions), {
encoding: 'utf-8',
mode: 0o600,
});
} catch (error) {
logger.error('Failed to save sessions:', error);
}
}
// Load existing sessions on startup
loadSessions();
/**
* Ensure an API key exists - from the environment variable, from the key file, or by generating a new one.
* This provides CSRF protection by requiring a secret key for all API requests.
*/
function ensureApiKey(): string {
// First check environment variable (Electron passes it this way)
if (process.env.AUTOMAKER_API_KEY) {
logger.info('Using API key from environment variable');
return process.env.AUTOMAKER_API_KEY;
}
// Try to read from file
try {
if (secureFs.existsSync(API_KEY_FILE)) {
const key = (secureFs.readFileSync(API_KEY_FILE, 'utf-8') as string).trim();
if (key) {
logger.info('Loaded API key from file');
return key;
}
}
} catch (error) {
logger.warn('Error reading API key file:', error);
}
// Generate new key
const newKey = crypto.randomUUID();
try {
secureFs.mkdirSync(path.dirname(API_KEY_FILE), { recursive: true });
secureFs.writeFileSync(API_KEY_FILE, newKey, { encoding: 'utf-8', mode: 0o600 });
logger.info('Generated new API key');
} catch (error) {
logger.error('Failed to save API key:', error);
}
return newKey;
}
// API key - always generated/loaded on startup for CSRF protection
const API_KEY = ensureApiKey();
// Print API key to console for web mode users (unless suppressed for production logging)
if (process.env.AUTOMAKER_HIDE_API_KEY !== 'true') {
logger.info(`
╔═══════════════════════════════════════════════════════════════════════╗
║ 🔐 API Key for Web Mode Authentication ║
╠═══════════════════════════════════════════════════════════════════════╣
║ ║
║ When accessing via browser, you'll be prompted to enter this key: ║
║ ║
${API_KEY}
║ ║
║ In Electron mode, authentication is handled automatically. ║
╚═══════════════════════════════════════════════════════════════════════╝
`);
} else {
logger.info('API key banner hidden (AUTOMAKER_HIDE_API_KEY=true)');
}
/**
* Generate a cryptographically secure session token
*/
function generateSessionToken(): string {
return crypto.randomBytes(32).toString('hex');
}
/**
* Create a new session and return the token
*/
export async function createSession(): Promise<string> {
const token = generateSessionToken();
const now = Date.now();
validSessions.set(token, {
createdAt: now,
expiresAt: now + SESSION_MAX_AGE_MS,
});
await saveSessions(); // Persist to file
return token;
}
/**
* Validate a session token
* Note: This returns synchronously but triggers async persistence if session expired
*/
export function validateSession(token: string): boolean {
const session = validSessions.get(token);
if (!session) return false;
if (Date.now() > session.expiresAt) {
validSessions.delete(token);
// Fire-and-forget: persist removal asynchronously
saveSessions().catch((err) => logger.error('Error saving sessions:', err));
return false;
}
return true;
}
/**
* Invalidate a session token
*/
export async function invalidateSession(token: string): Promise<void> {
validSessions.delete(token);
await saveSessions(); // Persist removal
}
/**
* Create a short-lived WebSocket connection token
* Used for initial WebSocket handshake authentication
*/
export function createWsConnectionToken(): string {
const token = generateSessionToken();
const now = Date.now();
wsConnectionTokens.set(token, {
createdAt: now,
expiresAt: now + WS_TOKEN_MAX_AGE_MS,
});
return token;
}
/**
* Validate a WebSocket connection token
* These tokens are single-use and short-lived (5 minutes)
* The token is consumed on its first validation attempt, whether or not it succeeds
*/
export function validateWsConnectionToken(token: string): boolean {
const tokenData = wsConnectionTokens.get(token);
if (!tokenData) return false;
// Always delete the token (single-use)
wsConnectionTokens.delete(token);
// Check if expired
if (Date.now() > tokenData.expiresAt) {
return false;
}
return true;
}
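// Illustrative flow sketch for the token above: an already-authenticated HTTP
// caller is handed a wsToken, then opens the events WebSocket with it as a
// query parameter; the upgrade handler validates and consumes it. The host is
// a placeholder; /api/events matches the upgrade routing shown earlier.
function exampleBuildWsUrl(host: string): string {
  const token = createWsConnectionToken(); // valid for 5 minutes, single use
  return `ws://${host}/api/events?wsToken=${encodeURIComponent(token)}`;
}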
/**
* Validate the API key using timing-safe comparison
* Prevents timing attacks that could leak information about the key
*/
export function validateApiKey(key: string): boolean {
if (!key || typeof key !== 'string') return false;
// Both buffers must be the same length for timingSafeEqual
const keyBuffer = Buffer.from(key);
const apiKeyBuffer = Buffer.from(API_KEY);
// If lengths differ, compare against a dummy to maintain constant time
if (keyBuffer.length !== apiKeyBuffer.length) {
crypto.timingSafeEqual(apiKeyBuffer, apiKeyBuffer);
return false;
}
return crypto.timingSafeEqual(keyBuffer, apiKeyBuffer);
}
/**
* Get session cookie options
*/
export function getSessionCookieOptions(): {
httpOnly: boolean;
secure: boolean;
sameSite: 'strict' | 'lax' | 'none';
maxAge: number;
path: string;
} {
return {
httpOnly: true, // JavaScript cannot access this cookie
secure: process.env.NODE_ENV === 'production', // HTTPS only in production
sameSite: 'lax', // Sent for same-site requests and top-level navigations, but not cross-origin fetch/XHR
maxAge: SESSION_MAX_AGE_MS,
path: '/',
};
}
/**
* Get the session cookie name
*/
export function getSessionCookieName(): string {
return SESSION_COOKIE_NAME;
}
/**
* Authentication result type
*/
type AuthResult =
| { authenticated: true }
| { authenticated: false; errorType: 'invalid_api_key' | 'invalid_session' | 'no_auth' };
/**
* Core authentication check - shared between middleware and status check
* Extracts auth credentials from various sources and validates them
*/
function checkAuthentication(
headers: Record<string, string | string[] | undefined>,
query: Record<string, string | undefined>,
cookies: Record<string, string | undefined>
): AuthResult {
// Check for API key in header (Electron mode)
const headerKey = headers['x-api-key'] as string | undefined;
if (headerKey) {
if (validateApiKey(headerKey)) {
return { authenticated: true };
}
return { authenticated: false, errorType: 'invalid_api_key' };
}
// Check for session token in header (web mode with explicit token)
const sessionTokenHeader = headers['x-session-token'] as string | undefined;
if (sessionTokenHeader) {
if (validateSession(sessionTokenHeader)) {
return { authenticated: true };
}
return { authenticated: false, errorType: 'invalid_session' };
}
// Check for API key in query parameter (fallback)
const queryKey = query.apiKey;
if (queryKey) {
if (validateApiKey(queryKey)) {
return { authenticated: true };
}
return { authenticated: false, errorType: 'invalid_api_key' };
}
// Check for session cookie (web mode)
const sessionToken = cookies[SESSION_COOKIE_NAME];
if (sessionToken && validateSession(sessionToken)) {
return { authenticated: true };
}
return { authenticated: false, errorType: 'no_auth' };
}
/**
* Authentication middleware
*
* If AUTOMAKER_API_KEY is set, requires matching key in X-API-Key header.
* If not set, allows all requests (development mode).
* Accepts either:
* 1. X-API-Key header (for Electron mode)
* 2. X-Session-Token header (for web mode with explicit token)
* 3. apiKey query parameter (fallback for cases where headers can't be set)
* 4. Session cookie (for web mode)
*/
export function authMiddleware(req: Request, res: Response, next: NextFunction): void {
// If no API key is configured, allow all requests
if (!API_KEY) {
const result = checkAuthentication(
req.headers as Record<string, string | string[] | undefined>,
req.query as Record<string, string | undefined>,
(req.cookies || {}) as Record<string, string | undefined>
);
if (result.authenticated) {
next();
return;
}
// Check for API key in header
const providedKey = req.headers['x-api-key'] as string | undefined;
if (!providedKey) {
res.status(401).json({
success: false,
error: 'Authentication required. Provide X-API-Key header.',
});
return;
// Return appropriate error based on what failed
switch (result.errorType) {
case 'invalid_api_key':
res.status(403).json({
success: false,
error: 'Invalid API key.',
});
break;
case 'invalid_session':
res.status(403).json({
success: false,
error: 'Invalid or expired session token.',
});
break;
case 'no_auth':
default:
res.status(401).json({
success: false,
error: 'Authentication required.',
});
}
if (providedKey !== API_KEY) {
res.status(403).json({
success: false,
error: 'Invalid API key.',
});
return;
}
next();
}
/**
* Check if authentication is enabled
* Check if authentication is enabled (always true now)
*/
export function isAuthEnabled(): boolean {
return !!API_KEY;
return true;
}
/**
@@ -56,7 +383,31 @@ export function isAuthEnabled(): boolean {
*/
export function getAuthStatus(): { enabled: boolean; method: string } {
return {
enabled: !!API_KEY,
method: API_KEY ? 'api_key' : 'none',
enabled: true,
method: 'api_key_or_session',
};
}
/**
* Check if a request is authenticated (for status endpoint)
*/
export function isRequestAuthenticated(req: Request): boolean {
const result = checkAuthentication(
req.headers as Record<string, string | string[] | undefined>,
req.query as Record<string, string | undefined>,
(req.cookies || {}) as Record<string, string | undefined>
);
return result.authenticated;
}
/**
* Check if raw credentials are authenticated
* Used for WebSocket authentication where we don't have Express request objects
*/
export function checkRawAuthentication(
headers: Record<string, string | string[] | undefined>,
query: Record<string, string | undefined>,
cookies: Record<string, string | undefined>
): boolean {
return checkAuthentication(headers, query, cookies).authenticated;
}
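// Illustrative wiring sketch: how the middleware is expected to be mounted.
// The Express app and cookie-parser setup live elsewhere; req.cookies must be
// populated for the session-cookie path to work.
function exampleMountAuth(app: import('express').Express): void {
  app.use('/api', authMiddleware); // every /api request must pass checkAuthentication
}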

View File

@@ -0,0 +1,447 @@
/**
* Unified CLI Detection Framework
*
* Provides consistent CLI detection and management across all providers
*/
import { spawn, execSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { createLogger } from '@automaker/utils';
const logger = createLogger('CliDetection');
export interface CliInfo {
name: string;
command: string;
version?: string;
path?: string;
installed: boolean;
authenticated: boolean;
authMethod: 'cli' | 'api_key' | 'none';
platform?: string;
architectures?: string[];
}
export interface CliDetectionOptions {
timeout?: number;
includeWsl?: boolean;
wslDistribution?: string;
}
export interface CliDetectionResult {
cli: CliInfo;
detected: boolean;
issues: string[];
}
export interface UnifiedCliDetection {
claude?: CliDetectionResult;
codex?: CliDetectionResult;
cursor?: CliDetectionResult;
}
/**
* CLI Configuration for different providers
*/
const CLI_CONFIGS = {
claude: {
name: 'Claude CLI',
commands: ['claude'],
versionArgs: ['--version'],
installCommands: {
darwin: 'brew install anthropics/claude/claude',
linux: 'curl -fsSL https://claude.ai/install.sh | sh',
win32: 'iwr https://claude.ai/install.ps1 -UseBasicParsing | iex',
},
},
codex: {
name: 'Codex CLI',
commands: ['codex', 'openai'],
versionArgs: ['--version'],
installCommands: {
darwin: 'npm install -g @openai/codex-cli',
linux: 'npm install -g @openai/codex-cli',
win32: 'npm install -g @openai/codex-cli',
},
},
cursor: {
name: 'Cursor CLI',
commands: ['cursor-agent', 'cursor'],
versionArgs: ['--version'],
installCommands: {
darwin: 'brew install cursor/cursor/cursor-agent',
linux: 'curl -fsSL https://cursor.sh/install.sh | sh',
win32: 'iwr https://cursor.sh/install.ps1 -UseBasicParsing | iex',
},
},
} as const;
/**
* Detect if a CLI is installed and available
*/
export async function detectCli(
provider: keyof typeof CLI_CONFIGS,
options: CliDetectionOptions = {}
): Promise<CliDetectionResult> {
const config = CLI_CONFIGS[provider];
const { timeout = 5000, includeWsl = false, wslDistribution } = options;
const issues: string[] = [];
const cliInfo: CliInfo = {
name: config.name,
command: '',
installed: false,
authenticated: false,
authMethod: 'none',
};
try {
// Find the command in PATH
const command = await findCommand([...config.commands]);
if (command) {
cliInfo.command = command;
}
if (!cliInfo.command) {
issues.push(`${config.name} not found in PATH`);
return { cli: cliInfo, detected: false, issues };
}
cliInfo.path = cliInfo.command;
cliInfo.installed = true;
// Get version
try {
cliInfo.version = await getCliVersion(cliInfo.command, [...config.versionArgs], timeout);
} catch (error) {
issues.push(`Failed to get ${config.name} version: ${error}`);
}
// Check authentication
cliInfo.authMethod = await checkCliAuth(provider, cliInfo.command);
cliInfo.authenticated = cliInfo.authMethod !== 'none';
return { cli: cliInfo, detected: true, issues };
} catch (error) {
issues.push(`Error detecting ${config.name}: ${error}`);
return { cli: cliInfo, detected: false, issues };
}
}
/**
* Detect all CLIs in the system
*/
export async function detectAllCLis(
options: CliDetectionOptions = {}
): Promise<UnifiedCliDetection> {
const results: UnifiedCliDetection = {};
// Detect all providers in parallel
const providers = Object.keys(CLI_CONFIGS) as Array<keyof typeof CLI_CONFIGS>;
const detectionPromises = providers.map(async (provider) => {
const result = await detectCli(provider, options);
return { provider, result };
});
const detections = await Promise.all(detectionPromises);
for (const { provider, result } of detections) {
results[provider] = result;
}
return results;
}
/**
* Find the first available command from a list of alternatives
*/
export async function findCommand(commands: string[]): Promise<string | null> {
for (const command of commands) {
try {
const whichCommand = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${whichCommand} ${command}`, {
encoding: 'utf8',
timeout: 2000,
}).trim();
if (result) {
return result.split('\n')[0]; // Take the first match (Windows `where` can return multiple paths)
}
} catch {
// Command not found, try next
}
}
return null;
}
/**
* Get CLI version
*/
export async function getCliVersion(
command: string,
args: string[],
timeout: number = 5000
): Promise<string> {
return new Promise((resolve, reject) => {
const child = spawn(command, args, {
stdio: 'pipe',
timeout,
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
if (code === 0 && stdout) {
resolve(stdout.trim());
} else if (stderr) {
reject(stderr.trim());
} else {
reject(`Command exited with code ${code}`);
}
});
child.on('error', reject);
});
}
/**
* Check authentication status for a CLI
*/
export async function checkCliAuth(
provider: keyof typeof CLI_CONFIGS,
command: string
): Promise<'cli' | 'api_key' | 'none'> {
try {
switch (provider) {
case 'claude':
return await checkClaudeAuth(command);
case 'codex':
return await checkCodexAuth(command);
case 'cursor':
return await checkCursorAuth(command);
default:
return 'none';
}
} catch {
return 'none';
}
}
/**
* Check Claude CLI authentication
*/
async function checkClaudeAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
try {
// Check for environment variable
if (process.env.ANTHROPIC_API_KEY) {
return 'api_key';
}
// Try running a simple command to check CLI auth
const result = await getCliVersion(command, ['--version'], 3000);
if (result) {
return 'cli'; // If version works, assume CLI is authenticated
}
} catch {
// Version command might work even without auth, so we need a better check
}
// Try a more specific auth check
return new Promise((resolve) => {
const child = spawn(command, ['whoami'], {
stdio: 'pipe',
timeout: 3000,
});
let stdout = '';
let stderr = '';
child.stdout?.on('data', (data) => {
stdout += data.toString();
});
child.stderr?.on('data', (data) => {
stderr += data.toString();
});
child.on('close', (code) => {
if (code === 0 && stdout && !stderr.includes('not authenticated')) {
resolve('cli');
} else {
resolve('none');
}
});
child.on('error', () => {
resolve('none');
});
});
}
/**
* Check Codex CLI authentication
*/
async function checkCodexAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
// Check for environment variable
if (process.env.OPENAI_API_KEY) {
return 'api_key';
}
try {
// Try a simple auth check
const result = await getCliVersion(command, ['--version'], 3000);
if (result) {
return 'cli';
}
} catch {
// Version check failed
}
return 'none';
}
/**
* Check Cursor CLI authentication
*/
async function checkCursorAuth(command: string): Promise<'cli' | 'api_key' | 'none'> {
// Check for environment variable
if (process.env.CURSOR_API_KEY) {
return 'api_key';
}
// Check for credentials files
const credentialPaths = [
path.join(os.homedir(), '.cursor', 'credentials.json'),
path.join(os.homedir(), '.config', 'cursor', 'credentials.json'),
path.join(os.homedir(), '.cursor', 'auth.json'),
path.join(os.homedir(), '.config', 'cursor', 'auth.json'),
];
for (const credPath of credentialPaths) {
try {
if (fs.existsSync(credPath)) {
const content = fs.readFileSync(credPath, 'utf8');
const creds = JSON.parse(content);
if (creds.accessToken || creds.token || creds.apiKey) {
return 'cli';
}
}
} catch {
// Invalid credentials file
}
}
// Try a simple command
try {
const result = await getCliVersion(command, ['--version'], 3000);
if (result) {
return 'cli';
}
} catch {
// Version check failed
}
return 'none';
}
/**
* Get installation instructions for a provider
*/
export function getInstallInstructions(
provider: keyof typeof CLI_CONFIGS,
platform: NodeJS.Platform = process.platform
): string {
const config = CLI_CONFIGS[provider];
const command = config.installCommands[platform as keyof typeof config.installCommands];
if (!command) {
return `No installation instructions available for ${provider} on ${platform}`;
}
return command;
}
/**
* Get platform-specific CLI paths and versions
*/
export function getPlatformCliPaths(provider: keyof typeof CLI_CONFIGS): string[] {
const config = CLI_CONFIGS[provider];
const platform = process.platform;
switch (platform) {
case 'darwin':
return [
`/usr/local/bin/${config.commands[0]}`,
`/opt/homebrew/bin/${config.commands[0]}`,
path.join(os.homedir(), '.local', 'bin', config.commands[0]),
];
case 'linux':
return [
`/usr/bin/${config.commands[0]}`,
`/usr/local/bin/${config.commands[0]}`,
path.join(os.homedir(), '.local', 'bin', config.commands[0]),
path.join(os.homedir(), '.npm', 'global', 'bin', config.commands[0]),
];
case 'win32':
return [
path.join(
os.homedir(),
'AppData',
'Local',
'Programs',
config.commands[0],
`${config.commands[0]}.exe`
),
path.join(process.env.ProgramFiles || '', config.commands[0], `${config.commands[0]}.exe`),
path.join(
process.env.ProgramFiles || '',
config.commands[0],
'bin',
`${config.commands[0]}.exe`
),
];
default:
return [];
}
}
/**
* Validate CLI installation
*/
export function validateCliInstallation(cliInfo: CliInfo): {
valid: boolean;
issues: string[];
} {
const issues: string[] = [];
if (!cliInfo.installed) {
issues.push('CLI is not installed');
}
if (cliInfo.installed && !cliInfo.version) {
issues.push('Could not determine CLI version');
}
if (cliInfo.installed && cliInfo.authMethod === 'none') {
issues.push('CLI is not authenticated');
}
return {
valid: issues.length === 0,
issues,
};
}
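// Illustrative usage sketch: detect every configured CLI and report issues.
async function exampleReportCliStatus(): Promise<void> {
  const detection = await detectAllCLis({ timeout: 5000 });
  for (const [provider, result] of Object.entries(detection)) {
    if (!result) continue;
    const { valid, issues } = validateCliInstallation(result.cli);
    logger.info(`${provider}: installed=${result.cli.installed}, valid=${valid}`, issues);
  }
}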

View File

@@ -0,0 +1,68 @@
/**
* Shared utility for checking Codex CLI authentication status
*
* Uses 'codex login status' command to verify authentication.
* Never assumes authenticated - only returns true if CLI confirms.
*/
import { spawnProcess } from '@automaker/platform';
import { findCodexCliPath } from '@automaker/platform';
import { createLogger } from '@automaker/utils';
const logger = createLogger('CodexAuth');
const CODEX_COMMAND = 'codex';
const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
export interface CodexAuthCheckResult {
authenticated: boolean;
method: 'api_key_env' | 'cli_authenticated' | 'none';
}
/**
* Check Codex authentication status using 'codex login status' command
*
* @param cliPath Optional CLI path. If not provided, will attempt to find it.
* @returns Authentication status and method
*/
export async function checkCodexAuthentication(
cliPath?: string | null
): Promise<CodexAuthCheckResult> {
const resolvedCliPath = cliPath || (await findCodexCliPath());
const hasApiKey = !!process.env[OPENAI_API_KEY_ENV];
// If CLI is not installed, cannot be authenticated
if (!resolvedCliPath) {
logger.info('CLI not found');
return { authenticated: false, method: 'none' };
}
try {
const result = await spawnProcess({
command: resolvedCliPath || CODEX_COMMAND,
args: ['login', 'status'],
cwd: process.cwd(),
env: {
...process.env,
TERM: 'dumb', // Avoid interactive output
},
});
// Check both stdout and stderr for "logged in" - Codex CLI outputs to stderr
const combinedOutput = (result.stdout + result.stderr).toLowerCase();
const isLoggedIn = combinedOutput.includes('logged in');
if (result.exitCode === 0 && isLoggedIn) {
// Determine auth method based on what we know
const method = hasApiKey ? 'api_key_env' : 'cli_authenticated';
logger.info(`✓ Authenticated (${method})`);
return { authenticated: true, method };
}
logger.info('Not authenticated');
return { authenticated: false, method: 'none' };
} catch (error) {
logger.error('Failed to check authentication:', error);
return { authenticated: false, method: 'none' };
}
}
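// Illustrative usage sketch: callers only need the boolean and the reported
// method; the CLI path lookup happens internally when no path is passed.
async function exampleRequireCodexAuth(): Promise<void> {
  const status = await checkCodexAuthentication();
  if (!status.authenticated) {
    throw new Error('Codex CLI is not authenticated');
  }
  logger.info(`Codex auth method: ${status.method}`);
}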

View File

@@ -0,0 +1,414 @@
/**
* Unified Error Handling System for CLI Providers
*
* Provides consistent error classification, user-friendly messages, and debugging support
* across all AI providers (Claude, Codex, Cursor)
*/
import { createLogger } from '@automaker/utils';
const logger = createLogger('ErrorHandler');
export enum ErrorType {
AUTHENTICATION = 'authentication',
BILLING = 'billing',
RATE_LIMIT = 'rate_limit',
NETWORK = 'network',
TIMEOUT = 'timeout',
VALIDATION = 'validation',
PERMISSION = 'permission',
CLI_NOT_FOUND = 'cli_not_found',
CLI_NOT_INSTALLED = 'cli_not_installed',
MODEL_NOT_SUPPORTED = 'model_not_supported',
INVALID_REQUEST = 'invalid_request',
SERVER_ERROR = 'server_error',
UNKNOWN = 'unknown',
}
export enum ErrorSeverity {
LOW = 'low',
MEDIUM = 'medium',
HIGH = 'high',
CRITICAL = 'critical',
}
export interface ErrorClassification {
type: ErrorType;
severity: ErrorSeverity;
userMessage: string;
technicalMessage: string;
suggestedAction?: string;
retryable: boolean;
provider?: string;
context?: Record<string, any>;
}
export interface ErrorPattern {
type: ErrorType;
severity: ErrorSeverity;
patterns: RegExp[];
userMessage: string;
suggestedAction?: string;
retryable: boolean;
}
/**
* Error patterns for different types of errors
*/
const ERROR_PATTERNS: ErrorPattern[] = [
// Authentication errors
{
type: ErrorType.AUTHENTICATION,
severity: ErrorSeverity.HIGH,
patterns: [
/unauthorized/i,
/authentication.*fail/i,
/invalid_api_key/i,
/invalid api key/i,
/not authenticated/i,
/please.*log/i,
/token.*revoked/i,
/oauth.*error/i,
/credentials.*invalid/i,
],
userMessage: 'Authentication failed. Please check your API key or login credentials.',
suggestedAction:
"Verify your API key is correct and hasn't expired, or run the CLI login command.",
retryable: false,
},
// Billing errors
{
type: ErrorType.BILLING,
severity: ErrorSeverity.HIGH,
patterns: [
/credit.*balance.*low/i,
/insufficient.*credit/i,
/billing.*issue/i,
/payment.*required/i,
/usage.*exceeded/i,
/quota.*exceeded/i,
/add.*credit/i,
],
userMessage: 'Account has insufficient credits or billing issues.',
suggestedAction: 'Please add credits to your account or check your billing settings.',
retryable: false,
},
// Rate limit errors
{
type: ErrorType.RATE_LIMIT,
severity: ErrorSeverity.MEDIUM,
patterns: [
/rate.*limit/i,
/too.*many.*request/i,
/limit.*reached/i,
/try.*later/i,
/429/i,
/reset.*time/i,
/upgrade.*plan/i,
],
userMessage: 'Rate limit reached. Please wait before trying again.',
suggestedAction: 'Wait a few minutes before retrying, or consider upgrading your plan.',
retryable: true,
},
// Network errors
{
type: ErrorType.NETWORK,
severity: ErrorSeverity.MEDIUM,
patterns: [/network/i, /connection/i, /dns/i, /timeout/i, /econnrefused/i, /enotfound/i],
userMessage: 'Network connection issue.',
suggestedAction: 'Check your internet connection and try again.',
retryable: true,
},
// Timeout errors
{
type: ErrorType.TIMEOUT,
severity: ErrorSeverity.MEDIUM,
patterns: [/timeout/i, /aborted/i, /time.*out/i],
userMessage: 'Operation timed out.',
suggestedAction: 'Try again with a simpler request or check your connection.',
retryable: true,
},
// Permission errors
{
type: ErrorType.PERMISSION,
severity: ErrorSeverity.HIGH,
patterns: [/permission.*denied/i, /access.*denied/i, /forbidden/i, /403/i, /not.*authorized/i],
userMessage: 'Permission denied.',
suggestedAction: 'Check if you have the required permissions for this operation.',
retryable: false,
},
// CLI not found
{
type: ErrorType.CLI_NOT_FOUND,
severity: ErrorSeverity.HIGH,
patterns: [/command not found/i, /not recognized/i, /not.*installed/i, /ENOENT/i],
userMessage: 'CLI tool not found.',
suggestedAction: "Please install the required CLI tool and ensure it's in your PATH.",
retryable: false,
},
// Model not supported
{
type: ErrorType.MODEL_NOT_SUPPORTED,
severity: ErrorSeverity.HIGH,
patterns: [/model.*not.*support/i, /unknown.*model/i, /invalid.*model/i],
userMessage: 'Model not supported.',
suggestedAction: 'Check available models and use a supported one.',
retryable: false,
},
// Server errors
{
type: ErrorType.SERVER_ERROR,
severity: ErrorSeverity.HIGH,
patterns: [/internal.*server/i, /server.*error/i, /500/i, /502/i, /503/i, /504/i],
userMessage: 'Server error occurred.',
suggestedAction: 'Try again in a few minutes or contact support if the issue persists.',
retryable: true,
},
];
/**
* Classify an error into a specific type with user-friendly message
*/
export function classifyError(
error: unknown,
provider?: string,
context?: Record<string, any>
): ErrorClassification {
const errorText = getErrorText(error);
// Try to match against known patterns
for (const pattern of ERROR_PATTERNS) {
for (const regex of pattern.patterns) {
if (regex.test(errorText)) {
return {
type: pattern.type,
severity: pattern.severity,
userMessage: pattern.userMessage,
technicalMessage: errorText,
suggestedAction: pattern.suggestedAction,
retryable: pattern.retryable,
provider,
context,
};
}
}
}
// Unknown error
return {
type: ErrorType.UNKNOWN,
severity: ErrorSeverity.MEDIUM,
userMessage: 'An unexpected error occurred.',
technicalMessage: errorText,
suggestedAction: 'Please try again or contact support if the issue persists.',
retryable: true,
provider,
context,
};
}
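// Illustrative usage sketch: how a provider call site consumes a classification.
// The provider name and operation below are placeholders.
function exampleHandleProviderError(error: unknown): void {
  const classified = classifyError(error, 'claude', { operation: 'chat' });
  if (classified.retryable) {
    logger.warn(`Retryable ${classified.type} error: ${classified.userMessage}`);
  } else {
    logger.error(`Fatal ${classified.type} error: ${classified.userMessage}`);
  }
}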
/**
* Get a user-friendly error message
*/
export function getUserFriendlyErrorMessage(error: unknown, provider?: string): string {
const classification = classifyError(error, provider);
let message = classification.userMessage;
if (classification.suggestedAction) {
message += ` ${classification.suggestedAction}`;
}
// Add provider-specific context if available
if (provider) {
message = `[${provider.toUpperCase()}] ${message}`;
}
return message;
}
/**
* Check if an error is retryable
*/
export function isRetryableError(error: unknown): boolean {
const classification = classifyError(error);
return classification.retryable;
}
/**
* Check if an error is authentication-related
*/
export function isAuthenticationError(error: unknown): boolean {
const classification = classifyError(error);
return classification.type === ErrorType.AUTHENTICATION;
}
/**
* Check if an error is billing-related
*/
export function isBillingError(error: unknown): boolean {
const classification = classifyError(error);
return classification.type === ErrorType.BILLING;
}
/**
* Check if an error is rate limit related
*/
export function isRateLimitError(error: unknown): boolean {
const classification = classifyError(error);
return classification.type === ErrorType.RATE_LIMIT;
}
/**
* Get error text from various error types
*/
function getErrorText(error: unknown): string {
if (typeof error === 'string') {
return error;
}
if (error instanceof Error) {
return error.message;
}
if (typeof error === 'object' && error !== null) {
// Handle structured error objects
const errorObj = error as any;
if (errorObj.message) {
return errorObj.message;
}
if (errorObj.error?.message) {
return errorObj.error.message;
}
if (errorObj.error) {
return typeof errorObj.error === 'string' ? errorObj.error : JSON.stringify(errorObj.error);
}
return JSON.stringify(error);
}
return String(error);
}
/**
* Create a standardized error response
*/
export function createErrorResponse(
error: unknown,
provider?: string,
context?: Record<string, any>
): {
success: false;
error: string;
errorType: ErrorType;
severity: ErrorSeverity;
retryable: boolean;
suggestedAction?: string;
} {
const classification = classifyError(error, provider, context);
return {
success: false,
error: classification.userMessage,
errorType: classification.type,
severity: classification.severity,
retryable: classification.retryable,
suggestedAction: classification.suggestedAction,
};
}
/**
* Log error with full context
*/
export function logError(
error: unknown,
provider?: string,
operation?: string,
additionalContext?: Record<string, any>
): void {
const classification = classifyError(error, provider, {
operation,
...additionalContext,
});
logger.error(`Error in ${provider || 'unknown'}${operation ? ` during ${operation}` : ''}`, {
type: classification.type,
severity: classification.severity,
message: classification.userMessage,
technicalMessage: classification.technicalMessage,
retryable: classification.retryable,
suggestedAction: classification.suggestedAction,
context: classification.context,
});
}
/**
* Provider-specific error handlers
*/
export const ProviderErrorHandler = {
claude: {
classify: (error: unknown) => classifyError(error, 'claude'),
getUserMessage: (error: unknown) => getUserFriendlyErrorMessage(error, 'claude'),
isAuth: (error: unknown) => isAuthenticationError(error),
isBilling: (error: unknown) => isBillingError(error),
isRateLimit: (error: unknown) => isRateLimitError(error),
},
codex: {
classify: (error: unknown) => classifyError(error, 'codex'),
getUserMessage: (error: unknown) => getUserFriendlyErrorMessage(error, 'codex'),
isAuth: (error: unknown) => isAuthenticationError(error),
isBilling: (error: unknown) => isBillingError(error),
isRateLimit: (error: unknown) => isRateLimitError(error),
},
cursor: {
classify: (error: unknown) => classifyError(error, 'cursor'),
getUserMessage: (error: unknown) => getUserFriendlyErrorMessage(error, 'cursor'),
isAuth: (error: unknown) => isAuthenticationError(error),
isBilling: (error: unknown) => isBillingError(error),
isRateLimit: (error: unknown) => isRateLimitError(error),
},
};
/**
* Create a retry handler for retryable errors
*/
export function createRetryHandler(maxRetries: number = 3, baseDelay: number = 1000) {
return async function <T>(
operation: () => Promise<T>,
shouldRetry: (error: unknown) => boolean = isRetryableError
): Promise<T> {
let lastError: unknown;
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
return await operation();
} catch (error) {
lastError = error;
if (attempt === maxRetries || !shouldRetry(error)) {
throw error;
}
// Exponential backoff with jitter
const delay = baseDelay * Math.pow(2, attempt) + Math.random() * 1000;
logger.debug(`Retrying operation in ${delay}ms (attempt ${attempt + 1}/${maxRetries})`);
await new Promise((resolve) => setTimeout(resolve, delay));
}
}
throw lastError;
};
}
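// Illustrative usage sketch: wrap a provider call so transient failures are
// retried with exponential backoff. callProvider is a placeholder.
async function exampleWithRetries(callProvider: () => Promise<string>): Promise<string> {
  const withRetry = createRetryHandler(3, 1000);
  // isRetryableError is the default predicate; passed explicitly for clarity.
  return withRetry(callProvider, isRetryableError);
}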

View File

@@ -3,6 +3,9 @@
*/
import type { EventType, EventCallback } from '@automaker/types';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Events');
// Re-export event types from shared package
export type { EventType, EventCallback };
@@ -21,7 +24,7 @@ export function createEventEmitter(): EventEmitter {
try {
callback(type, payload);
} catch (error) {
console.error('Error in event subscriber:', error);
logger.error('Error in event subscriber:', error);
}
}
},

View File

@@ -0,0 +1,211 @@
/**
* JSON Extraction Utilities
*
* Robust JSON extraction from AI responses that may contain markdown,
* code blocks, or other text mixed with JSON content.
*
* Used by routes that need structured data from Cursor or Claude responses
* when native structured output is not available.
*/
import { createLogger } from '@automaker/utils';
const logger = createLogger('JsonExtractor');
/**
* Logger interface for optional custom logging
*/
export interface JsonExtractorLogger {
debug: (message: string, ...args: unknown[]) => void;
warn?: (message: string, ...args: unknown[]) => void;
}
/**
* Options for JSON extraction
*/
export interface ExtractJsonOptions {
/** Custom logger (defaults to internal logger) */
logger?: JsonExtractorLogger;
/** Required key that must be present in the extracted JSON */
requiredKey?: string;
/** Whether the required key's value must be an array */
requireArray?: boolean;
}
/**
* Extract JSON from response text using multiple strategies.
*
* Strategies tried in order:
* 1. JSON in a ```json code block
* 2. JSON in a ``` code block (no language)
* 3. JSON object containing the required key, found by brace matching (if requiredKey is set)
* 4. First JSON object found by brace matching
* 5. Span from the first '{' to the last '}'
* 6. The entire response parsed as JSON
*
* @param responseText - The raw response text that may contain JSON
* @param options - Optional extraction options
* @returns Parsed JSON object or null if extraction fails
*/
export function extractJson<T = Record<string, unknown>>(
responseText: string,
options: ExtractJsonOptions = {}
): T | null {
const log = options.logger || logger;
const requiredKey = options.requiredKey;
const requireArray = options.requireArray ?? false;
/**
* Validate that the result has the required key/structure
*/
const validateResult = (result: unknown): result is T => {
if (!result || typeof result !== 'object') return false;
if (requiredKey) {
const obj = result as Record<string, unknown>;
if (!(requiredKey in obj)) return false;
if (requireArray && !Array.isArray(obj[requiredKey])) return false;
}
return true;
};
/**
* Find matching closing brace by counting brackets
*/
const findMatchingBrace = (text: string, startIdx: number): number => {
let depth = 0;
for (let i = startIdx; i < text.length; i++) {
if (text[i] === '{') depth++;
if (text[i] === '}') {
depth--;
if (depth === 0) {
return i + 1;
}
}
}
return -1;
};
const strategies = [
// Strategy 1: JSON in ```json code block
() => {
const match = responseText.match(/```json\s*([\s\S]*?)```/);
if (match) {
log.debug('Extracting JSON from ```json code block');
return JSON.parse(match[1].trim());
}
return null;
},
// Strategy 2: JSON in ``` code block (no language specified)
() => {
const match = responseText.match(/```\s*([\s\S]*?)```/);
if (match) {
const content = match[1].trim();
// Only try if it looks like JSON (starts with { or [)
if (content.startsWith('{') || content.startsWith('[')) {
log.debug('Extracting JSON from ``` code block');
return JSON.parse(content);
}
}
return null;
},
// Strategy 3: Find JSON object containing the required key (if specified)
() => {
if (!requiredKey) return null;
const searchPattern = `{"${requiredKey}"`;
const startIdx = responseText.indexOf(searchPattern);
if (startIdx === -1) return null;
const endIdx = findMatchingBrace(responseText, startIdx);
if (endIdx > startIdx) {
log.debug(`Extracting JSON with required key "${requiredKey}"`);
return JSON.parse(responseText.slice(startIdx, endIdx));
}
return null;
},
// Strategy 4: Find any JSON object by matching braces
() => {
const startIdx = responseText.indexOf('{');
if (startIdx === -1) return null;
const endIdx = findMatchingBrace(responseText, startIdx);
if (endIdx > startIdx) {
log.debug('Extracting JSON by brace matching');
return JSON.parse(responseText.slice(startIdx, endIdx));
}
return null;
},
// Strategy 5: Find JSON using first { to last } (may be less accurate)
() => {
const firstBrace = responseText.indexOf('{');
const lastBrace = responseText.lastIndexOf('}');
if (firstBrace !== -1 && lastBrace > firstBrace) {
log.debug('Extracting JSON from first { to last }');
return JSON.parse(responseText.slice(firstBrace, lastBrace + 1));
}
return null;
},
// Strategy 6: Try parsing the entire response as JSON
() => {
const trimmed = responseText.trim();
if (trimmed.startsWith('{') || trimmed.startsWith('[')) {
log.debug('Parsing entire response as JSON');
return JSON.parse(trimmed);
}
return null;
},
];
for (const strategy of strategies) {
try {
const result = strategy();
if (validateResult(result)) {
log.debug('Successfully extracted JSON');
return result as T;
}
} catch {
// Strategy failed, try next
}
}
log.debug('Failed to extract JSON from response');
return null;
}
/**
* Extract JSON with a specific required key.
* Convenience wrapper around extractJson.
*
* @param responseText - The raw response text
* @param requiredKey - Key that must be present in the extracted JSON
* @param options - Additional options
* @returns Parsed JSON object or null
*/
export function extractJsonWithKey<T = Record<string, unknown>>(
responseText: string,
requiredKey: string,
options: Omit<ExtractJsonOptions, 'requiredKey'> = {}
): T | null {
return extractJson<T>(responseText, { ...options, requiredKey });
}
/**
* Extract JSON that has a required array property.
* Useful for extracting responses like { "suggestions": [...] }
*
* @param responseText - The raw response text
* @param arrayKey - Key that must contain an array
* @param options - Additional options
* @returns Parsed JSON object or null
*/
export function extractJsonWithArray<T = Record<string, unknown>>(
responseText: string,
arrayKey: string,
options: Omit<ExtractJsonOptions, 'requiredKey' | 'requireArray'> = {}
): T | null {
return extractJson<T>(responseText, { ...options, requiredKey: arrayKey, requireArray: true });
}
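
For orientation, a minimal usage sketch (the response text and import path are illustrative; only the exports above are used):

```typescript
import { extractJsonWithArray } from './json-extractor.js'; // illustrative path

// Hypothetical model output: prose followed by a bare JSON object
const responseText =
  'Here are my suggestions:\n' +
  '{ "suggestions": [{ "title": "Add caching" }, { "title": "Improve logging" }] }';

// Strategy 3 misses (it expects {"suggestions" with no space), but brace matching picks it up;
// requireArray then checks that `suggestions` is an array before returning.
const parsed = extractJsonWithArray<{ suggestions: Array<{ title: string }> }>(
  responseText,
  'suggestions'
);
// parsed?.suggestions.length === 2; a response with no extractable JSON yields null
console.log(parsed?.suggestions.length);
```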

View File

@@ -0,0 +1,173 @@
/**
* Permission enforcement utilities for Cursor provider
*/
import type { CursorCliConfigFile } from '@automaker/types';
import { createLogger } from '@automaker/utils';
const logger = createLogger('PermissionEnforcer');
export interface PermissionCheckResult {
allowed: boolean;
reason?: string;
}
/**
* Check if a tool call is allowed based on permissions
*/
export function checkToolCallPermission(
toolCall: any,
permissions: CursorCliConfigFile | null
): PermissionCheckResult {
if (!permissions || !permissions.permissions) {
// If no permissions are configured, allow everything (backward compatibility)
return { allowed: true };
}
const { allow = [], deny = [] } = permissions.permissions;
// Check shell tool calls
if (toolCall.shellToolCall?.args?.command) {
const command = toolCall.shellToolCall.args.command;
const toolName = `Shell(${extractCommandName(command)})`;
// Check deny list first (deny takes precedence)
for (const denyRule of deny) {
if (matchesRule(toolName, denyRule)) {
return {
allowed: false,
reason: `Operation blocked by permission rule: ${denyRule}`,
};
}
}
// Then check allow list
for (const allowRule of allow) {
if (matchesRule(toolName, allowRule)) {
return { allowed: true };
}
}
return {
allowed: false,
reason: `Operation not in allow list: ${toolName}`,
};
}
// Check read tool calls
if (toolCall.readToolCall?.args?.path) {
const path = toolCall.readToolCall.args.path;
const toolName = `Read(${path})`;
// Check deny list first
for (const denyRule of deny) {
if (matchesRule(toolName, denyRule)) {
return {
allowed: false,
reason: `Read operation blocked by permission rule: ${denyRule}`,
};
}
}
// Then check allow list
for (const allowRule of allow) {
if (matchesRule(toolName, allowRule)) {
return { allowed: true };
}
}
return {
allowed: false,
reason: `Read operation not in allow list: ${toolName}`,
};
}
// Check write tool calls
if (toolCall.writeToolCall?.args?.path) {
const path = toolCall.writeToolCall.args.path;
const toolName = `Write(${path})`;
// Check deny list first
for (const denyRule of deny) {
if (matchesRule(toolName, denyRule)) {
return {
allowed: false,
reason: `Write operation blocked by permission rule: ${denyRule}`,
};
}
}
// Then check allow list
for (const allowRule of allow) {
if (matchesRule(toolName, allowRule)) {
return { allowed: true };
}
}
return {
allowed: false,
reason: `Write operation not in allow list: ${toolName}`,
};
}
// For other tool types, allow by default for now
return { allowed: true };
}
/**
* Extract the base command name from a shell command
*/
function extractCommandName(command: string): string {
// Remove leading spaces and get the first word
const trimmed = command.trim();
const firstWord = trimmed.split(/\s+/)[0];
return firstWord || 'unknown';
}
/**
* Check if a tool name matches a permission rule
*/
function matchesRule(toolName: string, rule: string): boolean {
// Exact match
if (toolName === rule) {
return true;
}
// Wildcard patterns
if (rule.includes('*')) {
const regex = new RegExp(rule.replace(/\*/g, '.*'));
return regex.test(toolName);
}
// Prefix match for shell commands (e.g., "Shell(git)" matches "Shell(git status)")
if (rule.startsWith('Shell(') && toolName.startsWith('Shell(')) {
const ruleCommand = rule.slice(6, -1); // Remove "Shell(" and ")"
const toolCommand = extractCommandName(toolName.slice(6, -1)); // Remove "Shell(" and ")"
return toolCommand.startsWith(ruleCommand);
}
return false;
}
/**
* Log permission violations
*/
export function logPermissionViolation(toolCall: any, reason: string, sessionId?: string): void {
const sessionIdStr = sessionId ? ` [${sessionId}]` : '';
if (toolCall.shellToolCall?.args?.command) {
logger.warn(
`Permission violation${sessionIdStr}: Shell command blocked - ${toolCall.shellToolCall.args.command} (${reason})`
);
} else if (toolCall.readToolCall?.args?.path) {
logger.warn(
`Permission violation${sessionIdStr}: Read operation blocked - ${toolCall.readToolCall.args.path} (${reason})`
);
} else if (toolCall.writeToolCall?.args?.path) {
logger.warn(
`Permission violation${sessionIdStr}: Write operation blocked - ${toolCall.writeToolCall.args.path} (${reason})`
);
} else {
logger.warn(`Permission violation${sessionIdStr}: Tool call blocked (${reason})`, { toolCall });
}
}
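
A rough usage sketch; the permissions literal only fills the allow/deny lists the checks read (hence the assertion), and the tool-call payload is a hypothetical Cursor event:

```typescript
import type { CursorCliConfigFile } from '@automaker/types';
import { checkToolCallPermission, logPermissionViolation } from './permission-enforcer.js'; // illustrative path

const permissions = {
  permissions: {
    allow: ['Shell(git)', 'Read(*)'],
    deny: ['Shell(rm)'],
  },
} as CursorCliConfigFile;

// Hypothetical shell tool call from the Cursor CLI stream
const toolCall = { shellToolCall: { args: { command: 'git status --short' } } };

const result = checkToolCallPermission(toolCall, permissions);
if (!result.allowed) {
  logPermissionViolation(toolCall, result.reason ?? 'unknown', 'session-123');
}
// 'git status --short' maps to Shell(git), which is on the allow list, so it passes.
```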

View File

@@ -18,9 +18,80 @@
import type { Options } from '@anthropic-ai/claude-agent-sdk';
import path from 'path';
import { resolveModelString } from '@automaker/model-resolver';
import { DEFAULT_MODELS, CLAUDE_MODEL_MAP } from '@automaker/types';
import { createLogger } from '@automaker/utils';
const logger = createLogger('SdkOptions');
import {
DEFAULT_MODELS,
CLAUDE_MODEL_MAP,
type McpServerConfig,
type ThinkingLevel,
getThinkingTokenBudget,
} from '@automaker/types';
import { isPathAllowed, PathNotAllowedError, getAllowedRootDirectory } from '@automaker/platform';
/**
* Result of sandbox compatibility check
*/
export interface SandboxCompatibilityResult {
/** Whether sandbox mode can be enabled for this path */
enabled: boolean;
/** Optional message explaining why sandbox is disabled */
message?: string;
}
/**
* Check if a working directory is compatible with sandbox mode.
* Some paths (like cloud storage mounts) may not work with sandboxed execution.
*
* @param cwd - The working directory to check
* @param sandboxRequested - Whether sandbox mode was requested by settings
* @returns Object indicating if sandbox can be enabled and why not if disabled
*/
export function checkSandboxCompatibility(
cwd: string,
sandboxRequested: boolean
): SandboxCompatibilityResult {
if (!sandboxRequested) {
return { enabled: false };
}
const resolvedCwd = path.resolve(cwd);
// Check for cloud storage paths that may not be compatible with sandbox
const cloudStoragePatterns = [
// macOS mounted volumes
/^\/Volumes\/GoogleDrive/i,
/^\/Volumes\/Dropbox/i,
/^\/Volumes\/OneDrive/i,
/^\/Volumes\/iCloud/i,
// macOS home directory
/^\/Users\/[^/]+\/Google Drive/i,
/^\/Users\/[^/]+\/Dropbox/i,
/^\/Users\/[^/]+\/OneDrive/i,
/^\/Users\/[^/]+\/Library\/Mobile Documents/i, // iCloud
// Linux home directory
/^\/home\/[^/]+\/Google Drive/i,
/^\/home\/[^/]+\/Dropbox/i,
/^\/home\/[^/]+\/OneDrive/i,
// Windows
/^C:\\Users\\[^\\]+\\Google Drive/i,
/^C:\\Users\\[^\\]+\\Dropbox/i,
/^C:\\Users\\[^\\]+\\OneDrive/i,
];
for (const pattern of cloudStoragePatterns) {
if (pattern.test(resolvedCwd)) {
return {
enabled: false,
message: `Sandbox disabled: Cloud storage path detected (${resolvedCwd}). Sandbox mode may not work correctly with cloud-synced directories.`,
};
}
}
return { enabled: true };
}
/**
* Validate that a working directory is allowed by ALLOWED_ROOT_DIRECTORY.
* This is the centralized security check for ALL AI model invocations.
@@ -129,13 +200,104 @@ export function getModelForUseCase(
/**
* Base options that apply to all SDK calls
* AUTONOMOUS MODE: Always bypass permissions for fully autonomous operation
*/
function getBaseOptions(): Partial<Options> {
return {
permissionMode: 'bypassPermissions',
allowDangerouslySkipPermissions: true,
};
}
/**
* MCP options result
*/
interface McpOptions {
/** Options to spread for MCP servers */
mcpServerOptions: Partial<Options>;
}
/**
* Build MCP-related options based on configuration.
*
* @param config - The SDK options config
* @returns Object with MCP server settings to spread into final options
*/
function buildMcpOptions(config: CreateSdkOptionsConfig): McpOptions {
return {
// Include MCP servers if configured
mcpServerOptions: config.mcpServers ? { mcpServers: config.mcpServers } : {},
};
}
/**
* Build thinking options for SDK configuration.
* Converts ThinkingLevel to maxThinkingTokens for the Claude SDK.
*
* @param thinkingLevel - The thinking level to convert
* @returns Object with maxThinkingTokens if thinking is enabled
*/
function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
logger.debug(
`buildThinkingOptions: thinkingLevel="${thinkingLevel}" -> maxThinkingTokens=${maxThinkingTokens}`
);
return maxThinkingTokens ? { maxThinkingTokens } : {};
}
/**
* Build system prompt configuration based on autoLoadClaudeMd setting.
* When autoLoadClaudeMd is true:
* - Uses preset mode with 'claude_code' to enable CLAUDE.md auto-loading
* - If there's a custom systemPrompt, appends it to the preset
* - Sets settingSources to ['user', 'project'] so the SDK loads CLAUDE.md files
*
* @param config - The SDK options config
* @returns Object with systemPrompt and settingSources for SDK options
*/
function buildClaudeMdOptions(config: CreateSdkOptionsConfig): {
systemPrompt?: string | SystemPromptConfig;
settingSources?: Array<'user' | 'project' | 'local'>;
} {
if (!config.autoLoadClaudeMd) {
// Standard mode - just pass through the system prompt as-is
return config.systemPrompt ? { systemPrompt: config.systemPrompt } : {};
}
// Auto-load CLAUDE.md mode - use preset with settingSources
const result: {
systemPrompt: SystemPromptConfig;
settingSources: Array<'user' | 'project' | 'local'>;
} = {
systemPrompt: {
type: 'preset',
preset: 'claude_code',
},
// Load both user (~/.claude/CLAUDE.md) and project (.claude/CLAUDE.md) settings
settingSources: ['user', 'project'],
};
// If there's a custom system prompt, append it to the preset
if (config.systemPrompt) {
result.systemPrompt.append = config.systemPrompt;
}
return result;
}
/**
* System prompt configuration for SDK options
* When using preset mode with claude_code, CLAUDE.md files are automatically loaded
*/
export interface SystemPromptConfig {
/** Use preset mode with claude_code to enable CLAUDE.md auto-loading */
type: 'preset';
/** The preset to use - 'claude_code' enables CLAUDE.md loading */
preset: 'claude_code';
/** Optional additional prompt to append to the preset */
append?: string;
}
/**
* Options configuration for creating SDK options
*/
@@ -160,8 +322,25 @@ export interface CreateSdkOptionsConfig {
type: 'json_schema';
schema: Record<string, unknown>;
};
/** Enable auto-loading of CLAUDE.md files via SDK's settingSources */
autoLoadClaudeMd?: boolean;
/** MCP servers to make available to the agent */
mcpServers?: Record<string, McpServerConfig>;
/** Extended thinking level for Claude models */
thinkingLevel?: ThinkingLevel;
}
// Re-export MCP types from @automaker/types for convenience
export type {
McpServerConfig,
McpStdioServerConfig,
McpSSEServerConfig,
McpHttpServerConfig,
} from '@automaker/types';
/**
* Create SDK options for spec generation
*
@@ -169,11 +348,18 @@ export interface CreateSdkOptionsConfig {
* - Uses read-only tools for codebase analysis
* - Extended turns for thorough exploration
* - Opus model by default (can be overridden)
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Options {
// Validate working directory before creating options
validateWorkingDirectory(config.cwd);
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
// Override permissionMode - spec generation only needs read-only tools
@@ -184,7 +370,8 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
maxTurns: MAX_TURNS.maximum,
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.specGeneration],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...(config.outputFormat && { outputFormat: config.outputFormat }),
};
@@ -197,11 +384,18 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
* - Uses read-only tools (just needs to read the spec)
* - Quick turns since it's mostly JSON generation
* - Sonnet model by default for speed
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig): Options {
// Validate working directory before creating options
validateWorkingDirectory(config.cwd);
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
// Override permissionMode - feature generation only needs read-only tools
@@ -210,7 +404,8 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
maxTurns: MAX_TURNS.quick,
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.readOnly],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
};
}
@@ -222,18 +417,26 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
* - Uses read-only tools for analysis
* - Standard turns to allow thorough codebase exploration and structured output generation
* - Opus model by default for thorough analysis
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Options {
// Validate working directory before creating options
validateWorkingDirectory(config.cwd);
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
model: getModelForUseCase('suggestions', config.model),
maxTurns: MAX_TURNS.extended,
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.readOnly],
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...(config.outputFormat && { outputFormat: config.outputFormat }),
};
@@ -246,7 +449,7 @@ export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Option
* - Full tool access for code modification
* - Standard turns for interactive sessions
* - Model priority: explicit model > session model > chat default
* - Sandbox enabled for bash safety
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createChatOptions(config: CreateSdkOptionsConfig): Options {
// Validate working directory before creating options
@@ -255,18 +458,25 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
// Model priority: explicit model > session model > chat default
const effectiveModel = config.model || config.sessionModel;
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build MCP-related options
const mcpOptions = buildMcpOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
model: getModelForUseCase('chat', effectiveModel),
maxTurns: MAX_TURNS.standard,
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.chat],
sandbox: {
enabled: true,
autoAllowBashIfSandboxed: true,
},
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...mcpOptions.mcpServerOptions,
};
}
@@ -277,24 +487,31 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
* - Full tool access for code modification and implementation
* - Extended turns for thorough feature implementation
* - Uses default model (can be overridden)
* - Sandbox enabled for bash safety
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
// Validate working directory before creating options
validateWorkingDirectory(config.cwd);
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build MCP-related options
const mcpOptions = buildMcpOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
return {
...getBaseOptions(),
model: getModelForUseCase('auto', config.model),
maxTurns: MAX_TURNS.maximum,
cwd: config.cwd,
allowedTools: [...TOOL_PRESETS.fullAccess],
sandbox: {
enabled: true,
autoAllowBashIfSandboxed: true,
},
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...mcpOptions.mcpServerOptions,
};
}
@@ -302,25 +519,40 @@ export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
* Create custom SDK options with explicit configuration
*
* Use this when the preset options don't fit your use case.
* When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
*/
export function createCustomOptions(
config: CreateSdkOptionsConfig & {
maxTurns?: number;
allowedTools?: readonly string[];
sandbox?: { enabled: boolean; autoAllowBashIfSandboxed?: boolean };
}
): Options {
// Validate working directory before creating options
validateWorkingDirectory(config.cwd);
// Build CLAUDE.md auto-loading options if enabled
const claudeMdOptions = buildClaudeMdOptions(config);
// Build MCP-related options
const mcpOptions = buildMcpOptions(config);
// Build thinking options
const thinkingOptions = buildThinkingOptions(config.thinkingLevel);
// For custom options: use explicit allowedTools if provided, otherwise default to readOnly
const effectiveAllowedTools = config.allowedTools
? [...config.allowedTools]
: [...TOOL_PRESETS.readOnly];
return {
...getBaseOptions(),
model: getModelForUseCase('default', config.model),
maxTurns: config.maxTurns ?? MAX_TURNS.maximum,
cwd: config.cwd,
allowedTools: effectiveAllowedTools,
...(config.sandbox && { sandbox: config.sandbox }),
...claudeMdOptions,
...thinkingOptions,
...(config.abortController && { abortController: config.abortController }),
...mcpOptions.mcpServerOptions,
};
}
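
A sketch of a chat call site composing these options (paths, model id, and MCP entry are placeholders):

```typescript
import { createChatOptions } from './sdk-options.js'; // illustrative path

// Hypothetical chat call site: CLAUDE.md auto-loading on, one HTTP MCP server attached.
const options = createChatOptions({
  cwd: '/path/to/project',            // must fall under ALLOWED_ROOT_DIRECTORY
  sessionModel: 'claude-sonnet-4-5',  // placeholder model id
  systemPrompt: 'Prefer small, focused diffs.',
  autoLoadClaudeMd: true,             // switches to the claude_code preset + settingSources
  mcpServers: {
    docs: { type: 'http', url: 'https://example.com/mcp' },
  },
});
// options now carries bypassPermissions, the chat tool preset, sandboxed bash,
// and maxTurns from MAX_TURNS.standard.
```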

View File

@@ -6,6 +6,7 @@
import { secureFs } from '@automaker/platform';
export const {
// Async methods
access,
readFile,
writeFile,
@@ -20,4 +21,19 @@ export const {
lstat,
joinPath,
resolvePath,
// Sync methods
existsSync,
readFileSync,
writeFileSync,
mkdirSync,
readdirSync,
statSync,
accessSync,
unlinkSync,
rmSync,
// Throttling configuration and monitoring
configureThrottling,
getThrottlingConfig,
getPendingOperations,
getActiveOperations,
} = secureFs;
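
A small sketch using the re-exported sync methods; it assumes the secureFs wrappers keep the node:fs signatures they mirror, and the paths are placeholders:

```typescript
import { existsSync, readFileSync, mkdirSync } from './secure-fs.js'; // illustrative path

const projectDir = '/path/to/project/.automaker';
const settingsPath = `${projectDir}/settings.json`;

if (!existsSync(projectDir)) {
  mkdirSync(projectDir, { recursive: true });
}
if (existsSync(settingsPath)) {
  const settings = JSON.parse(readFileSync(settingsPath, 'utf-8') as string);
  console.log('Loaded settings', settings);
}
```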

View File

@@ -0,0 +1,323 @@
/**
* Helper utilities for loading settings and context file handling across different parts of the server
*/
import type { SettingsService } from '../services/settings-service.js';
import type { ContextFilesResult, ContextFileInfo } from '@automaker/utils';
import { createLogger } from '@automaker/utils';
import type { MCPServerConfig, McpServerConfig, PromptCustomization } from '@automaker/types';
import {
mergeAutoModePrompts,
mergeAgentPrompts,
mergeBacklogPlanPrompts,
mergeEnhancementPrompts,
} from '@automaker/prompts';
const logger = createLogger('SettingsHelper');
/**
* Get the autoLoadClaudeMd setting, with project settings taking precedence over global.
* Returns false if settings service is not available.
*
* @param projectPath - Path to the project
* @param settingsService - Optional settings service instance
* @param logPrefix - Prefix for log messages (e.g., '[DescribeImage]')
* @returns Promise resolving to the autoLoadClaudeMd setting value
*/
export async function getAutoLoadClaudeMdSetting(
projectPath: string,
settingsService?: SettingsService | null,
logPrefix = '[SettingsHelper]'
): Promise<boolean> {
if (!settingsService) {
logger.info(`${logPrefix} SettingsService not available, autoLoadClaudeMd disabled`);
return false;
}
try {
// Check project settings first (takes precedence)
const projectSettings = await settingsService.getProjectSettings(projectPath);
if (projectSettings.autoLoadClaudeMd !== undefined) {
logger.info(
`${logPrefix} autoLoadClaudeMd from project settings: ${projectSettings.autoLoadClaudeMd}`
);
return projectSettings.autoLoadClaudeMd;
}
// Fall back to global settings
const globalSettings = await settingsService.getGlobalSettings();
const result = globalSettings.autoLoadClaudeMd ?? false;
logger.info(`${logPrefix} autoLoadClaudeMd from global settings: ${result}`);
return result;
} catch (error) {
logger.error(`${logPrefix} Failed to load autoLoadClaudeMd setting:`, error);
throw error;
}
}
/**
* Filters out CLAUDE.md from context files when autoLoadClaudeMd is enabled
* and rebuilds the formatted prompt without it.
*
* When autoLoadClaudeMd is true, the SDK handles CLAUDE.md loading via settingSources,
* so we need to exclude it from the manual context loading to avoid duplication.
* Other context files (CODE_QUALITY.md, CONVENTIONS.md, etc.) are preserved.
*
* @param contextResult - Result from loadContextFiles
* @param autoLoadClaudeMd - Whether SDK auto-loading is enabled
* @returns Filtered context prompt (empty string if no non-CLAUDE.md files)
*/
export function filterClaudeMdFromContext(
contextResult: ContextFilesResult,
autoLoadClaudeMd: boolean
): string {
// If autoLoadClaudeMd is disabled, return the original prompt unchanged
if (!autoLoadClaudeMd || contextResult.files.length === 0) {
return contextResult.formattedPrompt;
}
// Filter out CLAUDE.md (case-insensitive)
const nonClaudeFiles = contextResult.files.filter((f) => f.name.toLowerCase() !== 'claude.md');
// If all files were CLAUDE.md, return empty string
if (nonClaudeFiles.length === 0) {
return '';
}
// Rebuild prompt without CLAUDE.md using the same format as loadContextFiles
const formattedFiles = nonClaudeFiles.map((file) => formatContextFileEntry(file));
return `# Project Context Files
The following context files provide project-specific rules, conventions, and guidelines.
Each file serves a specific purpose - use the description to understand when to reference it.
If you need more details about a context file, you can read the full file at the path provided.
**IMPORTANT**: You MUST follow the rules and conventions specified in these files.
- Follow ALL commands exactly as shown (e.g., if the project uses \`pnpm\`, NEVER use \`npm\` or \`npx\`)
- Follow ALL coding conventions, commit message formats, and architectural patterns specified
- Reference these rules before running ANY shell commands or making commits
---
${formattedFiles.join('\n\n---\n\n')}
---
**REMINDER**: Before taking any action, verify you are following the conventions specified above.
`;
}
/**
* Format a single context file entry for the prompt
* (Matches the format used in @automaker/utils/context-loader.ts)
*/
function formatContextFileEntry(file: ContextFileInfo): string {
const header = `## ${file.name}`;
const pathInfo = `**Path:** \`${file.path}\``;
const descriptionInfo = file.description ? `\n**Purpose:** ${file.description}` : '';
return `${header}\n${pathInfo}${descriptionInfo}\n\n${file.content}`;
}
/**
* Get enabled MCP servers from global settings, converted to SDK format.
* Returns an empty object if settings service is not available or no servers are configured.
*
* @param settingsService - Optional settings service instance
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
* @returns Promise resolving to MCP servers in SDK format (keyed by name)
*/
export async function getMCPServersFromSettings(
settingsService?: SettingsService | null,
logPrefix = '[SettingsHelper]'
): Promise<Record<string, McpServerConfig>> {
if (!settingsService) {
return {};
}
try {
const globalSettings = await settingsService.getGlobalSettings();
const mcpServers = globalSettings.mcpServers || [];
// Filter to only enabled servers and convert to SDK format
const enabledServers = mcpServers.filter((s) => s.enabled !== false);
if (enabledServers.length === 0) {
return {};
}
// Convert settings format to SDK format (keyed by name)
const sdkServers: Record<string, McpServerConfig> = {};
for (const server of enabledServers) {
sdkServers[server.name] = convertToSdkFormat(server);
}
logger.info(
`${logPrefix} Loaded ${enabledServers.length} MCP server(s): ${enabledServers.map((s) => s.name).join(', ')}`
);
return sdkServers;
} catch (error) {
logger.error(`${logPrefix} Failed to load MCP servers setting:`, error);
return {};
}
}
/**
* Convert a settings MCPServerConfig to SDK McpServerConfig format.
* Validates required fields and throws informative errors if missing.
*/
function convertToSdkFormat(server: MCPServerConfig): McpServerConfig {
if (server.type === 'sse') {
if (!server.url) {
throw new Error(`SSE MCP server "${server.name}" is missing a URL.`);
}
return {
type: 'sse',
url: server.url,
headers: server.headers,
};
}
if (server.type === 'http') {
if (!server.url) {
throw new Error(`HTTP MCP server "${server.name}" is missing a URL.`);
}
return {
type: 'http',
url: server.url,
headers: server.headers,
};
}
// Default to stdio
if (!server.command) {
throw new Error(`Stdio MCP server "${server.name}" is missing a command.`);
}
return {
type: 'stdio',
command: server.command,
args: server.args,
env: server.env,
};
}
/**
* Get prompt customization from global settings and merge with defaults.
* Returns prompts merged with built-in defaults - custom prompts override defaults.
*
* @param settingsService - Optional settings service instance
* @param logPrefix - Prefix for log messages
* @returns Promise resolving to merged prompts for all categories
*/
export async function getPromptCustomization(
settingsService?: SettingsService | null,
logPrefix = '[PromptHelper]'
): Promise<{
autoMode: ReturnType<typeof mergeAutoModePrompts>;
agent: ReturnType<typeof mergeAgentPrompts>;
backlogPlan: ReturnType<typeof mergeBacklogPlanPrompts>;
enhancement: ReturnType<typeof mergeEnhancementPrompts>;
}> {
let customization: PromptCustomization = {};
if (settingsService) {
try {
const globalSettings = await settingsService.getGlobalSettings();
customization = globalSettings.promptCustomization || {};
logger.info(`${logPrefix} Loaded prompt customization from settings`);
} catch (error) {
logger.error(`${logPrefix} Failed to load prompt customization:`, error);
// Fall through to use empty customization (all defaults)
}
} else {
logger.info(`${logPrefix} SettingsService not available, using default prompts`);
}
return {
autoMode: mergeAutoModePrompts(customization.autoMode),
agent: mergeAgentPrompts(customization.agent),
backlogPlan: mergeBacklogPlanPrompts(customization.backlogPlan),
enhancement: mergeEnhancementPrompts(customization.enhancement),
};
}
/**
* Get Skills configuration from settings.
* Returns configuration for enabling skills and which sources to load from.
*
* @param settingsService - Settings service instance
* @returns Skills configuration with enabled state, sources, and tool inclusion flag
*/
export async function getSkillsConfiguration(settingsService: SettingsService): Promise<{
enabled: boolean;
sources: Array<'user' | 'project'>;
shouldIncludeInTools: boolean;
}> {
const settings = await settingsService.getGlobalSettings();
const enabled = settings.enableSkills ?? true; // Default enabled
const sources = settings.skillsSources ?? ['user', 'project']; // Default both sources
return {
enabled,
sources,
shouldIncludeInTools: enabled && sources.length > 0,
};
}
/**
* Get Subagents configuration from settings.
* Returns configuration for enabling subagents and which sources to load from.
*
* @param settingsService - Settings service instance
* @returns Subagents configuration with enabled state, sources, and tool inclusion flag
*/
export async function getSubagentsConfiguration(settingsService: SettingsService): Promise<{
enabled: boolean;
sources: Array<'user' | 'project'>;
shouldIncludeInTools: boolean;
}> {
const settings = await settingsService.getGlobalSettings();
const enabled = settings.enableSubagents ?? true; // Default enabled
const sources = settings.subagentsSources ?? ['user', 'project']; // Default both sources
return {
enabled,
sources,
shouldIncludeInTools: enabled && sources.length > 0,
};
}
/**
* Get custom subagents from settings, merging global and project-level definitions.
* Project-level subagents take precedence over global ones with the same name.
*
* @param settingsService - Settings service instance
* @param projectPath - Path to the project for loading project-specific subagents
* @returns Record of agent names to definitions, or undefined if none configured
*/
export async function getCustomSubagents(
settingsService: SettingsService,
projectPath?: string
): Promise<Record<string, import('@automaker/types').AgentDefinition> | undefined> {
// Get global subagents
const globalSettings = await settingsService.getGlobalSettings();
const globalSubagents = globalSettings.customSubagents || {};
// If no project path, return only global subagents
if (!projectPath) {
return Object.keys(globalSubagents).length > 0 ? globalSubagents : undefined;
}
// Get project-specific subagents
const projectSettings = await settingsService.getProjectSettings(projectPath);
const projectSubagents = projectSettings.customSubagents || {};
// Merge: project-level takes precedence
const merged = {
...globalSubagents,
...projectSubagents,
};
return Object.keys(merged).length > 0 ? merged : undefined;
}
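
A small sketch of the CLAUDE.md filtering behaviour; the ContextFilesResult literal only fills the fields the function reads, so it is asserted rather than fully typed:

```typescript
import { filterClaudeMdFromContext } from './settings-helper.js'; // illustrative path
import type { ContextFilesResult } from '@automaker/utils';

const contextResult = {
  files: [
    { name: 'CLAUDE.md', path: '/proj/CLAUDE.md', description: 'Agent rules', content: '# Rules' },
    { name: 'CONVENTIONS.md', path: '/proj/CONVENTIONS.md', description: 'Team conventions', content: 'Use pnpm.' },
  ],
  formattedPrompt: '...prompt that still includes CLAUDE.md...',
} as ContextFilesResult;

// SDK auto-loading on: CLAUDE.md is dropped and the prompt is rebuilt around CONVENTIONS.md.
const filtered = filterClaudeMdFromContext(contextResult, true);

// Auto-loading off: the original formattedPrompt comes back untouched.
const unchanged = filterClaudeMdFromContext(contextResult, false);
console.log(filtered.length, unchanged === contextResult.formattedPrompt);
```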

View File

@@ -0,0 +1,181 @@
/**
* Validation Storage - CRUD operations for GitHub issue validation results
*
* Stores validation results in .automaker/validations/{issueNumber}/validation.json
* Results include the validation verdict, metadata, and timestamp for cache invalidation.
*/
import * as secureFs from './secure-fs.js';
import { getValidationsDir, getValidationDir, getValidationPath } from '@automaker/platform';
import type { StoredValidation } from '@automaker/types';
// Re-export StoredValidation for convenience
export type { StoredValidation };
/** Number of hours before a validation is considered stale */
const VALIDATION_CACHE_TTL_HOURS = 24;
/**
* Write validation result to storage
*
* Creates the validation directory if needed and stores the result as JSON.
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @param data - Validation data to store
*/
export async function writeValidation(
projectPath: string,
issueNumber: number,
data: StoredValidation
): Promise<void> {
const validationDir = getValidationDir(projectPath, issueNumber);
const validationPath = getValidationPath(projectPath, issueNumber);
// Ensure directory exists
await secureFs.mkdir(validationDir, { recursive: true });
// Write validation result
await secureFs.writeFile(validationPath, JSON.stringify(data, null, 2), 'utf-8');
}
/**
* Read validation result from storage
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns Stored validation or null if not found
*/
export async function readValidation(
projectPath: string,
issueNumber: number
): Promise<StoredValidation | null> {
try {
const validationPath = getValidationPath(projectPath, issueNumber);
const content = (await secureFs.readFile(validationPath, 'utf-8')) as string;
return JSON.parse(content) as StoredValidation;
} catch {
// File doesn't exist or can't be read
return null;
}
}
/**
* Get all stored validations for a project
*
* @param projectPath - Absolute path to project directory
* @returns Array of stored validations
*/
export async function getAllValidations(projectPath: string): Promise<StoredValidation[]> {
const validationsDir = getValidationsDir(projectPath);
try {
const dirs = await secureFs.readdir(validationsDir, { withFileTypes: true });
// Read all validation files in parallel for better performance
const promises = dirs
.filter((dir) => dir.isDirectory())
.map((dir) => {
const issueNumber = parseInt(dir.name, 10);
if (!isNaN(issueNumber)) {
return readValidation(projectPath, issueNumber);
}
return Promise.resolve(null);
});
const results = await Promise.all(promises);
const validations = results.filter((v): v is StoredValidation => v !== null);
// Sort by issue number
validations.sort((a, b) => a.issueNumber - b.issueNumber);
return validations;
} catch {
// Directory doesn't exist
return [];
}
}
/**
* Delete a validation from storage
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns true if validation was deleted, false if not found
*/
export async function deleteValidation(projectPath: string, issueNumber: number): Promise<boolean> {
try {
const validationDir = getValidationDir(projectPath, issueNumber);
await secureFs.rm(validationDir, { recursive: true, force: true });
return true;
} catch {
return false;
}
}
/**
* Check if a validation is stale (older than TTL)
*
* @param validation - Stored validation to check
* @returns true if validation is older than 24 hours
*/
export function isValidationStale(validation: StoredValidation): boolean {
const validatedAt = new Date(validation.validatedAt);
const now = new Date();
const hoursDiff = (now.getTime() - validatedAt.getTime()) / (1000 * 60 * 60);
return hoursDiff > VALIDATION_CACHE_TTL_HOURS;
}
/**
* Get validation with freshness info
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns Object with validation and isStale flag, or null if not found
*/
export async function getValidationWithFreshness(
projectPath: string,
issueNumber: number
): Promise<{ validation: StoredValidation; isStale: boolean } | null> {
const validation = await readValidation(projectPath, issueNumber);
if (!validation) {
return null;
}
return {
validation,
isStale: isValidationStale(validation),
};
}
/**
* Mark a validation as viewed by the user
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns true if validation was marked as viewed, false if not found
*/
export async function markValidationViewed(
projectPath: string,
issueNumber: number
): Promise<boolean> {
const validation = await readValidation(projectPath, issueNumber);
if (!validation) {
return false;
}
validation.viewedAt = new Date().toISOString();
await writeValidation(projectPath, issueNumber, validation);
return true;
}
/**
* Get count of unviewed, non-stale validations for a project
*
* @param projectPath - Absolute path to project directory
* @returns Number of unviewed validations
*/
export async function getUnviewedValidationsCount(projectPath: string): Promise<number> {
const validations = await getAllValidations(projectPath);
return validations.filter((v) => !v.viewedAt && !isValidationStale(v)).length;
}
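
A rough round-trip sketch using the helpers above; the StoredValidation literal is deliberately partial (the full field list lives in @automaker/types), hence the assertion:

```typescript
import {
  writeValidation,
  getValidationWithFreshness,
  markValidationViewed,
  type StoredValidation,
} from './validation-storage.js'; // illustrative path

async function recordAndCheck(projectPath: string): Promise<void> {
  await writeValidation(projectPath, 42, {
    issueNumber: 42,
    validatedAt: new Date().toISOString(),
    // ...verdict and metadata fields as defined by StoredValidation
  } as StoredValidation);

  const entry = await getValidationWithFreshness(projectPath, 42);
  if (entry && !entry.isStale) {
    // Mark as seen so it no longer counts toward getUnviewedValidationsCount()
    await markValidationViewed(projectPath, 42);
  }
}
```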

View File

@@ -0,0 +1,36 @@
/**
* Version utility - Reads version from package.json
*/
import { readFileSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Version');
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
let cachedVersion: string | null = null;
/**
* Get the version from package.json
* Caches the result for performance
*/
export function getVersion(): string {
if (cachedVersion) {
return cachedVersion;
}
try {
const packageJsonPath = join(__dirname, '..', '..', 'package.json');
const packageJson = JSON.parse(readFileSync(packageJsonPath, 'utf-8'));
const version = packageJson.version || '0.0.0';
cachedVersion = version;
return version;
} catch (error) {
logger.warn('Failed to read version from package.json:', error);
return '0.0.0';
}
}

View File

@@ -0,0 +1,50 @@
/**
* Middleware to enforce Content-Type: application/json for request bodies
*
* This security middleware prevents malicious requests by requiring proper
* Content-Type headers for all POST, PUT, and PATCH requests.
*
* Rejecting requests without proper Content-Type helps prevent:
* - CSRF attacks via form submissions (which use application/x-www-form-urlencoded)
* - Content-type confusion attacks
* - Malformed request exploitation
*/
import type { Request, Response, NextFunction } from 'express';
// HTTP methods that typically include request bodies
const METHODS_REQUIRING_JSON = ['POST', 'PUT', 'PATCH'];
/**
* Middleware that requires Content-Type: application/json for POST/PUT/PATCH requests
*
* Returns 415 Unsupported Media Type if:
* - The request method is POST, PUT, or PATCH
* - AND the Content-Type header is missing or not application/json
*
* Allows requests to pass through if:
* - The request method is GET, DELETE, OPTIONS, HEAD, etc.
* - OR the Content-Type is properly set to application/json (with optional charset)
*/
export function requireJsonContentType(req: Request, res: Response, next: NextFunction): void {
// Skip validation for methods that don't require a body
if (!METHODS_REQUIRING_JSON.includes(req.method)) {
next();
return;
}
const contentType = req.headers['content-type'];
// Check if Content-Type header exists and contains application/json
// Allows for charset parameter: "application/json; charset=utf-8"
if (!contentType || !contentType.toLowerCase().includes('application/json')) {
res.status(415).json({
success: false,
error: 'Unsupported Media Type',
message: 'Content-Type header must be application/json',
});
return;
}
next();
}
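
A minimal wiring sketch for an Express app (import path and route are placeholders); the middleware must run before the body parser so malformed requests are rejected early:

```typescript
import express from 'express';
import { requireJsonContentType } from './json-content-type.js'; // illustrative path

const app = express();

// Reject POST/PUT/PATCH without application/json before parsing the body
app.use(requireJsonContentType);
app.use(express.json());

app.post('/api/features', (req, res) => {
  res.json({ success: true, received: req.body });
});

app.listen(3007); // placeholder port
```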

View File

@@ -7,6 +7,10 @@
import { query, type Options } from '@anthropic-ai/claude-agent-sdk';
import { BaseProvider } from './base-provider.js';
import { classifyError, getUserFriendlyErrorMessage, createLogger } from '@automaker/utils';
const logger = createLogger('ClaudeProvider');
import { getThinkingTokenBudget, validateBareModelId } from '@automaker/types';
import type {
ExecuteOptions,
ProviderMessage,
@@ -14,6 +18,32 @@ import type {
ModelDefinition,
} from './types.js';
// Explicit allowlist of environment variables to pass to the SDK.
// Only these vars are passed - nothing else from process.env leaks through.
const ALLOWED_ENV_VARS = [
'ANTHROPIC_API_KEY',
'PATH',
'HOME',
'SHELL',
'TERM',
'USER',
'LANG',
'LC_ALL',
];
/**
* Build environment for the SDK with only explicitly allowed variables
*/
function buildEnv(): Record<string, string | undefined> {
const env: Record<string, string | undefined> = {};
for (const key of ALLOWED_ENV_VARS) {
if (process.env[key]) {
env[key] = process.env[key];
}
}
return env;
}
export class ClaudeProvider extends BaseProvider {
getName(): string {
return 'claude';
@@ -23,6 +53,10 @@ export class ClaudeProvider extends BaseProvider {
* Execute a query using Claude Agent SDK
*/
async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
// Validate that model doesn't have a provider prefix
// AgentService should strip prefixes before passing to providers
validateBareModelId(options.model, 'ClaudeProvider');
const {
prompt,
model,
@@ -33,28 +67,38 @@ export class ClaudeProvider extends BaseProvider {
abortController,
conversationHistory,
sdkSessionId,
thinkingLevel,
} = options;
// Convert thinking level to token budget
const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
// Build Claude SDK options
const sdkOptions: Options = {
model,
systemPrompt,
maxTurns,
cwd,
sandbox: {
enabled: true,
autoAllowBashIfSandboxed: true,
},
// Pass only explicitly allowed environment variables to SDK
env: buildEnv(),
// Pass through allowedTools if provided by caller (decided by sdk-options.ts)
...(allowedTools && { allowedTools }),
// AUTONOMOUS MODE: Always bypass permissions for fully autonomous operation
permissionMode: 'bypassPermissions',
allowDangerouslySkipPermissions: true,
abortController,
// Resume existing SDK session if we have a session ID
...(sdkSessionId && conversationHistory && conversationHistory.length > 0
? { resume: sdkSessionId }
: {}),
// Forward settingSources for CLAUDE.md file loading
...(options.settingSources && { settingSources: options.settingSources }),
// Forward MCP servers configuration
...(options.mcpServers && { mcpServers: options.mcpServers }),
// Extended thinking configuration
...(maxThinkingTokens && { maxThinkingTokens }),
// Subagents configuration for specialized task delegation
...(options.agents && { agents: options.agents }),
};
// Build prompt payload
@@ -88,8 +132,32 @@ export class ClaudeProvider extends BaseProvider {
yield msg as ProviderMessage;
}
} catch (error) {
// Enhance error with user-friendly message and classification
const errorInfo = classifyError(error);
const userMessage = getUserFriendlyErrorMessage(error);
logger.error('executeQuery() error during execution:', {
type: errorInfo.type,
message: errorInfo.message,
isRateLimit: errorInfo.isRateLimit,
retryAfter: errorInfo.retryAfter,
stack: (error as Error).stack,
});
// Build enhanced error message with additional guidance for rate limits
const message = errorInfo.isRateLimit
? `${userMessage}\n\nTip: If you're running multiple features in auto-mode, consider reducing concurrency (maxConcurrency setting) to avoid hitting rate limits.`
: userMessage;
const enhancedError = new Error(message);
(enhancedError as any).originalError = error;
(enhancedError as any).type = errorInfo.type;
if (errorInfo.isRateLimit) {
(enhancedError as any).retryAfter = errorInfo.retryAfter;
}
throw enhancedError;
}
}
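
A sketch of a caller consuming the enriched error fields attached above (import paths are illustrative; the retry delay unit is an assumption):

```typescript
import { ClaudeProvider } from './claude-provider.js'; // illustrative path
import type { ExecuteOptions } from './types.js';

async function runQuery(provider: ClaudeProvider, options: ExecuteOptions): Promise<void> {
  try {
    for await (const message of provider.executeQuery(options)) {
      console.log(message); // placeholder sink for normalized messages
    }
  } catch (error) {
    const err = error as Error & { type?: string; retryAfter?: number };
    console.error(`Provider error (${err.type ?? 'unknown'}): ${err.message}`);
    if (err.retryAfter !== undefined) {
      // Only rate-limit errors carry retryAfter; back off before retrying.
      // The unit is whatever classifyError reports -- treated here as milliseconds.
      await new Promise((resolve) => setTimeout(resolve, err.retryAfter));
    }
  }
}
```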

View File

@@ -0,0 +1,558 @@
/**
* CliProvider - Abstract base class for CLI-based AI providers
*
* Provides common infrastructure for CLI tools that spawn subprocesses
* and stream JSONL output. Handles:
* - Platform-specific CLI detection (PATH, common locations)
* - Windows execution strategies (WSL, npx, direct, cmd)
* - JSONL subprocess spawning and streaming
* - Error mapping infrastructure
*
* @example
* ```typescript
* class CursorProvider extends CliProvider {
* getCliName(): string { return 'cursor-agent'; }
* getSpawnConfig(): CliSpawnConfig {
* return {
* windowsStrategy: 'wsl',
* commonPaths: {
* linux: ['~/.local/bin/cursor-agent'],
* darwin: ['~/.local/bin/cursor-agent'],
* }
* };
* }
* // ... implement abstract methods
* }
* ```
*/
import { execSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
import { BaseProvider } from './base-provider.js';
import type { ProviderConfig, ExecuteOptions, ProviderMessage } from './types.js';
import {
spawnJSONLProcess,
type SubprocessOptions,
isWslAvailable,
findCliInWsl,
createWslCommand,
windowsToWslPath,
type WslCliResult,
} from '@automaker/platform';
import { createLogger, isAbortError } from '@automaker/utils';
/**
* Spawn strategy for CLI tools on Windows
*
* Different CLI tools require different execution strategies:
* - 'wsl': Requires WSL, CLI only available on Linux/macOS (e.g., cursor-agent)
* - 'npx': Installed globally via npm/npx, use `npx <package>` to run
* - 'direct': Native Windows binary, can spawn directly
* - 'cmd': Windows batch file (.cmd/.bat), needs cmd.exe shell
*/
export type SpawnStrategy = 'wsl' | 'npx' | 'direct' | 'cmd';
/**
* Configuration for CLI tool spawning
*/
export interface CliSpawnConfig {
/** How to spawn on Windows */
windowsStrategy: SpawnStrategy;
/** NPX package name (required if windowsStrategy is 'npx') */
npxPackage?: string;
/** Preferred WSL distribution (if windowsStrategy is 'wsl') */
wslDistribution?: string;
/**
* Common installation paths per platform
* Use ~ for home directory (will be expanded)
* Keys: 'linux', 'darwin', 'win32'
*/
commonPaths: Record<string, string[]>;
/** Version check command (defaults to --version) */
versionCommand?: string;
}
/**
* CLI error information for consistent error handling
*/
export interface CliErrorInfo {
code: string;
message: string;
recoverable: boolean;
suggestion?: string;
}
/**
* Detection result from CLI path finding
*/
export interface CliDetectionResult {
/** Path to the CLI (or 'npx' for npx strategy) */
cliPath: string | null;
/** Whether using WSL mode */
useWsl: boolean;
/** WSL path if using WSL */
wslCliPath?: string;
/** WSL distribution if using WSL */
wslDistribution?: string;
/** Detected strategy used */
strategy: SpawnStrategy | 'native';
}
// Create logger for CLI operations
const cliLogger = createLogger('CliProvider');
/**
* Abstract base class for CLI-based providers
*
* Subclasses must implement:
* - getCliName(): CLI executable name
* - getSpawnConfig(): Platform-specific spawn configuration
* - buildCliArgs(): Convert ExecuteOptions to CLI arguments
* - normalizeEvent(): Convert CLI output to ProviderMessage
*/
export abstract class CliProvider extends BaseProvider {
// CLI detection results (cached after first detection)
protected cliPath: string | null = null;
protected useWsl: boolean = false;
protected wslCliPath: string | null = null;
protected wslDistribution: string | undefined = undefined;
protected detectedStrategy: SpawnStrategy | 'native' = 'native';
// NPX args (used when strategy is 'npx')
protected npxArgs: string[] = [];
constructor(config: ProviderConfig = {}) {
super(config);
// Detection happens lazily on first use
}
// ==========================================================================
// Abstract methods - must be implemented by subclasses
// ==========================================================================
/**
* Get the CLI executable name (e.g., 'cursor-agent', 'aider')
*/
abstract getCliName(): string;
/**
* Get spawn configuration for this CLI
*/
abstract getSpawnConfig(): CliSpawnConfig;
/**
* Build CLI arguments from execution options
* @param options Execution options
* @returns Array of CLI arguments
*/
abstract buildCliArgs(options: ExecuteOptions): string[];
/**
* Normalize a raw CLI event to ProviderMessage format
* @param event Raw event from CLI JSONL output
* @returns Normalized ProviderMessage or null to skip
*/
abstract normalizeEvent(event: unknown): ProviderMessage | null;
// ==========================================================================
// Optional overrides
// ==========================================================================
/**
* Map CLI stderr/exit code to error info
* Override to provide CLI-specific error mapping
*/
protected mapError(stderr: string, exitCode: number | null): CliErrorInfo {
const lower = stderr.toLowerCase();
// Common authentication errors
if (
lower.includes('not authenticated') ||
lower.includes('please log in') ||
lower.includes('unauthorized')
) {
return {
code: 'NOT_AUTHENTICATED',
message: `${this.getCliName()} is not authenticated`,
recoverable: true,
suggestion: `Run "${this.getCliName()} login" to authenticate`,
};
}
// Rate limiting
if (
lower.includes('rate limit') ||
lower.includes('too many requests') ||
lower.includes('429')
) {
return {
code: 'RATE_LIMITED',
message: 'API rate limit exceeded',
recoverable: true,
suggestion: 'Wait a few minutes and try again',
};
}
// Network errors
if (
lower.includes('network') ||
lower.includes('connection') ||
lower.includes('econnrefused') ||
lower.includes('timeout')
) {
return {
code: 'NETWORK_ERROR',
message: 'Network connection error',
recoverable: true,
suggestion: 'Check your internet connection and try again',
};
}
// Process killed
if (exitCode === 137 || lower.includes('killed') || lower.includes('sigterm')) {
return {
code: 'PROCESS_CRASHED',
message: 'Process was terminated',
recoverable: true,
suggestion: 'The process may have run out of memory. Try a simpler task.',
};
}
// Generic error
return {
code: 'UNKNOWN_ERROR',
message: stderr || `Process exited with code ${exitCode}`,
recoverable: false,
};
}
/**
* Get installation instructions for this CLI
* Override to provide CLI-specific instructions
*/
protected getInstallInstructions(): string {
const cliName = this.getCliName();
const config = this.getSpawnConfig();
if (process.platform === 'win32') {
switch (config.windowsStrategy) {
case 'wsl':
return `${cliName} requires WSL on Windows. Install WSL, then run inside WSL to install.`;
case 'npx':
return `Install with: npm install -g ${config.npxPackage || cliName}`;
case 'cmd':
case 'direct':
return `${cliName} is not installed. Check the documentation for installation instructions.`;
}
}
return `${cliName} is not installed. Check the documentation for installation instructions.`;
}
// ==========================================================================
// CLI Detection
// ==========================================================================
/**
* Expand ~ to home directory in path
*/
private expandPath(p: string): string {
if (p.startsWith('~')) {
return path.join(os.homedir(), p.slice(1));
}
return p;
}
/**
* Find CLI in PATH using 'which' (Unix) or 'where' (Windows)
*/
private findCliInPath(): string | null {
const cliName = this.getCliName();
try {
const command = process.platform === 'win32' ? 'where' : 'which';
const result = execSync(`${command} ${cliName}`, {
encoding: 'utf8',
timeout: 5000,
stdio: ['pipe', 'pipe', 'pipe'],
windowsHide: true,
})
.trim()
.split('\n')[0];
if (result && fs.existsSync(result)) {
cliLogger.debug(`Found ${cliName} in PATH: ${result}`);
return result;
}
} catch {
// Not in PATH
}
return null;
}
/**
* Find CLI in common installation paths for current platform
*/
private findCliInCommonPaths(): string | null {
const config = this.getSpawnConfig();
const cliName = this.getCliName();
const platform = process.platform as 'linux' | 'darwin' | 'win32';
const paths = config.commonPaths[platform] || [];
for (const p of paths) {
const expandedPath = this.expandPath(p);
if (fs.existsSync(expandedPath)) {
cliLogger.debug(`Found ${cliName} at: ${expandedPath}`);
return expandedPath;
}
}
return null;
}
/**
* Detect CLI installation using appropriate strategy
*/
protected detectCli(): CliDetectionResult {
const config = this.getSpawnConfig();
const cliName = this.getCliName();
const wslLogger = (msg: string) => cliLogger.debug(msg);
// Windows - use configured strategy
if (process.platform === 'win32') {
switch (config.windowsStrategy) {
case 'wsl': {
// Check WSL for CLI
if (isWslAvailable({ logger: wslLogger })) {
const wslResult: WslCliResult | null = findCliInWsl(cliName, {
logger: wslLogger,
distribution: config.wslDistribution,
});
if (wslResult) {
cliLogger.debug(
`Using ${cliName} via WSL (${wslResult.distribution || 'default'}): ${wslResult.wslPath}`
);
return {
cliPath: 'wsl.exe',
useWsl: true,
wslCliPath: wslResult.wslPath,
wslDistribution: wslResult.distribution,
strategy: 'wsl',
};
}
}
cliLogger.debug(`${cliName} not found (WSL not available or CLI not installed in WSL)`);
return { cliPath: null, useWsl: false, strategy: 'wsl' };
}
case 'npx': {
// For npx, we don't need to find the CLI, just return npx
cliLogger.debug(`Using ${cliName} via npx (package: ${config.npxPackage})`);
return {
cliPath: 'npx',
useWsl: false,
strategy: 'npx',
};
}
case 'direct':
case 'cmd': {
// Native Windows - check PATH and common paths
const pathResult = this.findCliInPath();
if (pathResult) {
return { cliPath: pathResult, useWsl: false, strategy: config.windowsStrategy };
}
const commonResult = this.findCliInCommonPaths();
if (commonResult) {
return { cliPath: commonResult, useWsl: false, strategy: config.windowsStrategy };
}
cliLogger.debug(`${cliName} not found on Windows`);
return { cliPath: null, useWsl: false, strategy: config.windowsStrategy };
}
}
}
// Linux/macOS - native execution
const pathResult = this.findCliInPath();
if (pathResult) {
return { cliPath: pathResult, useWsl: false, strategy: 'native' };
}
const commonResult = this.findCliInCommonPaths();
if (commonResult) {
return { cliPath: commonResult, useWsl: false, strategy: 'native' };
}
cliLogger.debug(`${cliName} not found`);
return { cliPath: null, useWsl: false, strategy: 'native' };
}
/**
* Ensure CLI is detected (lazy initialization)
*/
protected ensureCliDetected(): void {
if (this.cliPath !== null || this.detectedStrategy !== 'native') {
return; // Already detected
}
const result = this.detectCli();
this.cliPath = result.cliPath;
this.useWsl = result.useWsl;
this.wslCliPath = result.wslCliPath || null;
this.wslDistribution = result.wslDistribution;
this.detectedStrategy = result.strategy;
// Set up npx args if using npx strategy
const config = this.getSpawnConfig();
if (result.strategy === 'npx' && config.npxPackage) {
this.npxArgs = [config.npxPackage];
}
}
/**
* Check if CLI is installed
*/
async isInstalled(): Promise<boolean> {
this.ensureCliDetected();
return this.cliPath !== null;
}
// ==========================================================================
// Subprocess Spawning
// ==========================================================================
/**
* Build subprocess options based on detected strategy
*/
protected buildSubprocessOptions(options: ExecuteOptions, cliArgs: string[]): SubprocessOptions {
this.ensureCliDetected();
if (!this.cliPath) {
throw new Error(`${this.getCliName()} CLI not found. ${this.getInstallInstructions()}`);
}
const cwd = options.cwd || process.cwd();
// Filter undefined values from process.env
const filteredEnv: Record<string, string> = {};
for (const [key, value] of Object.entries(process.env)) {
if (value !== undefined) {
filteredEnv[key] = value;
}
}
// WSL strategy
if (this.useWsl && this.wslCliPath) {
const wslCwd = windowsToWslPath(cwd);
const wslCmd = createWslCommand(this.wslCliPath, cliArgs, {
distribution: this.wslDistribution,
});
// Add --cd flag to change directory inside WSL
let args: string[];
if (this.wslDistribution) {
args = ['-d', this.wslDistribution, '--cd', wslCwd, this.wslCliPath, ...cliArgs];
} else {
args = ['--cd', wslCwd, this.wslCliPath, ...cliArgs];
}
cliLogger.debug(`WSL spawn: ${wslCmd.command} ${args.slice(0, 6).join(' ')}...`);
return {
command: wslCmd.command,
args,
cwd, // Windows cwd for spawn
env: filteredEnv,
abortController: options.abortController,
timeout: 120000, // CLI operations may take longer
};
}
// NPX strategy
if (this.detectedStrategy === 'npx') {
const allArgs = [...this.npxArgs, ...cliArgs];
cliLogger.debug(`NPX spawn: npx ${allArgs.slice(0, 6).join(' ')}...`);
return {
command: 'npx',
args: allArgs,
cwd,
env: filteredEnv,
abortController: options.abortController,
timeout: 120000,
};
}
// Direct strategy (native Unix or Windows direct/cmd)
cliLogger.debug(`Direct spawn: ${this.cliPath} ${cliArgs.slice(0, 6).join(' ')}...`);
return {
command: this.cliPath,
args: cliArgs,
cwd,
env: filteredEnv,
abortController: options.abortController,
timeout: 120000,
};
}
/**
* Execute a query using the CLI with JSONL streaming
*
* This is a default implementation that:
* 1. Builds CLI args from options
* 2. Spawns the subprocess with appropriate strategy
* 3. Streams and normalizes events
*
* Subclasses can override for custom behavior.
*/
async *executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage> {
this.ensureCliDetected();
if (!this.cliPath) {
throw new Error(`${this.getCliName()} CLI not found. ${this.getInstallInstructions()}`);
}
const cliArgs = this.buildCliArgs(options);
const subprocessOptions = this.buildSubprocessOptions(options, cliArgs);
try {
for await (const rawEvent of spawnJSONLProcess(subprocessOptions)) {
const normalized = this.normalizeEvent(rawEvent);
if (normalized) {
yield normalized;
}
}
} catch (error) {
if (isAbortError(error)) {
cliLogger.debug('Query aborted');
return;
}
// Map CLI errors
if (error instanceof Error && 'stderr' in error) {
const errorInfo = this.mapError(
(error as { stderr?: string }).stderr || error.message,
(error as { exitCode?: number | null }).exitCode ?? null
);
const cliError = new Error(errorInfo.message) as Error & CliErrorInfo;
cliError.code = errorInfo.code;
cliError.recoverable = errorInfo.recoverable;
cliError.suggestion = errorInfo.suggestion;
throw cliError;
}
throw error;
}
}
}
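
A minimal consumption sketch for the default streaming flow above, assuming a concrete CliProvider subclass instance and an illustrative prompt, model string, and import path:

// Hedged usage sketch: collects text blocks from the normalized event stream.
import type { CliProvider } from './cli-provider.js';

async function collectResponse(provider: CliProvider, cwd: string): Promise<string> {
  const abortController = new AbortController();
  let text = '';
  for await (const message of provider.executeQuery({
    prompt: 'List the TypeScript files in this project', // illustrative
    model: 'gpt-5.2-codex', // illustrative
    cwd,
    abortController,
  })) {
    if (message.type === 'assistant' && message.message) {
      for (const block of message.message.content) {
        if (block.type === 'text' && block.text) text += block.text;
      }
    }
    if (message.type === 'error') throw new Error(message.error);
  }
  return text;
}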

View File

@@ -0,0 +1,85 @@
/**
* Codex Config Manager - Writes MCP server configuration for Codex CLI
*/
import path from 'path';
import type { McpServerConfig } from '@automaker/types';
import * as secureFs from '../lib/secure-fs.js';
const CODEX_CONFIG_DIR = '.codex';
const CODEX_CONFIG_FILENAME = 'config.toml';
const CODEX_MCP_SECTION = 'mcp_servers';
function formatTomlString(value: string): string {
return JSON.stringify(value);
}
function formatTomlArray(values: string[]): string {
const formatted = values.map((value) => formatTomlString(value)).join(', ');
return `[${formatted}]`;
}
function formatTomlInlineTable(values: Record<string, string>): string {
const entries = Object.entries(values).map(
([key, value]) => `${key} = ${formatTomlString(value)}`
);
return `{ ${entries.join(', ')} }`;
}
function formatTomlKey(key: string): string {
return `"${key.replace(/"/g, '\\"')}"`;
}
function buildServerBlock(name: string, server: McpServerConfig): string[] {
const lines: string[] = [];
const section = `${CODEX_MCP_SECTION}.${formatTomlKey(name)}`;
lines.push(`[${section}]`);
if (server.type) {
lines.push(`type = ${formatTomlString(server.type)}`);
}
if ('command' in server && server.command) {
lines.push(`command = ${formatTomlString(server.command)}`);
}
if ('args' in server && server.args && server.args.length > 0) {
lines.push(`args = ${formatTomlArray(server.args)}`);
}
if ('env' in server && server.env && Object.keys(server.env).length > 0) {
lines.push(`env = ${formatTomlInlineTable(server.env)}`);
}
if ('url' in server && server.url) {
lines.push(`url = ${formatTomlString(server.url)}`);
}
if ('headers' in server && server.headers && Object.keys(server.headers).length > 0) {
lines.push(`headers = ${formatTomlInlineTable(server.headers)}`);
}
return lines;
}
export class CodexConfigManager {
async configureMcpServers(
cwd: string,
mcpServers: Record<string, McpServerConfig>
): Promise<void> {
const configDir = path.join(cwd, CODEX_CONFIG_DIR);
const configPath = path.join(configDir, CODEX_CONFIG_FILENAME);
await secureFs.mkdir(configDir, { recursive: true });
const blocks: string[] = [];
for (const [name, server] of Object.entries(mcpServers)) {
blocks.push(...buildServerBlock(name, server), '');
}
const content = blocks.join('\n').trim();
if (content) {
await secureFs.writeFile(configPath, content + '\n', 'utf-8');
}
}
}
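
A minimal usage sketch for the config manager, assuming a hypothetical stdio-style MCP server entry named "automaker"; the commented TOML shows the rough shape of the generated .codex/config.toml:

import { CodexConfigManager } from './codex-config-manager.js'; // path assumed

async function writeCodexMcpConfig(projectPath: string): Promise<void> {
  const manager = new CodexConfigManager();
  await manager.configureMcpServers(projectPath, {
    // 'stdio' and the command/args/env values are illustrative assumptions.
    automaker: {
      type: 'stdio',
      command: 'node',
      args: ['mcp-server.js'],
      env: { LOG_LEVEL: 'debug' },
    },
  });
  // Expected <projectPath>/.codex/config.toml content (roughly):
  // [mcp_servers."automaker"]
  // type = "stdio"
  // command = "node"
  // args = ["mcp-server.js"]
  // env = { LOG_LEVEL = "debug" }
}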

View File

@@ -0,0 +1,111 @@
/**
* Codex Model Definitions
*
* Official Codex CLI models as documented at https://developers.openai.com/codex/models/
*/
import { CODEX_MODEL_MAP } from '@automaker/types';
import type { ModelDefinition } from './types.js';
const CONTEXT_WINDOW_256K = 256000;
const CONTEXT_WINDOW_128K = 128000;
const MAX_OUTPUT_32K = 32000;
const MAX_OUTPUT_16K = 16000;
/**
* All available Codex models with their specifications
* Based on https://developers.openai.com/codex/models/
*/
export const CODEX_MODELS: ModelDefinition[] = [
// ========== Recommended Codex Models ==========
{
id: CODEX_MODEL_MAP.gpt52Codex,
name: 'GPT-5.2-Codex',
modelString: CODEX_MODEL_MAP.gpt52Codex,
provider: 'openai',
description:
'Most advanced agentic coding model for complex software engineering (default for ChatGPT users).',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
default: true,
hasReasoning: true,
},
{
id: CODEX_MODEL_MAP.gpt51CodexMax,
name: 'GPT-5.1-Codex-Max',
modelString: CODEX_MODEL_MAP.gpt51CodexMax,
provider: 'openai',
description: 'Optimized for long-horizon, agentic coding tasks in Codex.',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
hasReasoning: true,
},
{
id: CODEX_MODEL_MAP.gpt51CodexMini,
name: 'GPT-5.1-Codex-Mini',
modelString: CODEX_MODEL_MAP.gpt51CodexMini,
provider: 'openai',
description: 'Smaller, more cost-effective version for faster workflows.',
contextWindow: CONTEXT_WINDOW_128K,
maxOutputTokens: MAX_OUTPUT_16K,
supportsVision: true,
supportsTools: true,
tier: 'basic' as const,
hasReasoning: false,
},
// ========== General-Purpose GPT Models ==========
{
id: CODEX_MODEL_MAP.gpt52,
name: 'GPT-5.2',
modelString: CODEX_MODEL_MAP.gpt52,
provider: 'openai',
description: 'Best general agentic model for tasks across industries and domains.',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'standard' as const,
hasReasoning: true,
},
{
id: CODEX_MODEL_MAP.gpt51,
name: 'GPT-5.1',
modelString: CODEX_MODEL_MAP.gpt51,
provider: 'openai',
description: 'Great for coding and agentic tasks across domains.',
contextWindow: CONTEXT_WINDOW_256K,
maxOutputTokens: MAX_OUTPUT_32K,
supportsVision: true,
supportsTools: true,
tier: 'standard' as const,
hasReasoning: true,
},
];
/**
* Get model definition by ID
*/
export function getCodexModelById(modelId: string): ModelDefinition | undefined {
return CODEX_MODELS.find((m) => m.id === modelId || m.modelString === modelId);
}
/**
* Get all models that support reasoning
*/
export function getReasoningModels(): ModelDefinition[] {
return CODEX_MODELS.filter((m) => m.hasReasoning);
}
/**
* Get models by tier
*/
export function getModelsByTier(tier: 'premium' | 'standard' | 'basic'): ModelDefinition[] {
return CODEX_MODELS.filter((m) => m.tier === tier);
}
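
A quick lookup sketch for the helpers above, assuming CODEX_MODEL_MAP.gpt52Codex resolves to the ID 'gpt-5.2-codex' (the map itself is not shown in this diff):

import { getCodexModelById, getModelsByTier, getReasoningModels } from './codex-models.js';

const model = getCodexModelById('gpt-5.2-codex'); // ID is an assumption
console.log(model?.contextWindow, model?.maxOutputTokens); // 256000 32000 if the ID matches

for (const basic of getModelsByTier('basic')) {
  console.log(basic.name, basic.maxOutputTokens); // e.g. GPT-5.1-Codex-Mini 16000
}
console.log(getReasoningModels().map((m) => m.name)); // the models flagged hasReasoning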

File diff suppressed because it is too large

View File

@@ -0,0 +1,173 @@
/**
* Codex SDK client - Executes Codex queries via official @openai/codex-sdk
*
* Used for programmatic control of Codex from within the application.
* Provides cleaner integration than spawning CLI processes.
*/
import { Codex } from '@openai/codex-sdk';
import { formatHistoryAsText, classifyError, getUserFriendlyErrorMessage } from '@automaker/utils';
import { supportsReasoningEffort } from '@automaker/types';
import type { ExecuteOptions, ProviderMessage } from './types.js';
const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
const SDK_HISTORY_HEADER = 'Current request:\n';
const DEFAULT_RESPONSE_TEXT = '';
const SDK_ERROR_DETAILS_LABEL = 'Details:';
type PromptBlock = {
type: string;
text?: string;
source?: {
type?: string;
media_type?: string;
data?: string;
};
};
function resolveApiKey(): string {
const apiKey = process.env[OPENAI_API_KEY_ENV];
if (!apiKey) {
throw new Error('OPENAI_API_KEY is not set.');
}
return apiKey;
}
function normalizePromptBlocks(prompt: ExecuteOptions['prompt']): PromptBlock[] {
if (Array.isArray(prompt)) {
return prompt as PromptBlock[];
}
return [{ type: 'text', text: prompt }];
}
function buildPromptText(options: ExecuteOptions, systemPrompt: string | null): string {
const historyText =
options.conversationHistory && options.conversationHistory.length > 0
? formatHistoryAsText(options.conversationHistory)
: '';
const promptBlocks = normalizePromptBlocks(options.prompt);
const promptTexts: string[] = [];
for (const block of promptBlocks) {
if (block.type === 'text' && typeof block.text === 'string' && block.text.trim()) {
promptTexts.push(block.text);
}
}
const promptContent = promptTexts.join('\n\n');
if (!promptContent.trim()) {
throw new Error('Codex SDK prompt is empty.');
}
const parts: string[] = [];
if (systemPrompt) {
parts.push(`System: ${systemPrompt}`);
}
if (historyText) {
parts.push(historyText);
}
parts.push(`${SDK_HISTORY_HEADER}${promptContent}`);
return parts.join('\n\n');
}
function buildSdkErrorMessage(rawMessage: string, userMessage: string): string {
if (!rawMessage) {
return userMessage;
}
if (!userMessage || rawMessage === userMessage) {
return rawMessage;
}
return `${userMessage}\n\n${SDK_ERROR_DETAILS_LABEL} ${rawMessage}`;
}
/**
* Execute a query using the official Codex SDK
*
* The SDK provides a cleaner interface than spawning CLI processes:
 * - Authenticates via the OPENAI_API_KEY environment variable

* - Provides TypeScript types
* - Supports thread management and resumption
* - Better error handling
*/
export async function* executeCodexSdkQuery(
options: ExecuteOptions,
systemPrompt: string | null
): AsyncGenerator<ProviderMessage> {
try {
const apiKey = resolveApiKey();
const codex = new Codex({ apiKey });
// Resume existing thread or start new one
let thread;
if (options.sdkSessionId) {
try {
thread = codex.resumeThread(options.sdkSessionId);
} catch {
// If resume fails, start a new thread
thread = codex.startThread();
}
} else {
thread = codex.startThread();
}
const promptText = buildPromptText(options, systemPrompt);
// Build run options with reasoning effort if supported
const runOptions: {
signal?: AbortSignal;
reasoning?: { effort: string };
} = {
signal: options.abortController?.signal,
};
// Add reasoning effort if model supports it and reasoningEffort is specified
if (
options.reasoningEffort &&
supportsReasoningEffort(options.model) &&
options.reasoningEffort !== 'none'
) {
runOptions.reasoning = { effort: options.reasoningEffort };
}
// Run the query
const result = await thread.run(promptText, runOptions);
// Extract response text (from finalResponse property)
const outputText = result.finalResponse ?? DEFAULT_RESPONSE_TEXT;
// Get thread ID (may be null if not populated yet)
const threadId = thread.id ?? undefined;
// Yield assistant message
yield {
type: 'assistant',
session_id: threadId,
message: {
role: 'assistant',
content: [{ type: 'text', text: outputText }],
},
};
// Yield result
yield {
type: 'result',
subtype: 'success',
session_id: threadId,
result: outputText,
};
} catch (error) {
const errorInfo = classifyError(error);
const userMessage = getUserFriendlyErrorMessage(error);
const combinedMessage = buildSdkErrorMessage(errorInfo.message, userMessage);
console.error('[CodexSDK] executeQuery() error during execution:', {
type: errorInfo.type,
message: errorInfo.message,
isRateLimit: errorInfo.isRateLimit,
retryAfter: errorInfo.retryAfter,
stack: error instanceof Error ? error.stack : undefined,
});
yield { type: 'error', error: combinedMessage };
}
}
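
A minimal sketch of driving the SDK path, assuming OPENAI_API_KEY is set and that reasoningEffort and sdkSessionId are part of ExecuteOptions in @automaker/types:

import { executeCodexSdkQuery } from './codex-sdk-client.js'; // path assumed

async function runSdkQuery(cwd: string): Promise<void> {
  for await (const message of executeCodexSdkQuery(
    {
      prompt: 'Summarize the open TODOs in this repository', // illustrative
      model: 'gpt-5.2-codex', // illustrative
      cwd,
      reasoningEffort: 'medium', // assumed field/value
      abortController: new AbortController(),
    },
    'You are a concise coding assistant.'
  )) {
    if (message.type === 'result') console.log(message.result);
    if (message.type === 'error') console.error(message.error);
  }
}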

View File

@@ -0,0 +1,436 @@
export type CodexToolResolution = {
name: string;
input: Record<string, unknown>;
};
export type CodexTodoItem = {
content: string;
status: 'pending' | 'in_progress' | 'completed';
activeForm?: string;
};
const TOOL_NAME_BASH = 'Bash';
const TOOL_NAME_READ = 'Read';
const TOOL_NAME_EDIT = 'Edit';
const TOOL_NAME_WRITE = 'Write';
const TOOL_NAME_GREP = 'Grep';
const TOOL_NAME_GLOB = 'Glob';
const TOOL_NAME_TODO = 'TodoWrite';
const TOOL_NAME_DELETE = 'Delete';
const TOOL_NAME_LS = 'Ls';
const INPUT_KEY_COMMAND = 'command';
const INPUT_KEY_FILE_PATH = 'file_path';
const INPUT_KEY_PATTERN = 'pattern';
const SHELL_WRAPPER_PATTERNS = [
/^\/bin\/bash\s+-lc\s+["']([\s\S]+)["']$/,
/^bash\s+-lc\s+["']([\s\S]+)["']$/,
/^\/bin\/sh\s+-lc\s+["']([\s\S]+)["']$/,
/^sh\s+-lc\s+["']([\s\S]+)["']$/,
/^cmd\.exe\s+\/c\s+["']?([\s\S]+)["']?$/i,
/^powershell(?:\.exe)?\s+-Command\s+["']?([\s\S]+)["']?$/i,
/^pwsh(?:\.exe)?\s+-Command\s+["']?([\s\S]+)["']?$/i,
] as const;
const COMMAND_SEPARATOR_PATTERN = /\s*(?:&&|\|\||;)\s*/;
const SEGMENT_SKIP_PREFIXES = ['cd ', 'export ', 'set ', 'pushd '] as const;
const WRAPPER_COMMANDS = new Set(['sudo', 'env', 'command']);
const READ_COMMANDS = new Set(['cat', 'sed', 'head', 'tail', 'less', 'more', 'bat', 'stat', 'wc']);
const SEARCH_COMMANDS = new Set(['rg', 'grep', 'ag', 'ack']);
const GLOB_COMMANDS = new Set(['ls', 'find', 'fd', 'tree']);
const DELETE_COMMANDS = new Set(['rm', 'del', 'erase', 'remove', 'unlink']);
const LIST_COMMANDS = new Set(['ls', 'dir', 'll', 'la']);
const WRITE_COMMANDS = new Set(['tee', 'touch', 'mkdir']);
const APPLY_PATCH_COMMAND = 'apply_patch';
const APPLY_PATCH_PATTERN = /\bapply_patch\b/;
const REDIRECTION_TARGET_PATTERN = /(?:>>|>)\s*([^\s]+)/;
const SED_IN_PLACE_FLAGS = new Set(['-i', '--in-place']);
const PERL_IN_PLACE_FLAG = /-.*i/;
const SEARCH_PATTERN_FLAGS = new Set(['-e', '--regexp']);
const SEARCH_VALUE_FLAGS = new Set([
'-g',
'--glob',
'--iglob',
'--type',
'--type-add',
'--type-clear',
'--encoding',
]);
const SEARCH_FILE_LIST_FLAGS = new Set(['--files']);
const TODO_LINE_PATTERN = /^[-*]\s*(?:\[(?<status>[ x~])\]\s*)?(?<content>.+)$/;
const TODO_STATUS_COMPLETED = 'completed';
const TODO_STATUS_IN_PROGRESS = 'in_progress';
const TODO_STATUS_PENDING = 'pending';
const PATCH_FILE_MARKERS = [
'*** Update File: ',
'*** Add File: ',
'*** Delete File: ',
'*** Move to: ',
] as const;
function stripShellWrapper(command: string): string {
const trimmed = command.trim();
for (const pattern of SHELL_WRAPPER_PATTERNS) {
const match = trimmed.match(pattern);
if (match && match[1]) {
return unescapeCommand(match[1].trim());
}
}
return trimmed;
}
function unescapeCommand(command: string): string {
return command.replace(/\\(["'])/g, '$1');
}
function extractPrimarySegment(command: string): string {
const segments = command
.split(COMMAND_SEPARATOR_PATTERN)
.map((segment) => segment.trim())
.filter(Boolean);
for (const segment of segments) {
const shouldSkip = SEGMENT_SKIP_PREFIXES.some((prefix) => segment.startsWith(prefix));
if (!shouldSkip) {
return segment;
}
}
return command.trim();
}
function tokenizeCommand(command: string): string[] {
const tokens: string[] = [];
let current = '';
let inSingleQuote = false;
let inDoubleQuote = false;
let isEscaped = false;
for (const char of command) {
if (isEscaped) {
current += char;
isEscaped = false;
continue;
}
if (char === '\\') {
isEscaped = true;
continue;
}
if (char === "'" && !inDoubleQuote) {
inSingleQuote = !inSingleQuote;
continue;
}
if (char === '"' && !inSingleQuote) {
inDoubleQuote = !inDoubleQuote;
continue;
}
if (!inSingleQuote && !inDoubleQuote && /\s/.test(char)) {
if (current) {
tokens.push(current);
current = '';
}
continue;
}
current += char;
}
if (current) {
tokens.push(current);
}
return tokens;
}
function stripWrapperTokens(tokens: string[]): string[] {
let index = 0;
while (index < tokens.length && WRAPPER_COMMANDS.has(tokens[index].toLowerCase())) {
index += 1;
}
return tokens.slice(index);
}
function extractFilePathFromTokens(tokens: string[]): string | null {
const candidates = tokens.slice(1).filter((token) => token && !token.startsWith('-'));
if (candidates.length === 0) return null;
return candidates[candidates.length - 1];
}
function extractSearchPattern(tokens: string[]): string | null {
const remaining = tokens.slice(1);
for (let index = 0; index < remaining.length; index += 1) {
const token = remaining[index];
if (token === '--') {
return remaining[index + 1] ?? null;
}
if (SEARCH_PATTERN_FLAGS.has(token)) {
return remaining[index + 1] ?? null;
}
if (SEARCH_VALUE_FLAGS.has(token)) {
index += 1;
continue;
}
if (token.startsWith('-')) {
continue;
}
return token;
}
return null;
}
function extractTeeTarget(tokens: string[]): string | null {
const teeIndex = tokens.findIndex((token) => token === 'tee');
if (teeIndex < 0) return null;
const candidate = tokens[teeIndex + 1];
return candidate && !candidate.startsWith('-') ? candidate : null;
}
function extractRedirectionTarget(command: string): string | null {
const match = command.match(REDIRECTION_TARGET_PATTERN);
return match?.[1] ?? null;
}
function extractFilePathFromDeleteTokens(tokens: string[]): string | null {
// rm file.txt or rm /path/to/file.txt
// Skip flags and get the first non-flag argument
for (let i = 1; i < tokens.length; i++) {
const token = tokens[i];
if (token && !token.startsWith('-')) {
return token;
}
}
return null;
}
function hasSedInPlaceFlag(tokens: string[]): boolean {
return tokens.some((token) => SED_IN_PLACE_FLAGS.has(token) || token.startsWith('-i'));
}
function hasPerlInPlaceFlag(tokens: string[]): boolean {
return tokens.some((token) => PERL_IN_PLACE_FLAG.test(token));
}
function extractPatchFilePath(command: string): string | null {
for (const marker of PATCH_FILE_MARKERS) {
const index = command.indexOf(marker);
if (index < 0) continue;
const start = index + marker.length;
const end = command.indexOf('\n', start);
const rawPath = (end === -1 ? command.slice(start) : command.slice(start, end)).trim();
if (rawPath) return rawPath;
}
return null;
}
function buildInputWithFilePath(filePath: string | null): Record<string, unknown> {
return filePath ? { [INPUT_KEY_FILE_PATH]: filePath } : {};
}
function buildInputWithPattern(pattern: string | null): Record<string, unknown> {
return pattern ? { [INPUT_KEY_PATTERN]: pattern } : {};
}
export function resolveCodexToolCall(command: string): CodexToolResolution {
const normalized = stripShellWrapper(command);
const primarySegment = extractPrimarySegment(normalized);
const tokens = stripWrapperTokens(tokenizeCommand(primarySegment));
const commandToken = tokens[0]?.toLowerCase() ?? '';
const redirectionTarget = extractRedirectionTarget(primarySegment);
if (redirectionTarget) {
return {
name: TOOL_NAME_WRITE,
input: buildInputWithFilePath(redirectionTarget),
};
}
if (commandToken === APPLY_PATCH_COMMAND || APPLY_PATCH_PATTERN.test(primarySegment)) {
return {
name: TOOL_NAME_EDIT,
input: buildInputWithFilePath(extractPatchFilePath(primarySegment)),
};
}
if (commandToken === 'sed' && hasSedInPlaceFlag(tokens)) {
return {
name: TOOL_NAME_EDIT,
input: buildInputWithFilePath(extractFilePathFromTokens(tokens)),
};
}
if (commandToken === 'perl' && hasPerlInPlaceFlag(tokens)) {
return {
name: TOOL_NAME_EDIT,
input: buildInputWithFilePath(extractFilePathFromTokens(tokens)),
};
}
if (WRITE_COMMANDS.has(commandToken)) {
const filePath =
commandToken === 'tee' ? extractTeeTarget(tokens) : extractFilePathFromTokens(tokens);
return {
name: TOOL_NAME_WRITE,
input: buildInputWithFilePath(filePath),
};
}
if (SEARCH_COMMANDS.has(commandToken)) {
if (tokens.some((token) => SEARCH_FILE_LIST_FLAGS.has(token))) {
return {
name: TOOL_NAME_GLOB,
input: buildInputWithPattern(extractFilePathFromTokens(tokens)),
};
}
return {
name: TOOL_NAME_GREP,
input: buildInputWithPattern(extractSearchPattern(tokens)),
};
}
// Handle Delete commands (rm, del, erase, remove, unlink)
if (DELETE_COMMANDS.has(commandToken)) {
// Route recursive or forced deletes (-r, -rf, -f) to Bash
if (tokens.some((token) => token === '-r' || token === '-rf' || token === '-f')) {
return {
name: TOOL_NAME_BASH,
input: { [INPUT_KEY_COMMAND]: normalized },
};
}
// Simple file deletion - extract the file path
const filePath = extractFilePathFromDeleteTokens(tokens);
if (filePath) {
return {
name: TOOL_NAME_DELETE,
input: { path: filePath },
};
}
// Fall back to bash if we can't determine the file path
return {
name: TOOL_NAME_BASH,
input: { [INPUT_KEY_COMMAND]: normalized },
};
}
// Handle simple Ls commands (just listing, not find/glob)
if (LIST_COMMANDS.has(commandToken)) {
const filePath = extractFilePathFromTokens(tokens);
return {
name: TOOL_NAME_LS,
input: { path: filePath || '.' },
};
}
if (GLOB_COMMANDS.has(commandToken)) {
return {
name: TOOL_NAME_GLOB,
input: buildInputWithPattern(extractFilePathFromTokens(tokens)),
};
}
if (READ_COMMANDS.has(commandToken)) {
return {
name: TOOL_NAME_READ,
input: buildInputWithFilePath(extractFilePathFromTokens(tokens)),
};
}
return {
name: TOOL_NAME_BASH,
input: { [INPUT_KEY_COMMAND]: normalized },
};
}
function parseTodoLines(lines: string[]): CodexTodoItem[] {
const todos: CodexTodoItem[] = [];
for (const line of lines) {
const match = line.match(TODO_LINE_PATTERN);
if (!match?.groups?.content) continue;
const statusToken = match.groups.status;
const status =
statusToken === 'x'
? TODO_STATUS_COMPLETED
: statusToken === '~'
? TODO_STATUS_IN_PROGRESS
: TODO_STATUS_PENDING;
todos.push({ content: match.groups.content.trim(), status });
}
return todos;
}
function extractTodoFromArray(value: unknown[]): CodexTodoItem[] {
return value
.map((entry) => {
if (typeof entry === 'string') {
return { content: entry, status: TODO_STATUS_PENDING };
}
if (entry && typeof entry === 'object') {
const record = entry as Record<string, unknown>;
const content =
typeof record.content === 'string'
? record.content
: typeof record.text === 'string'
? record.text
: typeof record.title === 'string'
? record.title
: null;
if (!content) return null;
const status =
record.status === TODO_STATUS_COMPLETED ||
record.status === TODO_STATUS_IN_PROGRESS ||
record.status === TODO_STATUS_PENDING
? (record.status as CodexTodoItem['status'])
: TODO_STATUS_PENDING;
const activeForm = typeof record.activeForm === 'string' ? record.activeForm : undefined;
return { content, status, activeForm };
}
return null;
})
.filter((item): item is CodexTodoItem => Boolean(item));
}
export function extractCodexTodoItems(item: Record<string, unknown>): CodexTodoItem[] | null {
const todosValue = item.todos;
if (Array.isArray(todosValue)) {
const todos = extractTodoFromArray(todosValue);
return todos.length > 0 ? todos : null;
}
const itemsValue = item.items;
if (Array.isArray(itemsValue)) {
const todos = extractTodoFromArray(itemsValue);
return todos.length > 0 ? todos : null;
}
const textValue =
typeof item.text === 'string'
? item.text
: typeof item.content === 'string'
? item.content
: null;
if (!textValue) return null;
const lines = textValue
.split('\n')
.map((line) => line.trim())
.filter(Boolean);
const todos = parseTodoLines(lines);
return todos.length > 0 ? todos : null;
}
export function getCodexTodoToolName(): string {
return TOOL_NAME_TODO;
}
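
A few illustrative resolutions based on the rules above; the expected results are shown as comments:

import { resolveCodexToolCall, extractCodexTodoItems } from './codex-tool-resolver.js'; // path assumed

resolveCodexToolCall(`/bin/bash -lc "rg -n 'TODO' src"`);
// => { name: 'Grep', input: { pattern: 'TODO' } }

resolveCodexToolCall('cat src/index.ts');
// => { name: 'Read', input: { file_path: 'src/index.ts' } }

resolveCodexToolCall('rm -rf node_modules');
// => { name: 'Bash', input: { command: 'rm -rf node_modules' } } (recursive deletes stay on Bash)

extractCodexTodoItems({ text: '- [x] add tests\n- [~] wire up CI\n- update docs' });
// => [{ content: 'add tests', status: 'completed' },
//     { content: 'wire up CI', status: 'in_progress' },
//     { content: 'update docs', status: 'pending' }]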

View File

@@ -0,0 +1,197 @@
/**
* Cursor CLI Configuration Manager
*
* Manages Cursor CLI configuration stored in .automaker/cursor-config.json
*/
import * as fs from 'fs';
import * as path from 'path';
import { getAllCursorModelIds, type CursorCliConfig, type CursorModelId } from '@automaker/types';
import { createLogger } from '@automaker/utils';
import { getAutomakerDir } from '@automaker/platform';
// Create logger for this module
const logger = createLogger('CursorConfigManager');
/**
* Manages Cursor CLI configuration
* Config location: .automaker/cursor-config.json
*/
export class CursorConfigManager {
private configPath: string;
private config: CursorCliConfig;
constructor(projectPath: string) {
// Use getAutomakerDir for consistent path resolution
this.configPath = path.join(getAutomakerDir(projectPath), 'cursor-config.json');
this.config = this.loadConfig();
}
/**
* Load configuration from disk
*/
private loadConfig(): CursorCliConfig {
try {
if (fs.existsSync(this.configPath)) {
const content = fs.readFileSync(this.configPath, 'utf8');
const parsed = JSON.parse(content) as CursorCliConfig;
logger.debug(`Loaded config from ${this.configPath}`);
return parsed;
}
} catch (error) {
logger.warn('Failed to load config:', error);
}
// Return default config with all available models
return {
defaultModel: 'auto',
models: getAllCursorModelIds(),
};
}
/**
* Save configuration to disk
*/
private saveConfig(): void {
try {
const dir = path.dirname(this.configPath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
fs.writeFileSync(this.configPath, JSON.stringify(this.config, null, 2));
logger.debug('Config saved');
} catch (error) {
logger.error('Failed to save config:', error);
throw error;
}
}
/**
* Get the full configuration
*/
getConfig(): CursorCliConfig {
return { ...this.config };
}
/**
* Get the default model
*/
getDefaultModel(): CursorModelId {
return this.config.defaultModel || 'auto';
}
/**
* Set the default model
*/
setDefaultModel(model: CursorModelId): void {
this.config.defaultModel = model;
this.saveConfig();
logger.info(`Default model set to: ${model}`);
}
/**
* Get enabled models
*/
getEnabledModels(): CursorModelId[] {
return this.config.models || ['auto'];
}
/**
* Set enabled models
*/
setEnabledModels(models: CursorModelId[]): void {
this.config.models = models;
this.saveConfig();
logger.info(`Enabled models updated: ${models.join(', ')}`);
}
/**
* Add a model to enabled list
*/
addModel(model: CursorModelId): void {
if (!this.config.models) {
this.config.models = [];
}
if (!this.config.models.includes(model)) {
this.config.models.push(model);
this.saveConfig();
logger.info(`Model added: ${model}`);
}
}
/**
* Remove a model from enabled list
*/
removeModel(model: CursorModelId): void {
if (this.config.models) {
this.config.models = this.config.models.filter((m) => m !== model);
this.saveConfig();
logger.info(`Model removed: ${model}`);
}
}
/**
* Check if a model is enabled
*/
isModelEnabled(model: CursorModelId): boolean {
return this.config.models?.includes(model) ?? false;
}
/**
* Get MCP server configurations
*/
getMcpServers(): string[] {
return this.config.mcpServers || [];
}
/**
* Set MCP server configurations
*/
setMcpServers(servers: string[]): void {
this.config.mcpServers = servers;
this.saveConfig();
logger.info(`MCP servers updated: ${servers.join(', ')}`);
}
/**
* Get Cursor rules paths
*/
getRules(): string[] {
return this.config.rules || [];
}
/**
* Set Cursor rules paths
*/
setRules(rules: string[]): void {
this.config.rules = rules;
this.saveConfig();
logger.info(`Rules updated: ${rules.join(', ')}`);
}
/**
* Reset configuration to defaults
*/
reset(): void {
this.config = {
defaultModel: 'auto',
models: getAllCursorModelIds(),
};
this.saveConfig();
logger.info('Config reset to defaults');
}
/**
* Check if config file exists
*/
exists(): boolean {
return fs.existsSync(this.configPath);
}
/**
* Get the config file path
*/
getConfigPath(): string {
return this.configPath;
}
}
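
A minimal usage sketch, assuming an illustrative project path:

import { CursorConfigManager } from './cursor-config-manager.js'; // path assumed

const cursorConfig = new CursorConfigManager('/path/to/project'); // illustrative path
if (!cursorConfig.exists()) {
  cursorConfig.reset(); // writes .automaker/cursor-config.json with all models enabled
}
cursorConfig.setDefaultModel('auto');
console.log(cursorConfig.getDefaultModel(), cursorConfig.getEnabledModels().length);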

File diff suppressed because it is too large

View File

@@ -0,0 +1,32 @@
/**
* Provider exports
*/
// Base providers
export { BaseProvider } from './base-provider.js';
export {
CliProvider,
type SpawnStrategy,
type CliSpawnConfig,
type CliErrorInfo,
} from './cli-provider.js';
export type {
ProviderConfig,
ExecuteOptions,
ProviderMessage,
InstallationStatus,
ModelDefinition,
} from './types.js';
// Claude provider
export { ClaudeProvider } from './claude-provider.js';
// Cursor provider
export { CursorProvider, CursorErrorCode, CursorError } from './cursor-provider.js';
export { CursorConfigManager } from './cursor-config-manager.js';
// OpenCode provider
export { OpencodeProvider } from './opencode-provider.js';
// Provider factory
export { ProviderFactory } from './provider-factory.js';
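
A consumer-side sketch, assuming the barrel lives at ./providers/index.js relative to the importer:

import {
  ProviderFactory,
  CursorConfigManager,
  type ExecuteOptions,
  type ProviderMessage,
} from './providers/index.js'; // import path is an assumption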

View File

@@ -0,0 +1,676 @@
/**
* OpenCode Provider - Executes queries using opencode CLI
*
* Extends CliProvider with OpenCode-specific configuration:
* - Event normalization for OpenCode's stream-json format
* - Model definitions for anthropic, openai, and google models
* - NPX-based Windows execution strategy
* - Platform-specific npm global installation paths
*
* Spawns the opencode CLI with `run --format json` for streaming JSON responses.
*/
import * as path from 'path';
import * as os from 'os';
import { CliProvider, type CliSpawnConfig } from './cli-provider.js';
import type {
ProviderConfig,
ExecuteOptions,
ProviderMessage,
ModelDefinition,
InstallationStatus,
ContentBlock,
} from '@automaker/types';
import { stripProviderPrefix } from '@automaker/types';
import { type SubprocessOptions, getOpenCodeAuthIndicators } from '@automaker/platform';
// =============================================================================
// OpenCode Auth Types
// =============================================================================
export interface OpenCodeAuthStatus {
authenticated: boolean;
method: 'api_key' | 'oauth' | 'none';
hasOAuthToken?: boolean;
hasApiKey?: boolean;
}
// =============================================================================
// OpenCode Stream Event Types
// =============================================================================
/**
* Base interface for all OpenCode stream events
* OpenCode uses underscore format: step_start, step_finish, text
*/
interface OpenCodeBaseEvent {
/** Event type identifier */
type: string;
/** Timestamp of the event */
timestamp?: number;
/** Session ID */
sessionID?: string;
/** Part object containing the actual event data */
part?: Record<string, unknown>;
}
/**
* Text event - Text output from the model
* Format: {"type":"text","part":{"text":"content",...}}
*/
export interface OpenCodeTextEvent extends OpenCodeBaseEvent {
type: 'text';
part: {
type: 'text';
text: string;
[key: string]: unknown;
};
}
/**
* Tool call event - Request to execute a tool
*/
export interface OpenCodeToolCallEvent extends OpenCodeBaseEvent {
type: 'tool_call';
part: {
type: 'tool-call';
name: string;
call_id?: string;
args: unknown;
[key: string]: unknown;
};
}
/**
* Tool result event - Output from a tool execution
*/
export interface OpenCodeToolResultEvent extends OpenCodeBaseEvent {
type: 'tool_result';
part: {
type: 'tool-result';
call_id?: string;
output: string;
[key: string]: unknown;
};
}
/**
* Tool error event - Tool execution failed
*/
export interface OpenCodeToolErrorEvent extends OpenCodeBaseEvent {
type: 'tool_error';
part: {
type: 'tool-error';
call_id?: string;
error: string;
[key: string]: unknown;
};
}
/**
* Start step event - Begins an agentic loop iteration
* Format: {"type":"step_start","part":{...}}
*/
export interface OpenCodeStartStepEvent extends OpenCodeBaseEvent {
type: 'step_start';
part?: {
type: 'step-start';
[key: string]: unknown;
};
}
/**
* Finish step event - Completes an agentic loop iteration
* Format: {"type":"step_finish","part":{"reason":"stop",...}}
*/
export interface OpenCodeFinishStepEvent extends OpenCodeBaseEvent {
type: 'step_finish';
part?: {
type: 'step-finish';
reason?: string;
error?: string;
[key: string]: unknown;
};
}
/**
* Union type of all OpenCode stream events
*/
export type OpenCodeStreamEvent =
| OpenCodeTextEvent
| OpenCodeToolCallEvent
| OpenCodeToolResultEvent
| OpenCodeToolErrorEvent
| OpenCodeStartStepEvent
| OpenCodeFinishStepEvent;
// =============================================================================
// Tool Use ID Generation
// =============================================================================
/** Counter for generating unique tool use IDs when call_id is not provided */
let toolUseIdCounter = 0;
/**
* Generate a unique tool use ID for tool calls without explicit IDs
*/
function generateToolUseId(): string {
toolUseIdCounter += 1;
return `opencode-tool-${toolUseIdCounter}`;
}
/**
* Reset the tool use ID counter (useful for testing)
*/
export function resetToolUseIdCounter(): void {
toolUseIdCounter = 0;
}
// =============================================================================
// Provider Implementation
// =============================================================================
/**
* OpencodeProvider - Integrates opencode CLI as an AI provider
*
* OpenCode is an npm-distributed CLI tool that provides access to
* multiple AI model providers through a unified interface.
*/
export class OpencodeProvider extends CliProvider {
constructor(config: ProviderConfig = {}) {
super(config);
}
// ==========================================================================
// CliProvider Abstract Method Implementations
// ==========================================================================
getName(): string {
return 'opencode';
}
getCliName(): string {
return 'opencode';
}
getSpawnConfig(): CliSpawnConfig {
return {
windowsStrategy: 'npx',
npxPackage: 'opencode-ai@latest',
commonPaths: {
linux: [
path.join(os.homedir(), '.opencode/bin/opencode'),
path.join(os.homedir(), '.npm-global/bin/opencode'),
'/usr/local/bin/opencode',
'/usr/bin/opencode',
path.join(os.homedir(), '.local/bin/opencode'),
],
darwin: [
path.join(os.homedir(), '.opencode/bin/opencode'),
path.join(os.homedir(), '.npm-global/bin/opencode'),
'/usr/local/bin/opencode',
'/opt/homebrew/bin/opencode',
path.join(os.homedir(), '.local/bin/opencode'),
],
win32: [
path.join(os.homedir(), '.opencode', 'bin', 'opencode.exe'),
path.join(os.homedir(), 'AppData', 'Roaming', 'npm', 'opencode.cmd'),
path.join(os.homedir(), 'AppData', 'Roaming', 'npm', 'opencode'),
path.join(process.env.APPDATA || '', 'npm', 'opencode.cmd'),
],
},
};
}
/**
* Build CLI arguments for the `opencode run` command
*
* Arguments built:
* - 'run' subcommand for executing queries
* - '--format', 'json' for JSON streaming output
* - '--model', '<model>' for model selection (if specified)
* - Message passed via stdin (no positional args needed)
*
* The prompt is passed via stdin to avoid shell escaping issues.
* OpenCode will read from stdin when no positional message arguments are provided.
*
* @param options - Execution options containing model, cwd, etc.
* @returns Array of CLI arguments for opencode run
*/
buildCliArgs(options: ExecuteOptions): string[] {
const args: string[] = ['run'];
// Add JSON output format for streaming
args.push('--format', 'json');
// Handle model selection
// Strip 'opencode-' prefix if present, OpenCode uses native format
if (options.model) {
const model = stripProviderPrefix(options.model);
args.push('--model', model);
}
// Note: Working directory is set via subprocess cwd option, not CLI args
// Note: Message is passed via stdin, OpenCode reads from stdin automatically
return args;
}
// ==========================================================================
// Prompt Handling
// ==========================================================================
/**
* Extract prompt text from ExecuteOptions for passing via stdin
*
* Handles both string prompts and array-based prompts with content blocks.
* For array prompts with images, extracts only text content (images would
* need separate handling via file paths if OpenCode supports them).
*
* @param options - Execution options containing the prompt
* @returns Plain text prompt string
*/
private extractPromptText(options: ExecuteOptions): string {
if (typeof options.prompt === 'string') {
return options.prompt;
}
// Array-based prompt - extract text content
if (Array.isArray(options.prompt)) {
return options.prompt
.filter((block) => block.type === 'text' && block.text)
.map((block) => block.text)
.join('\n');
}
throw new Error('Invalid prompt format: expected string or content block array');
}
/**
* Build subprocess options with stdin data for prompt
*
* Extends the base class method to add stdinData containing the prompt.
* This allows passing prompts via stdin instead of CLI arguments,
* avoiding shell escaping issues with special characters.
*
* @param options - Execution options
* @param cliArgs - CLI arguments from buildCliArgs
* @returns SubprocessOptions with stdinData set
*/
protected buildSubprocessOptions(options: ExecuteOptions, cliArgs: string[]): SubprocessOptions {
const subprocessOptions = super.buildSubprocessOptions(options, cliArgs);
// Pass prompt via stdin to avoid shell interpretation of special characters
// like $(), backticks, quotes, etc. that may appear in prompts or file content
subprocessOptions.stdinData = this.extractPromptText(options);
return subprocessOptions;
}
/**
* Normalize a raw CLI event to ProviderMessage format
*
* Maps OpenCode event types to the standard ProviderMessage structure:
* - text -> type: 'assistant', content with type: 'text'
* - step_start -> null (informational, no message needed)
* - step_finish -> type: 'result', subtype: 'success' (or error if failed)
* - tool_call -> type: 'assistant', content with type: 'tool_use'
* - tool_result -> type: 'assistant', content with type: 'tool_result'
* - tool_error -> type: 'error'
*
* @param event - Raw event from OpenCode CLI JSONL output
* @returns Normalized ProviderMessage or null to skip the event
*/
normalizeEvent(event: unknown): ProviderMessage | null {
if (!event || typeof event !== 'object') {
return null;
}
const openCodeEvent = event as OpenCodeStreamEvent;
switch (openCodeEvent.type) {
case 'text': {
const textEvent = openCodeEvent as OpenCodeTextEvent;
// Skip if no text content
if (!textEvent.part?.text) {
return null;
}
const content: ContentBlock[] = [
{
type: 'text',
text: textEvent.part.text,
},
];
return {
type: 'assistant',
session_id: textEvent.sessionID,
message: {
role: 'assistant',
content,
},
};
}
case 'step_start': {
// Start step is informational - no message needed
return null;
}
case 'step_finish': {
const finishEvent = openCodeEvent as OpenCodeFinishStepEvent;
// Check if the step failed (either has error field or reason is 'error')
if (finishEvent.part?.error || finishEvent.part?.reason === 'error') {
return {
type: 'error',
session_id: finishEvent.sessionID,
error: finishEvent.part?.error || 'Step execution failed',
};
}
// Successful completion
const result: { type: 'result'; subtype: 'success'; session_id?: string; result?: string } =
{
type: 'result',
subtype: 'success',
};
if (finishEvent.sessionID) {
result.session_id = finishEvent.sessionID;
}
// Safely handle arbitrary result payloads from CLI: ensure we assign a string.
const rawResult =
(finishEvent.part && (finishEvent.part as Record<string, unknown>).result) ?? undefined;
if (rawResult !== undefined) {
result.result = typeof rawResult === 'string' ? rawResult : JSON.stringify(rawResult);
}
return result;
}
case 'tool_call': {
const toolEvent = openCodeEvent as OpenCodeToolCallEvent;
if (!toolEvent.part) {
return null;
}
// Generate a tool use ID if not provided
const toolUseId = toolEvent.part.call_id || generateToolUseId();
const content: ContentBlock[] = [
{
type: 'tool_use',
name: toolEvent.part.name,
tool_use_id: toolUseId,
input: toolEvent.part.args,
},
];
return {
type: 'assistant',
session_id: toolEvent.sessionID,
message: {
role: 'assistant',
content,
},
};
}
case 'tool_result': {
const resultEvent = openCodeEvent as OpenCodeToolResultEvent;
if (!resultEvent.part) {
return null;
}
const content: ContentBlock[] = [
{
type: 'tool_result',
tool_use_id: resultEvent.part.call_id,
content: resultEvent.part.output,
},
];
return {
type: 'assistant',
session_id: resultEvent.sessionID,
message: {
role: 'assistant',
content,
},
};
}
case 'tool_error': {
const errorEvent = openCodeEvent as OpenCodeToolErrorEvent;
return {
type: 'error',
session_id: errorEvent.sessionID,
error: errorEvent.part?.error || 'Tool execution failed',
};
}
default: {
// Unknown event type - skip it
return null;
}
}
}
// ==========================================================================
// Model Configuration
// ==========================================================================
/**
* Get available models for OpenCode
*
* Returns model definitions for the supported backends:
* - OpenCode free tier models (Big Pickle, GPT-5 Nano, Grok Code)
* - Amazon Bedrock Claude models (Sonnet, Opus, Haiku)
* - Amazon Bedrock DeepSeek R1, Amazon Nova Pro, Llama 4 Maverick, and Qwen3 Coder
*/
getAvailableModels(): ModelDefinition[] {
return [
// OpenCode Free Tier Models
{
id: 'opencode/big-pickle',
name: 'Big Pickle (Free)',
modelString: 'opencode/big-pickle',
provider: 'opencode',
description: 'OpenCode free tier model - great for general coding',
supportsTools: true,
supportsVision: false,
tier: 'basic',
},
{
id: 'opencode/gpt-5-nano',
name: 'GPT-5 Nano (Free)',
modelString: 'opencode/gpt-5-nano',
provider: 'opencode',
description: 'Fast and lightweight free tier model',
supportsTools: true,
supportsVision: false,
tier: 'basic',
},
{
id: 'opencode/grok-code',
name: 'Grok Code (Free)',
modelString: 'opencode/grok-code',
provider: 'opencode',
description: 'OpenCode free tier Grok model for coding',
supportsTools: true,
supportsVision: false,
tier: 'basic',
},
// Amazon Bedrock - Claude Models
{
id: 'amazon-bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0',
name: 'Claude Sonnet 4.5 (Bedrock)',
modelString: 'amazon-bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0',
provider: 'opencode',
description: 'Latest Claude Sonnet via AWS Bedrock - fast and intelligent',
supportsTools: true,
supportsVision: true,
tier: 'premium',
default: true,
},
{
id: 'amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0',
name: 'Claude Opus 4.5 (Bedrock)',
modelString: 'amazon-bedrock/anthropic.claude-opus-4-5-20251101-v1:0',
provider: 'opencode',
description: 'Most capable Claude model via AWS Bedrock',
supportsTools: true,
supportsVision: true,
tier: 'premium',
},
{
id: 'amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0',
name: 'Claude Haiku 4.5 (Bedrock)',
modelString: 'amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0',
provider: 'opencode',
description: 'Fastest Claude model via AWS Bedrock',
supportsTools: true,
supportsVision: true,
tier: 'standard',
},
// Amazon Bedrock - DeepSeek Models
{
id: 'amazon-bedrock/deepseek.r1-v1:0',
name: 'DeepSeek R1 (Bedrock)',
modelString: 'amazon-bedrock/deepseek.r1-v1:0',
provider: 'opencode',
description: 'DeepSeek R1 reasoning model - excellent for coding',
supportsTools: true,
supportsVision: false,
tier: 'premium',
},
// Amazon Bedrock - Amazon Nova Models
{
id: 'amazon-bedrock/amazon.nova-pro-v1:0',
name: 'Amazon Nova Pro (Bedrock)',
modelString: 'amazon-bedrock/amazon.nova-pro-v1:0',
provider: 'opencode',
description: 'Amazon Nova Pro - balanced performance',
supportsTools: true,
supportsVision: true,
tier: 'standard',
},
// Amazon Bedrock - Meta Llama Models
{
id: 'amazon-bedrock/meta.llama4-maverick-17b-instruct-v1:0',
name: 'Llama 4 Maverick 17B (Bedrock)',
modelString: 'amazon-bedrock/meta.llama4-maverick-17b-instruct-v1:0',
provider: 'opencode',
description: 'Meta Llama 4 Maverick via AWS Bedrock',
supportsTools: true,
supportsVision: false,
tier: 'standard',
},
// Amazon Bedrock - Qwen Models
{
id: 'amazon-bedrock/qwen.qwen3-coder-480b-a35b-v1:0',
name: 'Qwen3 Coder 480B (Bedrock)',
modelString: 'amazon-bedrock/qwen.qwen3-coder-480b-a35b-v1:0',
provider: 'opencode',
description: 'Qwen3 Coder 480B - excellent for coding',
supportsTools: true,
supportsVision: false,
tier: 'premium',
},
];
}
// ==========================================================================
// Feature Support
// ==========================================================================
/**
* Check if a feature is supported by OpenCode
*
* Supported features:
* - tools: Function calling / tool use
* - text: Text generation
* - vision: Image understanding
*/
supportsFeature(feature: string): boolean {
const supportedFeatures = ['tools', 'text', 'vision'];
return supportedFeatures.includes(feature);
}
// ==========================================================================
// Authentication
// ==========================================================================
/**
* Check authentication status for OpenCode CLI
*
* Checks for authentication via:
* - OAuth token in auth file
* - API key in auth file
*/
async checkAuth(): Promise<OpenCodeAuthStatus> {
const authIndicators = await getOpenCodeAuthIndicators();
// Check for OAuth token
if (authIndicators.hasOAuthToken) {
return {
authenticated: true,
method: 'oauth',
hasOAuthToken: true,
hasApiKey: authIndicators.hasApiKey,
};
}
// Check for API key
if (authIndicators.hasApiKey) {
return {
authenticated: true,
method: 'api_key',
hasOAuthToken: false,
hasApiKey: true,
};
}
return {
authenticated: false,
method: 'none',
hasOAuthToken: false,
hasApiKey: false,
};
}
// ==========================================================================
// Installation Detection
// ==========================================================================
/**
* Detect OpenCode installation status
*
* Checks if the opencode CLI is available either through:
* - Direct installation (npm global)
* - NPX (fallback on Windows)
* Also checks authentication status.
*/
async detectInstallation(): Promise<InstallationStatus> {
this.ensureCliDetected();
const installed = await this.isInstalled();
const auth = await this.checkAuth();
return {
installed,
path: this.cliPath || undefined,
method: this.detectedStrategy === 'npx' ? 'npm' : 'cli',
authenticated: auth.authenticated,
hasApiKey: auth.hasApiKey,
hasOAuthToken: auth.hasOAuthToken,
};
}
}
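
Illustrative normalization examples; the event shapes follow the mapping above, while the tool name and session ID are assumptions:

import { OpencodeProvider } from './opencode-provider.js'; // path assumed

const provider = new OpencodeProvider();

provider.normalizeEvent({
  type: 'text',
  sessionID: 'ses_123',
  part: { type: 'text', text: 'Hello from opencode' },
});
// => assistant message with one { type: 'text', text: 'Hello from opencode' } block

provider.normalizeEvent({
  type: 'tool_call',
  sessionID: 'ses_123',
  part: { type: 'tool-call', name: 'read_file', call_id: 'call_1', args: { path: 'README.md' } },
});
// => assistant message with one tool_use block (tool_use_id 'call_1')

provider.normalizeEvent({ type: 'step_finish', sessionID: 'ses_123', part: { reason: 'stop' } });
// => { type: 'result', subtype: 'success', session_id: 'ses_123' }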

View File

@@ -1,51 +1,168 @@
/**
* Provider Factory - Routes model IDs to the appropriate provider
*
* This factory implements model-based routing to automatically select
* the correct provider based on the model string. This makes adding
* new providers (Cursor, OpenCode, etc.) trivial - just add one line.
* Uses a registry pattern for dynamic provider registration.
* Providers register themselves on import, making it easy to add new providers.
*/
import { BaseProvider } from './base-provider.js';
import { ClaudeProvider } from './claude-provider.js';
import type { InstallationStatus } from './types.js';
import type { InstallationStatus, ModelDefinition } from './types.js';
import { isCursorModel, isCodexModel, isOpencodeModel, type ModelProvider } from '@automaker/types';
import * as fs from 'fs';
import * as path from 'path';
const DISCONNECTED_MARKERS: Record<string, string> = {
claude: '.claude-disconnected',
codex: '.codex-disconnected',
cursor: '.cursor-disconnected',
opencode: '.opencode-disconnected',
};
/**
* Check if a provider CLI is disconnected from the app
*/
export function isProviderDisconnected(providerName: string): boolean {
const markerFile = DISCONNECTED_MARKERS[providerName.toLowerCase()];
if (!markerFile) return false;
const markerPath = path.join(process.cwd(), '.automaker', markerFile);
return fs.existsSync(markerPath);
}
/**
* Provider registration entry
*/
interface ProviderRegistration {
/** Factory function to create provider instance */
factory: () => BaseProvider;
/** Aliases for this provider (e.g., 'anthropic' for 'claude') */
aliases?: string[];
/** Function to check if this provider can handle a model ID */
canHandleModel?: (modelId: string) => boolean;
/** Priority for model matching (higher = checked first) */
priority?: number;
}
/**
* Provider registry - stores registered providers
*/
const providerRegistry = new Map<string, ProviderRegistration>();
/**
* Register a provider with the factory
*
* @param name Provider name (e.g., 'claude', 'cursor')
* @param registration Provider registration config
*/
export function registerProvider(name: string, registration: ProviderRegistration): void {
providerRegistry.set(name.toLowerCase(), registration);
}
export class ProviderFactory {
/**
* Get the appropriate provider for a given model ID
* Determine which provider to use for a given model
*
* @param modelId Model identifier (e.g., "claude-opus-4-5-20251101", "gpt-5.2", "cursor-fast")
* @returns Provider instance for the model
* @param model Model identifier
* @returns Provider name (ModelProvider type)
*/
static getProviderForModel(modelId: string): BaseProvider {
const lowerModel = modelId.toLowerCase();
static getProviderNameForModel(model: string): ModelProvider {
const lowerModel = model.toLowerCase();
// Claude models (claude-*, opus, sonnet, haiku)
if (lowerModel.startsWith('claude-') || ['haiku', 'sonnet', 'opus'].includes(lowerModel)) {
return new ClaudeProvider();
// Get all registered providers sorted by priority (descending)
const registrations = Array.from(providerRegistry.entries()).sort(
([, a], [, b]) => (b.priority ?? 0) - (a.priority ?? 0)
);
// Check each provider's canHandleModel function
for (const [name, reg] of registrations) {
if (reg.canHandleModel?.(lowerModel)) {
return name as ModelProvider;
}
}
// Future providers:
// if (lowerModel.startsWith("cursor-")) {
// return new CursorProvider();
// }
// if (lowerModel.startsWith("opencode-")) {
// return new OpenCodeProvider();
// }
// Fallback: Check for explicit prefixes
for (const [name] of registrations) {
if (lowerModel.startsWith(`${name}-`)) {
return name as ModelProvider;
}
}
// Default to Claude for unknown models
console.warn(`[ProviderFactory] Unknown model prefix for "${modelId}", defaulting to Claude`);
return new ClaudeProvider();
// Default to claude (first registered provider or claude)
return 'claude';
}
/**
* Get the appropriate provider for a given model ID
*
* @param modelId Model identifier (e.g., "claude-opus-4-5-20251101", "cursor-gpt-4o", "cursor-auto")
* @param options Optional settings
* @param options.throwOnDisconnected Throw error if provider is disconnected (default: true)
* @returns Provider instance for the model
* @throws Error if provider is disconnected and throwOnDisconnected is true
*/
static getProviderForModel(
modelId: string,
options: { throwOnDisconnected?: boolean } = {}
): BaseProvider {
const { throwOnDisconnected = true } = options;
const providerName = this.getProviderForModelName(modelId);
// Check if provider is disconnected
if (throwOnDisconnected && isProviderDisconnected(providerName)) {
throw new Error(
`${providerName.charAt(0).toUpperCase() + providerName.slice(1)} CLI is disconnected from the app. ` +
`Please go to Settings > Providers and click "Sign In" to reconnect.`
);
}
const provider = this.getProviderByName(providerName);
if (!provider) {
// Fallback to claude if provider not found
const claudeReg = providerRegistry.get('claude');
if (claudeReg) {
return claudeReg.factory();
}
throw new Error(`No provider found for model: ${modelId}`);
}
return provider;
}
/**
* Get the provider name for a given model ID (without creating provider instance)
*/
static getProviderForModelName(modelId: string): string {
const lowerModel = modelId.toLowerCase();
// Get all registered providers sorted by priority (descending)
const registrations = Array.from(providerRegistry.entries()).sort(
([, a], [, b]) => (b.priority ?? 0) - (a.priority ?? 0)
);
// Check each provider's canHandleModel function
for (const [name, reg] of registrations) {
if (reg.canHandleModel?.(lowerModel)) {
return name;
}
}
// Fallback: Check for explicit prefixes
for (const [name] of registrations) {
if (lowerModel.startsWith(`${name}-`)) {
return name;
}
}
// Default to claude (first registered provider or claude)
return 'claude';
}
/**
* Get all available providers
*/
static getAllProviders(): BaseProvider[] {
return [
new ClaudeProvider(),
// Future providers...
];
return Array.from(providerRegistry.values()).map((reg) => reg.factory());
}
/**
@@ -54,11 +171,10 @@ export class ProviderFactory {
* @returns Map of provider name to installation status
*/
static async checkAllProviders(): Promise<Record<string, InstallationStatus>> {
const providers = this.getAllProviders();
const statuses: Record<string, InstallationStatus> = {};
for (const provider of providers) {
const name = provider.getName();
for (const [name, reg] of providerRegistry.entries()) {
const provider = reg.factory();
const status = await provider.detectInstallation();
statuses[name] = status;
}
@@ -69,40 +185,119 @@ export class ProviderFactory {
/**
* Get provider by name (for direct access if needed)
*
* @param name Provider name (e.g., "claude", "cursor")
* @param name Provider name (e.g., "claude", "cursor") or alias (e.g., "anthropic")
* @returns Provider instance or null if not found
*/
static getProviderByName(name: string): BaseProvider | null {
const lowerName = name.toLowerCase();
switch (lowerName) {
case 'claude':
case 'anthropic':
return new ClaudeProvider();
// Future providers:
// case "cursor":
// return new CursorProvider();
// case "opencode":
// return new OpenCodeProvider();
default:
return null;
// Direct lookup
const directReg = providerRegistry.get(lowerName);
if (directReg) {
return directReg.factory();
}
// Check aliases
for (const [, reg] of providerRegistry.entries()) {
if (reg.aliases?.includes(lowerName)) {
return reg.factory();
}
}
return null;
}
/**
* Get all available models from all providers
*/
static getAllAvailableModels() {
static getAllAvailableModels(): ModelDefinition[] {
const providers = this.getAllProviders();
const allModels = [];
return providers.flatMap((p) => p.getAvailableModels());
}
for (const provider of providers) {
const models = provider.getAvailableModels();
allModels.push(...models);
/**
* Get list of registered provider names
*/
static getRegisteredProviderNames(): string[] {
return Array.from(providerRegistry.keys());
}
/**
* Check if a specific model supports vision/image input
*
* @param modelId Model identifier
* @returns Whether the model supports vision (defaults to true if model not found)
*/
static modelSupportsVision(modelId: string): boolean {
const provider = this.getProviderForModel(modelId);
const models = provider.getAvailableModels();
// Find the model in the available models list
for (const model of models) {
if (
model.id === modelId ||
model.modelString === modelId ||
model.id.endsWith(`-${modelId}`) ||
model.modelString.endsWith(`-${modelId}`) ||
model.modelString === modelId.replace(/^(claude|cursor|codex)-/, '') ||
model.modelString === modelId.replace(/-(claude|cursor|codex)$/, '')
) {
return model.supportsVision ?? true;
}
}
return allModels;
// Also try exact match with model string from provider's model map
for (const model of models) {
if (model.modelString === modelId || model.id === modelId) {
return model.supportsVision ?? true;
}
}
// Default to true (Claude SDK supports vision by default)
return true;
}
}
// =============================================================================
// Provider Registrations
// =============================================================================
// Import providers for registration side-effects
import { ClaudeProvider } from './claude-provider.js';
import { CursorProvider } from './cursor-provider.js';
import { CodexProvider } from './codex-provider.js';
import { OpencodeProvider } from './opencode-provider.js';
// Register Claude provider
registerProvider('claude', {
factory: () => new ClaudeProvider(),
aliases: ['anthropic'],
canHandleModel: (model: string) => {
return (
model.startsWith('claude-') || ['opus', 'sonnet', 'haiku'].some((n) => model.includes(n))
);
},
priority: 0, // Default priority
});
// Register Cursor provider
registerProvider('cursor', {
factory: () => new CursorProvider(),
canHandleModel: (model: string) => isCursorModel(model),
priority: 10, // Higher priority - check Cursor models first
});
// Register Codex provider
registerProvider('codex', {
factory: () => new CodexProvider(),
aliases: ['openai'],
canHandleModel: (model: string) => isCodexModel(model),
priority: 5, // Medium priority - check after Cursor but before Claude
});
// Register OpenCode provider
registerProvider('opencode', {
factory: () => new OpencodeProvider(),
canHandleModel: (model: string) => isOpencodeModel(model),
priority: 3, // Between codex (5) and claude (0)
});
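
Hedged routing examples, assuming isCursorModel, isCodexModel, and isOpencodeModel (defined in @automaker/types, not shown here) match the IDs used below:

import { ProviderFactory } from './provider-factory.js'; // path assumed

ProviderFactory.getProviderForModelName('cursor-auto');              // 'cursor'
ProviderFactory.getProviderForModelName('gpt-5.2-codex');            // 'codex' (if isCodexModel matches)
ProviderFactory.getProviderForModelName('opencode/big-pickle');      // 'opencode' (if isOpencodeModel matches)
ProviderFactory.getProviderForModelName('claude-opus-4-5-20251101'); // 'claude'
ProviderFactory.getProviderForModelName('totally-unknown-model');    // 'claude' (default)

// getProviderForModel() additionally throws when the resolved provider has a
// .automaker/.<provider>-disconnected marker, unless throwOnDisconnected is false.
const provider = ProviderFactory.getProviderForModel('claude-sonnet-4-5', {
  throwOnDisconnected: false,
});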

View File

@@ -1,104 +1,22 @@
/**
* Shared types for AI model providers
*
* Re-exports types from @automaker/types for consistency across the codebase.
* All provider types are defined in @automaker/types to avoid duplication.
*/
/**
* Configuration for a provider instance
*/
export interface ProviderConfig {
apiKey?: string;
cliPath?: string;
env?: Record<string, string>;
}
/**
* Message in conversation history
*/
export interface ConversationMessage {
role: 'user' | 'assistant';
content: string | Array<{ type: string; text?: string; source?: object }>;
}
/**
* Options for executing a query via a provider
*/
export interface ExecuteOptions {
prompt: string | Array<{ type: string; text?: string; source?: object }>;
model: string;
cwd: string;
systemPrompt?: string;
maxTurns?: number;
allowedTools?: string[];
mcpServers?: Record<string, unknown>;
abortController?: AbortController;
conversationHistory?: ConversationMessage[]; // Previous messages for context
sdkSessionId?: string; // Claude SDK session ID for resuming conversations
}
/**
* Content block in a provider message (matches Claude SDK format)
*/
export interface ContentBlock {
type: 'text' | 'tool_use' | 'thinking' | 'tool_result';
text?: string;
thinking?: string;
name?: string;
input?: unknown;
tool_use_id?: string;
content?: string;
}
/**
* Message returned by a provider (matches Claude SDK streaming format)
*/
export interface ProviderMessage {
type: 'assistant' | 'user' | 'error' | 'result';
subtype?: 'success' | 'error';
session_id?: string;
message?: {
role: 'user' | 'assistant';
content: ContentBlock[];
};
result?: string;
error?: string;
parent_tool_use_id?: string | null;
}
/**
* Installation status for a provider
*/
export interface InstallationStatus {
installed: boolean;
path?: string;
version?: string;
method?: 'cli' | 'npm' | 'brew' | 'sdk';
hasApiKey?: boolean;
authenticated?: boolean;
error?: string;
}
/**
* Validation result
*/
export interface ValidationResult {
valid: boolean;
errors: string[];
warnings?: string[];
}
/**
* Model definition
*/
export interface ModelDefinition {
id: string;
name: string;
modelString: string;
provider: string;
description: string;
contextWindow?: number;
maxOutputTokens?: number;
supportsVision?: boolean;
supportsTools?: boolean;
tier?: 'basic' | 'standard' | 'premium';
default?: boolean;
}
// Re-export all provider types from @automaker/types
export type {
ProviderConfig,
ConversationMessage,
ExecuteOptions,
McpServerConfig,
McpStdioServerConfig,
McpSSEServerConfig,
McpHttpServerConfig,
ContentBlock,
ProviderMessage,
InstallationStatus,
ValidationResult,
ModelDefinition,
} from '@automaker/types';
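
A shape sketch using the re-exported types, assuming the @automaker/types definitions remain compatible with the interfaces that previously lived in this file:

import type { ExecuteOptions, ProviderMessage } from '@automaker/types';

// Field values are illustrative only.
const options: ExecuteOptions = {
  prompt: 'Explain the provider factory routing',
  model: 'claude-sonnet-4-5',
  cwd: process.cwd(),
  maxTurns: 4,
  conversationHistory: [{ role: 'user', content: 'Hi there' }],
};

const done: ProviderMessage = { type: 'result', subtype: 'success', result: 'ok' };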

View File

@@ -12,6 +12,10 @@ import { createHistoryHandler } from './routes/history.js';
import { createStopHandler } from './routes/stop.js';
import { createClearHandler } from './routes/clear.js';
import { createModelHandler } from './routes/model.js';
import { createQueueAddHandler } from './routes/queue-add.js';
import { createQueueListHandler } from './routes/queue-list.js';
import { createQueueRemoveHandler } from './routes/queue-remove.js';
import { createQueueClearHandler } from './routes/queue-clear.js';
export function createAgentRoutes(agentService: AgentService, _events: EventEmitter): Router {
const router = Router();
@@ -27,5 +31,15 @@ export function createAgentRoutes(agentService: AgentService, _events: EventEmit
router.post('/clear', createClearHandler(agentService));
router.post('/model', createModelHandler(agentService));
// Queue routes
router.post(
'/queue/add',
validatePathParams('imagePaths[]'),
createQueueAddHandler(agentService)
);
router.post('/queue/list', createQueueListHandler(agentService));
router.post('/queue/remove', createQueueRemoveHandler(agentService));
router.post('/queue/clear', createQueueClearHandler(agentService));
return router;
}
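A hedged client-side sketch of the new queue endpoints (the /api/agent mount path and base URL are assumptions; the request bodies match the handlers shown below):

// Illustrative calls only - run inside an async function.
const base = 'http://localhost:3000/api/agent'; // assumed mount point
const post = (path: string, body: unknown) =>
  fetch(`${base}${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  }).then((r) => r.json());

await post('/queue/add', { sessionId: 'session-1', message: 'Refactor the board view' });
await post('/queue/list', { sessionId: 'session-1' });
await post('/queue/remove', { sessionId: 'session-1', promptId: 'prompt-123' });
await post('/queue/clear', { sessionId: 'session-1' });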

View File

@@ -0,0 +1,41 @@
/**
* POST /queue/add endpoint - Add a prompt to the queue
*/
import type { Request, Response } from 'express';
import type { ThinkingLevel } from '@automaker/types';
import { AgentService } from '../../../services/agent-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createQueueAddHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sessionId, message, imagePaths, model, thinkingLevel } = req.body as {
sessionId: string;
message: string;
imagePaths?: string[];
model?: string;
thinkingLevel?: ThinkingLevel;
};
if (!sessionId || !message) {
res.status(400).json({
success: false,
error: 'sessionId and message are required',
});
return;
}
const result = await agentService.addToQueue(sessionId, {
message,
imagePaths,
model,
thinkingLevel,
});
res.json(result);
} catch (error) {
logError(error, 'Add to queue failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,29 @@
/**
* POST /queue/clear endpoint - Clear all prompts from the queue
*/
import type { Request, Response } from 'express';
import { AgentService } from '../../../services/agent-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createQueueClearHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sessionId } = req.body as { sessionId: string };
if (!sessionId) {
res.status(400).json({
success: false,
error: 'sessionId is required',
});
return;
}
const result = await agentService.clearQueue(sessionId);
res.json(result);
} catch (error) {
logError(error, 'Clear queue failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,29 @@
/**
* POST /queue/list endpoint - List queued prompts
*/
import type { Request, Response } from 'express';
import { AgentService } from '../../../services/agent-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createQueueListHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sessionId } = req.body as { sessionId: string };
if (!sessionId) {
res.status(400).json({
success: false,
error: 'sessionId is required',
});
return;
}
const result = agentService.getQueue(sessionId);
res.json(result);
} catch (error) {
logError(error, 'List queue failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,32 @@
/**
* POST /queue/remove endpoint - Remove a prompt from the queue
*/
import type { Request, Response } from 'express';
import { AgentService } from '../../../services/agent-service.js';
import { getErrorMessage, logError } from '../common.js';
export function createQueueRemoveHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sessionId, promptId } = req.body as {
sessionId: string;
promptId: string;
};
if (!sessionId || !promptId) {
res.status(400).json({
success: false,
error: 'sessionId and promptId are required',
});
return;
}
const result = await agentService.removeFromQueue(sessionId, promptId);
res.json(result);
} catch (error) {
logError(error, 'Remove from queue failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -3,6 +3,7 @@
*/
import type { Request, Response } from 'express';
import type { ThinkingLevel } from '@automaker/types';
import { AgentService } from '../../../services/agent-service.js';
import { createLogger } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js';
@@ -11,15 +12,27 @@ const logger = createLogger('Agent');
export function createSendHandler(agentService: AgentService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { sessionId, message, workingDirectory, imagePaths, model } = req.body as {
sessionId: string;
message: string;
workingDirectory?: string;
imagePaths?: string[];
model?: string;
};
const { sessionId, message, workingDirectory, imagePaths, model, thinkingLevel } =
req.body as {
sessionId: string;
message: string;
workingDirectory?: string;
imagePaths?: string[];
model?: string;
thinkingLevel?: ThinkingLevel;
};
logger.debug('Received request:', {
sessionId,
messageLength: message?.length,
workingDirectory,
imageCount: imagePaths?.length || 0,
model,
thinkingLevel,
});
if (!sessionId || !message) {
logger.warn('Validation failed - missing sessionId or message');
res.status(400).json({
success: false,
error: 'sessionId and message are required',
@@ -27,6 +40,8 @@ export function createSendHandler(agentService: AgentService) {
return;
}
logger.debug('Validation passed, calling agentService.sendMessage()');
// Start the message processing (don't await - it streams via WebSocket)
agentService
.sendMessage({
@@ -35,14 +50,19 @@ export function createSendHandler(agentService: AgentService) {
workingDirectory,
imagePaths,
model,
thinkingLevel,
})
.catch((error) => {
logger.error('Background error in sendMessage():', error);
logError(error, 'Send message failed (background)');
});
logger.debug('Returning immediate response to client');
// Return immediately - responses come via WebSocket
res.json({ success: true, message: 'Message sent' });
} catch (error) {
logger.error('Synchronous error:', error);
logError(error, 'Send message failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
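The handler above is fire-and-forget: it validates, kicks off sendMessage without awaiting, and returns immediately while output streams over the WebSocket. A hedged client sketch (mount path and model value are placeholders; the allowed thinkingLevel values are defined in @automaker/types):

// Illustrative only - run inside an async function.
await fetch('http://localhost:3000/api/agent/send', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    sessionId: 'session-1',
    message: 'Summarize the open features',
    model: 'claude-sonnet',   // optional, placeholder id
    thinkingLevel: 'high',    // optional ThinkingLevel, exact union defined in @automaker/types
  }),
});
// -> { success: true, message: 'Message sent' } while the agent keeps streaming via WebSocket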

View File

@@ -6,26 +6,57 @@ import { createLogger } from '@automaker/utils';
const logger = createLogger('SpecRegeneration');
// Shared state for tracking generation status - private
let isRunning = false;
let currentAbortController: AbortController | null = null;
// Shared state for tracking generation status - scoped by project path
const runningProjects = new Map<string, boolean>();
const abortControllers = new Map<string, AbortController>();
/**
* Get the current running state
* Get the running state for a specific project
*/
export function getSpecRegenerationStatus(): {
export function getSpecRegenerationStatus(projectPath?: string): {
isRunning: boolean;
currentAbortController: AbortController | null;
projectPath?: string;
} {
return { isRunning, currentAbortController };
if (projectPath) {
return {
isRunning: runningProjects.get(projectPath) || false,
currentAbortController: abortControllers.get(projectPath) || null,
projectPath,
};
}
// Fallback: check if any project is running (for backward compatibility)
const isAnyRunning = Array.from(runningProjects.values()).some((running) => running);
return { isRunning: isAnyRunning, currentAbortController: null };
}
/**
* Set the running state and abort controller
* Get the project path that is currently running (if any)
*/
export function setRunningState(running: boolean, controller: AbortController | null = null): void {
isRunning = running;
currentAbortController = controller;
export function getRunningProjectPath(): string | null {
for (const [path, running] of runningProjects.entries()) {
if (running) return path;
}
return null;
}
/**
* Set the running state and abort controller for a specific project
*/
export function setRunningState(
projectPath: string,
running: boolean,
controller: AbortController | null = null
): void {
if (running) {
runningProjects.set(projectPath, true);
if (controller) {
abortControllers.set(projectPath, controller);
}
} else {
runningProjects.delete(projectPath);
abortControllers.delete(projectPath);
}
}
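A short usage sketch of the per-project state above (paths are placeholders):

const controllerA = new AbortController();
setRunningState('/repos/project-a', true, controllerA);

getSpecRegenerationStatus('/repos/project-a').isRunning; // true
getSpecRegenerationStatus('/repos/project-b').isRunning; // false - other projects are independent
getSpecRegenerationStatus().isRunning;                   // true - legacy callers see "any project running"

setRunningState('/repos/project-a', false); // removes both the flag and the stored controller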
/**

View File

@@ -1,15 +1,23 @@
/**
* Generate features from existing app_spec.txt
*
* Model is configurable via phaseModels.featureGenerationModel in settings
* (defaults to Sonnet for balanced speed and quality).
*/
import { query } from '@anthropic-ai/claude-agent-sdk';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createFeatureGenerationOptions } from '../../lib/sdk-options.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { logAuthStatus } from './common.js';
import { parseAndCreateFeatures } from './parse-and-create-features.js';
import { getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
const logger = createLogger('SpecRegeneration');
@@ -19,7 +27,8 @@ export async function generateFeaturesFromSpec(
projectPath: string,
events: EventEmitter,
abortController: AbortController,
maxFeatures?: number
maxFeatures?: number,
settingsService?: SettingsService
): Promise<void> {
const featureCount = maxFeatures ?? DEFAULT_MAX_FEATURES;
logger.debug('========== generateFeaturesFromSpec() started ==========');
@@ -91,42 +100,55 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
projectPath: projectPath,
});
const options = createFeatureGenerationOptions({
cwd: projectPath,
abortController,
});
// Load autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[FeatureGeneration]'
);
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.info('Calling Claude Agent SDK query() for features...');
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.featureGenerationModel || DEFAULT_PHASE_MODELS.featureGenerationModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logAuthStatus('Right before SDK query() for features');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
logger.info('Using model:', model);
let responseText = '';
let messageCount = 0;
logger.debug('Starting to iterate over feature stream...');
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info('[FeatureGeneration] Using Cursor provider');
try {
for await (const msg of stream) {
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Add explicit instructions for Cursor to return JSON in response
const cursorPrompt = `${prompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
2. Respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
3. Your entire response should be valid JSON starting with { and ending with }. No text before or after.`;
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
readOnly: true, // Feature generation only reads code, doesn't write
})) {
messageCount++;
logger.debug(
`Feature stream message #${messageCount}:`,
JSON.stringify({ type: msg.type, subtype: (msg as any).subtype }, null, 2)
);
if (msg.type === 'assistant' && msg.message.content) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
if (block.type === 'text' && block.text) {
responseText += block.text;
logger.debug(`Feature text block received (${block.text.length} chars)`);
events.emit('spec-regeneration:event', {
@@ -136,18 +158,75 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
});
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.debug('Received success result for features');
responseText = (msg as any).result || responseText;
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from feature stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
}
}
} catch (streamError) {
logger.error('❌ Error while iterating feature stream:');
logger.error('Stream error:', streamError);
throw streamError;
} else {
// Use Claude SDK for Claude models
logger.info('[FeatureGeneration] Using Claude SDK');
const options = createFeatureGenerationOptions({
cwd: projectPath,
abortController,
autoLoadClaudeMd,
model,
thinkingLevel, // Pass thinking level for extended thinking
});
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.info('Calling Claude Agent SDK query() for features...');
logAuthStatus('Right before SDK query() for features');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
logger.debug('Starting to iterate over feature stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.debug(
`Feature stream message #${messageCount}:`,
JSON.stringify({ type: msg.type, subtype: (msg as any).subtype }, null, 2)
);
if (msg.type === 'assistant' && msg.message.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.debug(`Feature text block received (${block.text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.debug('Received success result for features');
responseText = (msg as any).result || responseText;
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from feature stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
}
}
} catch (streamError) {
logger.error('❌ Error while iterating feature stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
}
logger.info(`Feature stream complete. Total messages: ${messageCount}`);
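In short, the routing above resolves the phase model first and then branches on the model family. A condensed sketch (the provider-prefixed model format, e.g. a cursor/ prefix, is an assumption based on stripProviderPrefix):

const { model, thinkingLevel } = resolvePhaseModel(
  settings?.phaseModels?.featureGenerationModel ?? DEFAULT_PHASE_MODELS.featureGenerationModel
);

if (isCursorModel(model)) {
  // Cursor path: bare model id, read-only tools, JSON returned in the response text.
  const provider = ProviderFactory.getProviderForModel(model);
  const bareModel = stripProviderPrefix(model); // e.g. 'cursor/foo' -> 'foo' (format assumed)
  // provider.executeQuery({ model: bareModel, readOnly: true, ... })
} else {
  // Claude path: Claude Agent SDK query() with model and thinkingLevel baked into the options.
}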

View File

@@ -1,5 +1,8 @@
/**
* Generate app_spec.txt from project overview
*
* Model is configurable via phaseModels.specGenerationModel in settings
* (defaults to Opus for high-quality specification generation).
*/
import { query } from '@anthropic-ai/claude-agent-sdk';
@@ -13,10 +16,16 @@ import {
type SpecOutput,
} from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSpecGenerationOptions } from '../../lib/sdk-options.js';
import { extractJson } from '../../lib/json-extractor.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { logAuthStatus } from './common.js';
import { generateFeaturesFromSpec } from './generate-features-from-spec.js';
import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
const logger = createLogger('SpecRegeneration');
@@ -27,7 +36,8 @@ export async function generateSpec(
abortController: AbortController,
generateFeatures?: boolean,
analyzeProject?: boolean,
maxFeatures?: number
maxFeatures?: number,
settingsService?: SettingsService
): Promise<void> {
logger.info('========== generateSpec() started ==========');
logger.info('projectPath:', projectPath);
@@ -83,101 +93,190 @@ ${getStructuredSpecPromptInstruction()}`;
content: 'Starting spec generation...\n',
});
const options = createSpecGenerationOptions({
cwd: projectPath,
abortController,
outputFormat: {
type: 'json_schema',
schema: specOutputSchema,
},
});
// Load autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[SpecRegeneration]'
);
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.info('Calling Claude Agent SDK query()...');
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.specGenerationModel || DEFAULT_PHASE_MODELS.specGenerationModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
// Log auth status right before the SDK call
logAuthStatus('Right before SDK query()');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
logger.info('Using model:', model);
let responseText = '';
let messageCount = 0;
let structuredOutput: SpecOutput | null = null;
logger.info('Starting to iterate over stream...');
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info('[SpecGeneration] Using Cursor provider');
try {
for await (const msg of stream) {
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the JSON schema in the prompt with clear instructions
// to return JSON in the response (not write to a file)
const cursorPrompt = `${prompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. DO NOT create any files like "project_specification.json".
2. After analyzing the project, respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
3. The JSON must match this exact schema:
${JSON.stringify(specOutputSchema, null, 2)}
Your entire response should be valid JSON starting with { and ending with }. No text before or after.`;
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
readOnly: true, // Spec generation only reads code, we write the spec ourselves
})) {
messageCount++;
logger.info(
`Stream message #${messageCount}: type=${msg.type}, subtype=${(msg as any).subtype}`
);
if (msg.type === 'assistant') {
const msgAny = msg as any;
if (msgAny.message?.content) {
for (const block of msgAny.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.info('Received success result');
// Check for structured output - this is the reliable way to get spec data
const resultMsg = msg as any;
if (resultMsg.structured_output) {
structuredOutput = resultMsg.structured_output as SpecOutput;
logger.info('✅ Received structured output');
logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else {
logger.warn('⚠️ No structured output in result, will fall back to text parsing');
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
} else if (msg.type === 'result') {
// Handle error result types
const subtype = (msg as any).subtype;
logger.info(`Result message: subtype=${subtype}`);
if (subtype === 'error_max_turns') {
logger.error('❌ Hit max turns limit!');
} else if (subtype === 'error_max_structured_output_retries') {
logger.error('❌ Failed to produce valid structured output after retries');
throw new Error('Could not produce valid spec output');
}
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
} else if (msg.type === 'user') {
// Log user messages (tool results)
logger.info(`User message (tool result): ${JSON.stringify(msg).substring(0, 500)}`);
}
}
} catch (streamError) {
logger.error('❌ Error while iterating stream:');
logger.error('Stream error:', streamError);
throw streamError;
// Parse JSON from the response text using shared utility
if (responseText) {
structuredOutput = extractJson<SpecOutput>(responseText, { logger });
}
} else {
// Use Claude SDK for Claude models
logger.info('[SpecGeneration] Using Claude SDK');
const options = createSpecGenerationOptions({
cwd: projectPath,
abortController,
autoLoadClaudeMd,
model,
thinkingLevel, // Pass thinking level for extended thinking
outputFormat: {
type: 'json_schema',
schema: specOutputSchema,
},
});
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.info('Calling Claude Agent SDK query()...');
// Log auth status right before the SDK call
logAuthStatus('Right before SDK query()');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
}
logger.info('Starting to iterate over stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.info(
`Stream message #${messageCount}: type=${msg.type}, subtype=${(msg as any).subtype}`
);
if (msg.type === 'assistant') {
const msgAny = msg as any;
if (msgAny.message?.content) {
for (const block of msgAny.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.info('Received success result');
// Check for structured output - this is the reliable way to get spec data
const resultMsg = msg as any;
if (resultMsg.structured_output) {
structuredOutput = resultMsg.structured_output as SpecOutput;
logger.info('✅ Received structured output');
logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else {
logger.warn('⚠️ No structured output in result, will fall back to text parsing');
}
} else if (msg.type === 'result') {
// Handle error result types
const subtype = (msg as any).subtype;
logger.info(`Result message: subtype=${subtype}`);
if (subtype === 'error_max_turns') {
logger.error('❌ Hit max turns limit!');
} else if (subtype === 'error_max_structured_output_retries') {
logger.error('❌ Failed to produce valid structured output after retries');
throw new Error('Could not produce valid spec output');
}
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
} else if (msg.type === 'user') {
// Log user messages (tool results)
logger.info(`User message (tool result): ${JSON.stringify(msg).substring(0, 500)}`);
}
}
} catch (streamError) {
logger.error('❌ Error while iterating stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
}
logger.info(`Stream iteration complete. Total messages: ${messageCount}`);
@@ -269,7 +368,13 @@ ${getStructuredSpecPromptInstruction()}`;
// Create a new abort controller for feature generation
const featureAbortController = new AbortController();
try {
await generateFeaturesFromSpec(projectPath, events, featureAbortController, maxFeatures);
await generateFeaturesFromSpec(
projectPath,
events,
featureAbortController,
maxFeatures,
settingsService
);
// Final completion will be emitted by generateFeaturesFromSpec -> parseAndCreateFeatures
} catch (featureError) {
logger.error('Feature generation failed:', featureError);

View File

@@ -9,13 +9,17 @@ import { createGenerateHandler } from './routes/generate.js';
import { createGenerateFeaturesHandler } from './routes/generate-features.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import type { SettingsService } from '../../services/settings-service.js';
export function createSpecRegenerationRoutes(events: EventEmitter): Router {
export function createSpecRegenerationRoutes(
events: EventEmitter,
settingsService?: SettingsService
): Router {
const router = Router();
router.post('/create', createCreateHandler(events));
router.post('/generate', createGenerateHandler(events));
router.post('/generate-features', createGenerateFeaturesHandler(events));
router.post('/generate', createGenerateHandler(events, settingsService));
router.post('/generate-features', createGenerateFeaturesHandler(events, settingsService));
router.post('/stop', createStopHandler());
router.get('/status', createStatusHandler());

View File

@@ -7,6 +7,7 @@ import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { getFeaturesDir } from '@automaker/platform';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
const logger = createLogger('SpecRegeneration');
@@ -22,23 +23,30 @@ export async function parseAndCreateFeatures(
logger.info('========== END CONTENT ==========');
try {
// Extract JSON from response
logger.info('Extracting JSON from response...');
logger.info(`Looking for pattern: /{[\\s\\S]*"features"[\\s\\S]*}/`);
const jsonMatch = content.match(/\{[\s\S]*"features"[\s\S]*\}/);
if (!jsonMatch) {
logger.error('❌ No valid JSON found in response');
// Extract JSON from response using shared utility
logger.info('Extracting JSON from response using extractJsonWithArray...');
interface FeaturesResponse {
features: Array<{
id: string;
category?: string;
title: string;
description: string;
priority?: number;
complexity?: string;
dependencies?: string[];
}>;
}
const parsed = extractJsonWithArray<FeaturesResponse>(content, 'features', { logger });
if (!parsed || !parsed.features) {
logger.error('❌ No valid JSON with "features" array found in response');
logger.error('Full content received:');
logger.error(content);
throw new Error('No valid JSON found in response');
}
logger.info(`JSON match found (${jsonMatch[0].length} chars)`);
logger.info('========== MATCHED JSON ==========');
logger.info(jsonMatch[0]);
logger.info('========== END MATCHED JSON ==========');
const parsed = JSON.parse(jsonMatch[0]);
logger.info(`Parsed ${parsed.features?.length || 0} features`);
logger.info('Parsed features:', JSON.stringify(parsed.features, null, 2));
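For illustration, an input/output sketch of extractJsonWithArray as it is used here (behavior inferred from this call site: it returns the parsed object when a "features" array is present, otherwise null):

const sample = `Here is the plan:
{"features":[{"id":"feat-1","title":"Dark mode","description":"Add a dark theme","priority":2}]}
Done.`;

const ok = extractJsonWithArray<FeaturesResponse>(sample, 'features', { logger });
// ok?.features[0].title === 'Dark mode'

const missing = extractJsonWithArray<FeaturesResponse>('no JSON here', 'features', { logger });
// missing === null -> the code above logs the full content and throws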

View File

@@ -47,17 +47,17 @@ export function createCreateHandler(events: EventEmitter) {
return;
}
const { isRunning } = getSpecRegenerationStatus();
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation already running, rejecting request');
res.json({ success: false, error: 'Spec generation already running' });
logger.warn('Generation already running for project:', projectPath);
res.json({ success: false, error: 'Spec generation already running for this project' });
return;
}
logAuthStatus('Before starting generation');
const abortController = new AbortController();
setRunningState(true, abortController);
setRunningState(projectPath, true, abortController);
logger.info('Starting background generation task...');
// Start generation in background
@@ -80,7 +80,7 @@ export function createCreateHandler(events: EventEmitter) {
})
.finally(() => {
logger.info('Generation task finished (success or error)');
setRunningState(false, null);
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -13,10 +13,14 @@ import {
getErrorMessage,
} from '../common.js';
import { generateFeaturesFromSpec } from '../generate-features-from-spec.js';
import type { SettingsService } from '../../../services/settings-service.js';
const logger = createLogger('SpecRegeneration');
export function createGenerateFeaturesHandler(events: EventEmitter) {
export function createGenerateFeaturesHandler(
events: EventEmitter,
settingsService?: SettingsService
) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /generate-features endpoint called ==========');
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
@@ -36,20 +40,20 @@ export function createGenerateFeaturesHandler(events: EventEmitter) {
return;
}
const { isRunning } = getSpecRegenerationStatus();
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation already running, rejecting request');
res.json({ success: false, error: 'Generation already running' });
logger.warn('Generation already running for project:', projectPath);
res.json({ success: false, error: 'Generation already running for this project' });
return;
}
logAuthStatus('Before starting feature generation');
const abortController = new AbortController();
setRunningState(true, abortController);
setRunningState(projectPath, true, abortController);
logger.info('Starting background feature generation task...');
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures)
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures, settingsService)
.catch((error) => {
logError(error, 'Feature generation failed with error');
events.emit('spec-regeneration:event', {
@@ -59,7 +63,7 @@ export function createGenerateFeaturesHandler(events: EventEmitter) {
})
.finally(() => {
logger.info('Feature generation task finished (success or error)');
setRunningState(false, null);
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -13,10 +13,11 @@ import {
getErrorMessage,
} from '../common.js';
import { generateSpec } from '../generate-spec.js';
import type { SettingsService } from '../../../services/settings-service.js';
const logger = createLogger('SpecRegeneration');
export function createGenerateHandler(events: EventEmitter) {
export function createGenerateHandler(events: EventEmitter, settingsService?: SettingsService) {
return async (req: Request, res: Response): Promise<void> => {
logger.info('========== /generate endpoint called ==========');
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
@@ -47,17 +48,17 @@ export function createGenerateHandler(events: EventEmitter) {
return;
}
const { isRunning } = getSpecRegenerationStatus();
const { isRunning } = getSpecRegenerationStatus(projectPath);
if (isRunning) {
logger.warn('Generation already running, rejecting request');
res.json({ success: false, error: 'Spec generation already running' });
logger.warn('Generation already running for project:', projectPath);
res.json({ success: false, error: 'Spec generation already running for this project' });
return;
}
logAuthStatus('Before starting generation');
const abortController = new AbortController();
setRunningState(true, abortController);
setRunningState(projectPath, true, abortController);
logger.info('Starting background generation task...');
generateSpec(
@@ -67,7 +68,8 @@ export function createGenerateHandler(events: EventEmitter) {
abortController,
generateFeatures,
analyzeProject,
maxFeatures
maxFeatures,
settingsService
)
.catch((error) => {
logError(error, 'Generation failed with error');
@@ -79,7 +81,7 @@ export function createGenerateHandler(events: EventEmitter) {
})
.finally(() => {
logger.info('Generation task finished (success or error)');
setRunningState(false, null);
setRunningState(projectPath, false, null);
});
logger.info('Returning success response (generation running in background)');

View File

@@ -6,10 +6,11 @@ import type { Request, Response } from 'express';
import { getSpecRegenerationStatus, getErrorMessage } from '../common.js';
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
return async (req: Request, res: Response): Promise<void> => {
try {
const { isRunning } = getSpecRegenerationStatus();
res.json({ success: true, isRunning });
const projectPath = req.query.projectPath as string | undefined;
const { isRunning } = getSpecRegenerationStatus(projectPath);
res.json({ success: true, isRunning, projectPath });
} catch (error) {
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
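A hedged example of the per-project status query (the /api/spec-regeneration mount path is an assumption):

// Illustrative only - run inside an async function.
const status = await fetch(
  '/api/spec-regeneration/status?projectPath=' + encodeURIComponent('/repos/project-a')
).then((r) => r.json());
// -> { success: true, isRunning: false, projectPath: '/repos/project-a' }
// Omitting projectPath keeps the old "is anything running" behavior for legacy clients.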

View File

@@ -6,13 +6,16 @@ import type { Request, Response } from 'express';
import { getSpecRegenerationStatus, setRunningState, getErrorMessage } from '../common.js';
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
return async (req: Request, res: Response): Promise<void> => {
try {
const { currentAbortController } = getSpecRegenerationStatus();
const { projectPath } = req.body as { projectPath?: string };
const { currentAbortController } = getSpecRegenerationStatus(projectPath);
if (currentAbortController) {
currentAbortController.abort();
}
setRunningState(false, null);
if (projectPath) {
setRunningState(projectPath, false, null);
}
res.json({ success: true });
} catch (error) {
res.status(500).json({ success: false, error: getErrorMessage(error) });

View File

@@ -0,0 +1,248 @@
/**
* Auth routes - Login, logout, and status endpoints
*
* Security model:
* - Web mode: User enters API key (shown on server console) to get HTTP-only session cookie
* - Electron mode: Uses X-API-Key header (handled automatically via IPC)
*
* The session cookie is:
* - HTTP-only: JavaScript cannot read it (protects against XSS)
* - SameSite=Strict: Only sent for same-site requests (protects against CSRF)
*
* Mounted at /api/auth in the main server (BEFORE auth middleware).
*/
import { Router } from 'express';
import type { Request } from 'express';
import {
validateApiKey,
createSession,
invalidateSession,
getSessionCookieOptions,
getSessionCookieName,
isRequestAuthenticated,
createWsConnectionToken,
} from '../../lib/auth.js';
// Rate limiting configuration
const RATE_LIMIT_WINDOW_MS = 60 * 1000; // 1 minute window
const RATE_LIMIT_MAX_ATTEMPTS = 5; // Max 5 attempts per window
// Check if we're in test mode - disable rate limiting for E2E tests
const isTestMode = process.env.AUTOMAKER_MOCK_AGENT === 'true';
// In-memory rate limit tracking (resets on server restart)
const loginAttempts = new Map<string, { count: number; windowStart: number }>();
// Clean up old rate limit entries periodically (every 5 minutes)
setInterval(
() => {
const now = Date.now();
loginAttempts.forEach((data, ip) => {
if (now - data.windowStart > RATE_LIMIT_WINDOW_MS * 2) {
loginAttempts.delete(ip);
}
});
},
5 * 60 * 1000
);
/**
* Get client IP address from request
* Handles X-Forwarded-For header for reverse proxy setups
*/
function getClientIp(req: Request): string {
const forwarded = req.headers['x-forwarded-for'];
if (forwarded) {
// X-Forwarded-For can be a comma-separated list; take the first (original client)
const forwardedIp = Array.isArray(forwarded) ? forwarded[0] : forwarded.split(',')[0];
return forwardedIp.trim();
}
return req.ip || req.socket.remoteAddress || 'unknown';
}
/**
* Check if an IP is rate limited
* Returns { limited: boolean, retryAfter?: number }
*/
function checkRateLimit(ip: string): { limited: boolean; retryAfter?: number } {
const now = Date.now();
const attempt = loginAttempts.get(ip);
if (!attempt) {
return { limited: false };
}
// Check if window has expired
if (now - attempt.windowStart > RATE_LIMIT_WINDOW_MS) {
loginAttempts.delete(ip);
return { limited: false };
}
// Check if over limit
if (attempt.count >= RATE_LIMIT_MAX_ATTEMPTS) {
const retryAfter = Math.ceil((RATE_LIMIT_WINDOW_MS - (now - attempt.windowStart)) / 1000);
return { limited: true, retryAfter };
}
return { limited: false };
}
/**
* Record a login attempt for rate limiting
*/
function recordLoginAttempt(ip: string): void {
const now = Date.now();
const attempt = loginAttempts.get(ip);
if (!attempt || now - attempt.windowStart > RATE_LIMIT_WINDOW_MS) {
// Start new window
loginAttempts.set(ip, { count: 1, windowStart: now });
} else {
// Increment existing window
attempt.count++;
}
}
/**
* Create auth routes
*
* @returns Express Router with auth endpoints
*/
export function createAuthRoutes(): Router {
const router = Router();
/**
* GET /api/auth/status
*
* Returns whether the current request is authenticated.
* Used by the UI to determine if login is needed.
*/
router.get('/status', (req, res) => {
const authenticated = isRequestAuthenticated(req);
res.json({
success: true,
authenticated,
required: true,
});
});
/**
* POST /api/auth/login
*
* Validates the API key and sets a session cookie.
* Body: { apiKey: string }
*
* Rate limited to 5 attempts per minute per IP to prevent brute force attacks.
*/
router.post('/login', async (req, res) => {
const clientIp = getClientIp(req);
// Skip rate limiting in test mode to allow parallel E2E tests
if (!isTestMode) {
// Check rate limit before processing
const rateLimit = checkRateLimit(clientIp);
if (rateLimit.limited) {
res.status(429).json({
success: false,
error: 'Too many login attempts. Please try again later.',
retryAfter: rateLimit.retryAfter,
});
return;
}
}
const { apiKey } = req.body as { apiKey?: string };
if (!apiKey) {
res.status(400).json({
success: false,
error: 'API key is required.',
});
return;
}
// Record this attempt (only for actual API key validation attempts, skip in test mode)
if (!isTestMode) {
recordLoginAttempt(clientIp);
}
if (!validateApiKey(apiKey)) {
res.status(401).json({
success: false,
error: 'Invalid API key.',
});
return;
}
// Create session and set cookie
const sessionToken = await createSession();
const cookieOptions = getSessionCookieOptions();
const cookieName = getSessionCookieName();
res.cookie(cookieName, sessionToken, cookieOptions);
res.json({
success: true,
message: 'Logged in successfully.',
// Return token for explicit header-based auth (works around cross-origin cookie issues)
token: sessionToken,
});
});
/**
* GET /api/auth/token
*
* Generates a short-lived WebSocket connection token if the user has a valid session.
* This token is used for initial WebSocket handshake authentication and expires in 5 minutes.
* The token is NOT the session cookie value - it's a separate, short-lived token.
*/
router.get('/token', (req, res) => {
// Validate the session is still valid (via cookie, API key, or session token header)
if (!isRequestAuthenticated(req)) {
res.status(401).json({
success: false,
error: 'Authentication required.',
});
return;
}
// Generate a new short-lived WebSocket connection token
const wsToken = createWsConnectionToken();
res.json({
success: true,
token: wsToken,
expiresIn: 300, // 5 minutes in seconds
});
});
/**
* POST /api/auth/logout
*
* Clears the session cookie and invalidates the session.
*/
router.post('/logout', async (req, res) => {
const cookieName = getSessionCookieName();
const sessionToken = req.cookies?.[cookieName] as string | undefined;
if (sessionToken) {
await invalidateSession(sessionToken);
}
// Clear the cookie by setting it to empty with immediate expiration
// Using res.cookie() with maxAge: 0 is more reliable than clearCookie()
// in cross-origin development environments
res.cookie(cookieName, '', {
...getSessionCookieOptions(),
maxAge: 0,
expires: new Date(0),
});
res.json({
success: true,
message: 'Logged out successfully.',
});
});
return router;
}
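A sketch of the web-mode flow these routes describe (key value and fetch environment are placeholders; the /api/auth mount point is stated in the header comment above):

// Illustrative only - run inside an async function in the browser.
const login = await fetch('/api/auth/login', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include', // let the browser store the HTTP-only session cookie
  body: JSON.stringify({ apiKey: 'key-printed-on-server-console' }), // placeholder value
});
// 401 -> invalid key; 429 -> rate limited (max 5 attempts per minute per IP)

// With a valid session, request a short-lived WebSocket connection token (expires in 5 minutes).
const { token } = await fetch('/api/auth/token', { credentials: 'include' }).then((r) => r.json());

// Logout invalidates the session and clears the cookie.
await fetch('/api/auth/logout', { method: 'POST', credentials: 'include' });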

View File

@@ -17,6 +17,7 @@ import { createAnalyzeProjectHandler } from './routes/analyze-project.js';
import { createFollowUpFeatureHandler } from './routes/follow-up-feature.js';
import { createCommitFeatureHandler } from './routes/commit-feature.js';
import { createApprovePlanHandler } from './routes/approve-plan.js';
import { createResumeInterruptedHandler } from './routes/resume-interrupted.js';
export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
const router = Router();
@@ -63,6 +64,11 @@ export function createAutoModeRoutes(autoModeService: AutoModeService): Router {
validatePathParams('projectPath'),
createApprovePlanHandler(autoModeService)
);
router.post(
'/resume-interrupted',
validatePathParams('projectPath'),
createResumeInterruptedHandler(autoModeService)
);
return router;
}

View File

@@ -31,7 +31,9 @@ export function createFollowUpFeatureHandler(autoModeService: AutoModeService) {
// Start follow-up in background
// followUpFeature derives workDir from feature.branchName
autoModeService
.followUpFeature(projectPath, featureId, prompt, imagePaths, useWorktrees ?? true)
// Default to false to match run-feature/resume-feature behavior.
// Worktrees should only be used when explicitly enabled by the user.
.followUpFeature(projectPath, featureId, prompt, imagePaths, useWorktrees ?? false)
.catch((error) => {
logger.error(`[AutoMode] Follow up feature ${featureId} error:`, error);
})

View File

@@ -31,7 +31,7 @@ export function createResumeFeatureHandler(autoModeService: AutoModeService) {
autoModeService
.resumeFeature(projectPath, featureId, useWorktrees ?? false)
.catch((error) => {
logger.error(`[AutoMode] Resume feature ${featureId} error:`, error);
logger.error(`Resume feature ${featureId} error:`, error);
});
res.json({ success: true });

View File

@@ -0,0 +1,42 @@
/**
* Resume Interrupted Features Handler
*
* Checks for features that were interrupted (in pipeline steps or in_progress)
* when the server was restarted and resumes them.
*/
import type { Request, Response } from 'express';
import { createLogger } from '@automaker/utils';
import type { AutoModeService } from '../../../services/auto-mode-service.js';
const logger = createLogger('ResumeInterrupted');
interface ResumeInterruptedRequest {
projectPath: string;
}
export function createResumeInterruptedHandler(autoModeService: AutoModeService) {
return async (req: Request, res: Response): Promise<void> => {
const { projectPath } = req.body as ResumeInterruptedRequest;
if (!projectPath) {
res.status(400).json({ error: 'Project path is required' });
return;
}
logger.info(`Checking for interrupted features in ${projectPath}`);
try {
await autoModeService.resumeInterruptedFeatures(projectPath);
res.json({
success: true,
message: 'Resume check completed',
});
} catch (error) {
logger.error('Error resuming interrupted features:', error);
res.status(500).json({
error: error instanceof Error ? error.message : 'Unknown error',
});
}
};
}
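A hedged call example (the /api/auto-mode mount path is an assumption):

// Illustrative only - run inside an async function.
await fetch('/api/auto-mode/resume-interrupted', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ projectPath: '/repos/project-a' }),
});
// -> { success: true, message: 'Resume check completed' } after the resume check finishes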

View File

@@ -31,7 +31,7 @@ export function createRunFeatureHandler(autoModeService: AutoModeService) {
autoModeService
.executeFeature(projectPath, featureId, useWorktrees ?? false, false)
.catch((error) => {
logger.error(`[AutoMode] Feature ${featureId} error:`, error);
logger.error(`Feature ${featureId} error:`, error);
})
.finally(() => {
// Release the starting slot when execution completes (success or error)

View File

@@ -0,0 +1,39 @@
/**
* Common utilities for backlog plan routes
*/
import { createLogger } from '@automaker/utils';
const logger = createLogger('BacklogPlan');
// State for tracking running generation
let isRunning = false;
let currentAbortController: AbortController | null = null;
export function getBacklogPlanStatus(): { isRunning: boolean } {
return { isRunning };
}
export function setRunningState(running: boolean, abortController?: AbortController | null): void {
isRunning = running;
if (abortController !== undefined) {
currentAbortController = abortController;
}
}
export function getAbortController(): AbortController | null {
return currentAbortController;
}
export function getErrorMessage(error: unknown): string {
if (error instanceof Error) {
return error.message;
}
return String(error);
}
export function logError(error: unknown, context: string): void {
logger.error(`[BacklogPlan] ${context}:`, getErrorMessage(error));
}
export { logger };

View File

@@ -0,0 +1,222 @@
/**
* Generate backlog plan using Claude AI
*
* Model is configurable via phaseModels.backlogPlanningModel in settings
* (defaults to Sonnet). Can be overridden per-call via model parameter.
*/
import type { EventEmitter } from '../../lib/events.js';
import type { Feature, BacklogPlanResult, BacklogChange, DependencyUpdate } from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isCursorModel,
stripProviderPrefix,
type ThinkingLevel,
} from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { FeatureLoader } from '../../services/feature-loader.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
import { logger, setRunningState, getErrorMessage } from './common.js';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting, getPromptCustomization } from '../../lib/settings-helpers.js';
const featureLoader = new FeatureLoader();
/**
* Format features for the AI prompt
*/
function formatFeaturesForPrompt(features: Feature[]): string {
if (features.length === 0) {
return 'No features in backlog yet.';
}
return features
.map((f) => {
const deps = f.dependencies?.length ? `Dependencies: [${f.dependencies.join(', ')}]` : '';
const priority = f.priority !== undefined ? `Priority: ${f.priority}` : '';
return `- ID: ${f.id}
Title: ${f.title || 'Untitled'}
Description: ${f.description}
Category: ${f.category}
Status: ${f.status || 'backlog'}
${priority}
${deps}`.trim();
})
.join('\n\n');
}
/**
* Parse the AI response into a BacklogPlanResult
*/
function parsePlanResponse(response: string): BacklogPlanResult {
// Use shared JSON extraction utility for robust parsing
// extractJsonWithArray validates that 'changes' exists AND is an array
const parsed = extractJsonWithArray<BacklogPlanResult>(response, 'changes', {
logger,
});
if (parsed) {
return parsed;
}
// If parsing fails, log details and return an empty result
logger.warn('[BacklogPlan] Failed to parse AI response as JSON');
logger.warn('[BacklogPlan] Response text length:', response.length);
logger.warn('[BacklogPlan] Response preview:', response.slice(0, 500));
if (response.length === 0) {
logger.error('[BacklogPlan] Response text is EMPTY! No content was extracted from stream.');
}
return {
changes: [],
summary: 'Failed to parse AI response',
dependencyUpdates: [],
};
}
/**
* Generate a backlog modification plan based on user prompt
*/
export async function generateBacklogPlan(
projectPath: string,
prompt: string,
events: EventEmitter,
abortController: AbortController,
settingsService?: SettingsService,
model?: string
): Promise<BacklogPlanResult> {
try {
// Load current features
const features = await featureLoader.getAll(projectPath);
events.emit('backlog-plan:event', {
type: 'backlog_plan_progress',
content: `Loaded ${features.length} features from backlog`,
});
// Load prompts from settings
const prompts = await getPromptCustomization(settingsService, '[BacklogPlan]');
// Build the system prompt
const systemPrompt = prompts.backlogPlan.systemPrompt;
// Build the user prompt from template
const currentFeatures = formatFeaturesForPrompt(features);
const userPrompt = prompts.backlogPlan.userPromptTemplate
.replace('{{currentFeatures}}', currentFeatures)
.replace('{{userRequest}}', prompt);
events.emit('backlog-plan:event', {
type: 'backlog_plan_progress',
content: 'Generating plan with AI...',
});
// Get the model to use from settings or provided override
let effectiveModel = model;
let thinkingLevel: ThinkingLevel | undefined;
if (!effectiveModel) {
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.backlogPlanningModel || DEFAULT_PHASE_MODELS.backlogPlanningModel;
const resolved = resolvePhaseModel(phaseModelEntry);
effectiveModel = resolved.model;
thinkingLevel = resolved.thinkingLevel;
}
logger.info('[BacklogPlan] Using model:', effectiveModel);
const provider = ProviderFactory.getProviderForModel(effectiveModel);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(effectiveModel);
// Get autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
projectPath,
settingsService,
'[BacklogPlan]'
);
// For Cursor models, we need to combine prompts with explicit instructions
// because Cursor doesn't support systemPrompt separation like Claude SDK
let finalPrompt = userPrompt;
let finalSystemPrompt: string | undefined = systemPrompt;
if (isCursorModel(effectiveModel)) {
logger.info('[BacklogPlan] Using Cursor model - adding explicit no-file-write instructions');
finalPrompt = `${systemPrompt}
CRITICAL INSTRUCTIONS:
1. DO NOT write any files. Return the JSON in your response only.
2. DO NOT use Write, Edit, or any file modification tools.
3. Respond with ONLY a JSON object - no explanations, no markdown, just raw JSON.
4. Your entire response should be valid JSON starting with { and ending with }.
5. No text before or after the JSON object.
${userPrompt}`;
finalSystemPrompt = undefined; // System prompt is now embedded in the user prompt
}
// Execute the query
const stream = provider.executeQuery({
prompt: finalPrompt,
model: bareModel,
cwd: projectPath,
systemPrompt: finalSystemPrompt,
maxTurns: 1,
allowedTools: [], // No tools needed for this
abortController,
settingSources: autoLoadClaudeMd ? ['user', 'project'] : undefined,
readOnly: true, // Plan generation only generates text, doesn't write files
thinkingLevel, // Pass thinking level for extended thinking
});
let responseText = '';
for await (const msg of stream) {
if (abortController.signal.aborted) {
throw new Error('Generation aborted');
}
if (msg.type === 'assistant') {
if (msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
}
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message (from Cursor provider)
logger.info('[BacklogPlan] Received result from Cursor, length:', msg.result.length);
logger.info('[BacklogPlan] Previous responseText length:', responseText.length);
if (msg.result.length > responseText.length) {
logger.info('[BacklogPlan] Using Cursor result (longer than accumulated text)');
responseText = msg.result;
} else {
logger.info('[BacklogPlan] Keeping accumulated text (longer than Cursor result)');
}
}
}
// Parse the response
const result = parsePlanResponse(responseText);
events.emit('backlog-plan:event', {
type: 'backlog_plan_complete',
result,
});
return result;
} catch (error) {
const errorMessage = getErrorMessage(error);
logger.error('[BacklogPlan] Generation failed:', errorMessage);
events.emit('backlog-plan:event', {
type: 'backlog_plan_error',
error: errorMessage,
});
throw error;
} finally {
setRunningState(false, null);
}
}

View File

@@ -0,0 +1,30 @@
/**
* Backlog Plan routes - HTTP API for AI-assisted backlog modification
*/
import { Router } from 'express';
import type { EventEmitter } from '../../lib/events.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createGenerateHandler } from './routes/generate.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import { createApplyHandler } from './routes/apply.js';
import type { SettingsService } from '../../services/settings-service.js';
export function createBacklogPlanRoutes(
events: EventEmitter,
settingsService?: SettingsService
): Router {
const router = Router();
router.post(
'/generate',
validatePathParams('projectPath'),
createGenerateHandler(events, settingsService)
);
router.post('/stop', createStopHandler());
router.get('/status', createStatusHandler());
router.post('/apply', validatePathParams('projectPath'), createApplyHandler());
return router;
}
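A hedged end-to-end sketch of the generate -> status -> apply flow (the /api/backlog-plan mount path is an assumption; the finished plan itself arrives via the backlog_plan_complete event):

// Illustrative only - run inside an async function.
declare const receivedPlan: unknown; // plan captured from the backlog_plan_complete event

await fetch('/api/backlog-plan/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/repos/project-a',
    prompt: 'Split the auth epic into smaller features',
  }),
});

// Poll until generation finishes.
const { isRunning } = await fetch('/api/backlog-plan/status').then((r) => r.json());

// Apply a plan received from the complete event (shape shown with the apply handler below).
await fetch('/api/backlog-plan/apply', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ projectPath: '/repos/project-a', plan: receivedPlan }),
});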

View File

@@ -0,0 +1,147 @@
/**
* POST /apply endpoint - Apply a backlog plan
*/
import type { Request, Response } from 'express';
import type { BacklogPlanResult, BacklogChange, Feature } from '@automaker/types';
import { FeatureLoader } from '../../../services/feature-loader.js';
import { getErrorMessage, logError, logger } from '../common.js';
const featureLoader = new FeatureLoader();
export function createApplyHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, plan } = req.body as {
projectPath: string;
plan: BacklogPlanResult;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
if (!plan || !plan.changes) {
res.status(400).json({ success: false, error: 'plan with changes required' });
return;
}
const appliedChanges: string[] = [];
// Load current features for dependency validation
const allFeatures = await featureLoader.getAll(projectPath);
const featureMap = new Map(allFeatures.map((f) => [f.id, f]));
// Process changes in order: deletes first, then adds, then updates
// This ensures we can remove dependencies before they cause issues
// 1. First pass: Handle deletes
const deletions = plan.changes.filter((c) => c.type === 'delete');
for (const change of deletions) {
if (!change.featureId) continue;
try {
// Before deleting, update any features that depend on this one
for (const feature of allFeatures) {
if (feature.dependencies?.includes(change.featureId)) {
const newDeps = feature.dependencies.filter((d) => d !== change.featureId);
await featureLoader.update(projectPath, feature.id, { dependencies: newDeps });
logger.info(
`[BacklogPlan] Removed dependency ${change.featureId} from ${feature.id}`
);
}
}
// Now delete the feature
const deleted = await featureLoader.delete(projectPath, change.featureId);
if (deleted) {
appliedChanges.push(`deleted:${change.featureId}`);
featureMap.delete(change.featureId);
logger.info(`[BacklogPlan] Deleted feature ${change.featureId}`);
}
} catch (error) {
logger.error(
`[BacklogPlan] Failed to delete ${change.featureId}:`,
getErrorMessage(error)
);
}
}
// 2. Second pass: Handle adds
const additions = plan.changes.filter((c) => c.type === 'add');
for (const change of additions) {
if (!change.feature) continue;
try {
// Create the new feature
const newFeature = await featureLoader.create(projectPath, {
title: change.feature.title,
description: change.feature.description || '',
category: change.feature.category || 'Uncategorized',
dependencies: change.feature.dependencies,
priority: change.feature.priority,
status: 'backlog',
});
appliedChanges.push(`added:${newFeature.id}`);
featureMap.set(newFeature.id, newFeature);
logger.info(`[BacklogPlan] Created feature ${newFeature.id}: ${newFeature.title}`);
} catch (error) {
logger.error(`[BacklogPlan] Failed to add feature:`, getErrorMessage(error));
}
}
// 3. Third pass: Handle updates
const updates = plan.changes.filter((c) => c.type === 'update');
for (const change of updates) {
if (!change.featureId || !change.feature) continue;
try {
const updated = await featureLoader.update(projectPath, change.featureId, change.feature);
appliedChanges.push(`updated:${change.featureId}`);
featureMap.set(change.featureId, updated);
logger.info(`[BacklogPlan] Updated feature ${change.featureId}`);
} catch (error) {
logger.error(
`[BacklogPlan] Failed to update ${change.featureId}:`,
getErrorMessage(error)
);
}
}
// 4. Apply dependency updates from the plan
if (plan.dependencyUpdates) {
for (const depUpdate of plan.dependencyUpdates) {
try {
const feature = featureMap.get(depUpdate.featureId);
if (feature) {
const currentDeps = feature.dependencies || [];
const newDeps = currentDeps
.filter((d) => !depUpdate.removedDependencies.includes(d))
.concat(depUpdate.addedDependencies.filter((d) => !currentDeps.includes(d)));
await featureLoader.update(projectPath, depUpdate.featureId, {
dependencies: newDeps,
});
logger.info(`[BacklogPlan] Updated dependencies for ${depUpdate.featureId}`);
}
} catch (error) {
logger.error(
`[BacklogPlan] Failed to update dependencies for ${depUpdate.featureId}:`,
getErrorMessage(error)
);
}
}
}
res.json({
success: true,
appliedChanges,
});
} catch (error) {
logError(error, 'Apply backlog plan failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
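
To make the expected payload concrete, here is a hypothetical body for POST /apply built only from the fields the handler reads (change.type, change.featureId, change.feature, dependencyUpdates); the feature IDs and the '/api/backlog-plan' prefix are invented for illustration.

// Hypothetical plan payload; IDs are made up.
const plan = {
  changes: [
    { type: 'delete', featureId: 'feat-legacy-export' },
    {
      type: 'add',
      feature: {
        title: 'Bulk delete in backlog',
        description: 'Allow removing several features at once',
        category: 'Backlog',
        dependencies: [],
      },
    },
    { type: 'update', featureId: 'feat-board-view', feature: { category: 'UI' } },
  ],
  dependencyUpdates: [
    {
      featureId: 'feat-board-view',
      addedDependencies: ['feat-onboarding'],
      removedDependencies: ['feat-legacy-export'],
    },
  ],
};

async function applyPlan(projectPath: string): Promise<string[]> {
  const res = await fetch('/api/backlog-plan/apply', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, plan }),
  });
  const body = await res.json();
  if (!body.success) throw new Error(body.error);
  return body.appliedChanges; // e.g. ['deleted:feat-legacy-export', 'added:<new-id>', ...]
}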

View File

@@ -0,0 +1,62 @@
/**
* POST /generate endpoint - Generate a backlog plan
*/
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
import { getBacklogPlanStatus, setRunningState, getErrorMessage, logError } from '../common.js';
import { generateBacklogPlan } from '../generate-plan.js';
import type { SettingsService } from '../../../services/settings-service.js';
export function createGenerateHandler(events: EventEmitter, settingsService?: SettingsService) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, prompt, model } = req.body as {
projectPath: string;
prompt: string;
model?: string;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath required' });
return;
}
if (!prompt) {
res.status(400).json({ success: false, error: 'prompt required' });
return;
}
const { isRunning } = getBacklogPlanStatus();
if (isRunning) {
res.json({
success: false,
error: 'Backlog plan generation is already running',
});
return;
}
const abortController = new AbortController();
setRunningState(true, abortController);
// Start generation in background
generateBacklogPlan(projectPath, prompt, events, abortController, settingsService, model)
.catch((error) => {
logError(error, 'Generate backlog plan failed (background)');
events.emit('backlog-plan:event', {
type: 'backlog_plan_error',
error: getErrorMessage(error),
});
})
.finally(() => {
setRunningState(false, null);
});
res.json({ success: true });
} catch (error) {
logError(error, 'Generate backlog plan failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
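
Since generation is fired into the background and the route returns immediately, a client is expected to watch /status (and listen for 'backlog-plan:event' emissions) rather than await the result. A sketch of that call pattern follows, with the '/api/backlog-plan' prefix and the 2-second poll interval as assumptions.

async function generateAndWait(projectPath: string, prompt: string): Promise<void> {
  const start = await fetch('/api/backlog-plan/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, prompt }),
  });
  const started = await start.json();
  if (!started.success) {
    // Covers both validation failures and the 'already running' case.
    throw new Error(started.error ?? 'generation did not start');
  }

  // Poll until the running flag clears; the plan itself arrives via events.
  for (;;) {
    const status = await (await fetch('/api/backlog-plan/status')).json();
    if (!status.isRunning) break;
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}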

View File

@@ -0,0 +1,18 @@
/**
* GET /status endpoint - Get backlog plan generation status
*/
import type { Request, Response } from 'express';
import { getBacklogPlanStatus, getErrorMessage, logError } from '../common.js';
export function createStatusHandler() {
return async (_req: Request, res: Response): Promise<void> => {
try {
const status = getBacklogPlanStatus();
res.json({ success: true, ...status });
} catch (error) {
logError(error, 'Get backlog plan status failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,22 @@
/**
* POST /stop endpoint - Stop the current backlog plan generation
*/
import type { Request, Response } from 'express';
import { getAbortController, setRunningState, getErrorMessage, logError } from '../common.js';
export function createStopHandler() {
return async (_req: Request, res: Response): Promise<void> => {
try {
const abortController = getAbortController();
if (abortController) {
abortController.abort();
setRunningState(false, null);
}
res.json({ success: true });
} catch (error) {
logError(error, 'Stop backlog plan failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
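
Stopping only aborts the shared AbortController, so cancellation depends on the generator honoring the signal. The following is a generic sketch of that pattern (not the actual generateBacklogPlan implementation), checking signal.aborted between units of work.

async function runCancellableWork(signal: AbortSignal): Promise<void> {
  const steps = ['load features', 'draft changes', 'rank priorities']; // placeholder steps
  for (const step of steps) {
    if (signal.aborted) {
      // This is the path the /stop endpoint triggers.
      throw new Error(`Aborted before "${step}"`);
    }
    // Simulated unit of work so the sketch runs standalone.
    await new Promise((resolve) => setTimeout(resolve, 100));
    console.log(`finished: ${step}`);
  }
}

// Usage: const controller = new AbortController(); runCancellableWork(controller.signal); controller.abort();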

View File

@@ -1,5 +1,8 @@
import { Router, Request, Response } from 'express';
import { ClaudeUsageService } from '../../services/claude-usage-service.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Claude');
export function createClaudeRoutes(service: ClaudeUsageService): Router {
const router = Router();
@@ -10,7 +13,10 @@ export function createClaudeRoutes(service: ClaudeUsageService): Router {
// Check if Claude CLI is available first
const isAvailable = await service.isAvailable();
if (!isAvailable) {
res.status(503).json({
// IMPORTANT: This endpoint is behind Automaker session auth already.
// Use a 200 + error payload for Claude CLI issues so the UI doesn't
// interpret it as an invalid Automaker session (401/403 triggers logout).
res.status(200).json({
error: 'Claude CLI not found',
message: "Please install Claude Code CLI and run 'claude login' to authenticate",
});
@@ -23,17 +29,18 @@ export function createClaudeRoutes(service: ClaudeUsageService): Router {
const message = error instanceof Error ? error.message : 'Unknown error';
if (message.includes('Authentication required') || message.includes('token_expired')) {
res.status(401).json({
// Do NOT use 401/403 here: that status code is reserved for Automaker session auth.
res.status(200).json({
error: 'Authentication required',
message: "Please run 'claude login' to authenticate",
});
} else if (message.includes('timed out')) {
res.status(504).json({
res.status(200).json({
error: 'Command timed out',
message: 'The Claude CLI took too long to respond',
});
} else {
console.error('Error fetching usage:', error);
logger.error('Error fetching usage:', error);
res.status(500).json({ error: message });
}
}
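
The comments above define a contract: 401/403 from this API always mean the Automaker session itself is invalid, while Claude CLI problems come back as 200 with an error field. A client-side sketch of honoring that contract; the mount path and the two UI helpers are hypothetical.

declare function handleSessionExpired(): void; // hypothetical UI helper
declare function showToast(message: string): void; // hypothetical UI helper

async function fetchClaudeUsage(): Promise<unknown | null> {
  const res = await fetch('/api/claude/usage'); // mount path assumed
  if (res.status === 401 || res.status === 403) {
    // Reserved for Automaker session auth: force a re-login.
    handleSessionExpired();
    return null;
  }
  const body = await res.json();
  if (body.error) {
    // CLI-level problem (not installed, not logged in, timed out): surface it
    // without logging the user out of Automaker.
    showToast(`${body.error}: ${body.message}`);
    return null;
  }
  return body;
}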

View File

@@ -0,0 +1,90 @@
import { Router, Request, Response } from 'express';
import { CodexUsageService } from '../../services/codex-usage-service.js';
import { CodexModelCacheService } from '../../services/codex-model-cache-service.js';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Codex');
export function createCodexRoutes(
usageService: CodexUsageService,
modelCacheService: CodexModelCacheService
): Router {
const router = Router();
// Get current usage (attempts to fetch from Codex CLI)
router.get('/usage', async (_req: Request, res: Response) => {
try {
// Check if Codex CLI is available first
const isAvailable = await usageService.isAvailable();
if (!isAvailable) {
// IMPORTANT: This endpoint is behind Automaker session auth already.
// Use a 200 + error payload for Codex CLI issues so the UI doesn't
// interpret it as an invalid Automaker session (401/403 triggers logout).
res.status(200).json({
error: 'Codex CLI not found',
message: "Please install Codex CLI and run 'codex login' to authenticate",
});
return;
}
const usage = await usageService.fetchUsageData();
res.json(usage);
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error';
if (message.includes('not authenticated') || message.includes('login')) {
// Do NOT use 401/403 here: that status code is reserved for Automaker session auth.
res.status(200).json({
error: 'Authentication required',
message: "Please run 'codex login' to authenticate",
});
} else if (message.includes('not available') || message.includes('does not provide')) {
// This is the expected case - Codex doesn't provide usage stats
res.status(200).json({
error: 'Usage statistics not available',
message: message,
});
} else if (message.includes('timed out')) {
res.status(200).json({
error: 'Command timed out',
message: 'The Codex CLI took too long to respond',
});
} else {
logger.error('Error fetching usage:', error);
res.status(500).json({ error: message });
}
}
});
// Get available Codex models (cached)
router.get('/models', async (req: Request, res: Response) => {
try {
const forceRefresh = req.query.refresh === 'true';
const { models, cachedAt } = await modelCacheService.getModelsWithMetadata(forceRefresh);
if (models.length === 0) {
res.status(503).json({
success: false,
error: 'Codex CLI not available or not authenticated',
message: "Please install Codex CLI and run 'codex login' to authenticate",
});
return;
}
res.json({
success: true,
models,
cachedAt,
});
} catch (error) {
logger.error('Error fetching models:', error);
const message = error instanceof Error ? error.message : 'Unknown error';
res.status(500).json({
success: false,
error: message,
});
}
});
return router;
}
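
A small sketch of consuming the cached /models endpoint, including the refresh=true escape hatch; the '/api/codex' prefix is assumed, and since the element type of models is not shown in this diff it is left as unknown.

async function loadCodexModels(forceRefresh = false): Promise<unknown[]> {
  const res = await fetch(`/api/codex/models${forceRefresh ? '?refresh=true' : ''}`);
  const body = await res.json();
  if (!body.success) {
    // 503 when the Codex CLI is missing or not authenticated.
    throw new Error(body.message ?? body.error);
  }
  console.log(`Codex models cached at ${body.cachedAt}`);
  return body.models;
}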

View File

@@ -8,17 +8,19 @@
import { Router } from 'express';
import { createDescribeImageHandler } from './routes/describe-image.js';
import { createDescribeFileHandler } from './routes/describe-file.js';
import type { SettingsService } from '../../services/settings-service.js';
/**
* Create the context router
*
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
* @returns Express router with context endpoints
*/
export function createContextRoutes(): Router {
export function createContextRoutes(settingsService?: SettingsService): Router {
const router = Router();
router.post('/describe-image', createDescribeImageHandler());
router.post('/describe-file', createDescribeFileHandler());
router.post('/describe-image', createDescribeImageHandler(settingsService));
router.post('/describe-file', createDescribeFileHandler(settingsService));
return router;
}

View File

@@ -1,8 +1,9 @@
/**
* POST /context/describe-file endpoint - Generate description for a text file
*
* Uses Claude Haiku to analyze a text file and generate a concise description
* suitable for context file metadata.
* Uses AI to analyze a text file and generate a concise description
* suitable for context file metadata. Model is configurable via
* phaseModels.fileDescriptionModel in settings (defaults to Haiku).
*
* SECURITY: This endpoint validates file paths against ALLOWED_ROOT_DIRECTORY
* and reads file content directly (not via Claude's Read tool) to prevent
@@ -12,11 +13,15 @@
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { CLAUDE_MODEL_MAP } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { PathNotAllowedError } from '@automaker/platform';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
const logger = createLogger('DescribeFile');
@@ -72,9 +77,12 @@ async function extractTextFromStream(
/**
* Create the describe-file request handler
*
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
* @returns Express request handler for file description
*/
export function createDescribeFileHandler(): (req: Request, res: Response) => Promise<void> {
export function createDescribeFileHandler(
settingsService?: SettingsService
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
try {
const { filePath } = req.body as DescribeFileRequestBody;
@@ -89,7 +97,7 @@ export function createDescribeFileHandler(): (req: Request, res: Response) => Pr
return;
}
logger.info(`[DescribeFile] Starting description generation for: ${filePath}`);
logger.info(`Starting description generation for: ${filePath}`);
// Resolve the path for logging and cwd derivation
const resolvedPath = secureFs.resolvePath(filePath);
@@ -104,7 +112,7 @@ export function createDescribeFileHandler(): (req: Request, res: Response) => Pr
} catch (readError) {
// Path not allowed - return 403 Forbidden
if (readError instanceof PathNotAllowedError) {
logger.warn(`[DescribeFile] Path not allowed: ${filePath}`);
logger.warn(`Path not allowed: ${filePath}`);
const response: DescribeFileErrorResponse = {
success: false,
error: 'File path is not within the allowed directory',
@@ -120,7 +128,7 @@ export function createDescribeFileHandler(): (req: Request, res: Response) => Pr
'code' in readError &&
readError.code === 'ENOENT'
) {
logger.warn(`[DescribeFile] File not found: ${resolvedPath}`);
logger.warn(`File not found: ${resolvedPath}`);
const response: DescribeFileErrorResponse = {
success: false,
error: `File not found: ${filePath}`,
@@ -130,7 +138,7 @@ export function createDescribeFileHandler(): (req: Request, res: Response) => Pr
}
const errorMessage = readError instanceof Error ? readError.message : 'Unknown error';
logger.error(`[DescribeFile] Failed to read file: ${errorMessage}`);
logger.error(`Failed to read file: ${errorMessage}`);
const response: DescribeFileErrorResponse = {
success: false,
error: `Failed to read file: ${errorMessage}`,
@@ -165,29 +173,84 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
// Use the file's directory as the working directory
const cwd = path.dirname(resolvedPath);
// Use centralized SDK options with proper cwd validation
// No tools needed since we're passing file content directly
const sdkOptions = createCustomOptions({
cwd,
model: CLAUDE_MODEL_MAP.haiku,
maxTurns: 1,
allowedTools: [],
sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
});
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
// Extract the description from the response
const description = await extractTextFromStream(stream);
// Load autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
cwd,
settingsService,
'[DescribeFile]'
);
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
logger.info(`Raw phaseModels from settings:`, JSON.stringify(settings?.phaseModels, null, 2));
const phaseModelEntry =
settings?.phaseModels?.fileDescriptionModel || DEFAULT_PHASE_MODELS.fileDescriptionModel;
logger.info(`fileDescriptionModel entry:`, JSON.stringify(phaseModelEntry));
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info(`Resolved model: ${model}, thinkingLevel: ${thinkingLevel}`);
let description: string;
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
logger.info(`Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build a simple text prompt for Cursor (no multi-part content blocks)
const cursorPrompt = `${instructionText}\n\n--- FILE CONTENT ---\n${contentToAnalyze}`;
let responseText = '';
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: [],
readOnly: true, // File description only reads, doesn't write
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
}
description = responseText;
} else {
// Use Claude SDK for Claude models
logger.info(`Using Claude SDK for model: ${model}`);
// Use centralized SDK options with proper cwd validation
// No tools needed since we're passing file content directly
const sdkOptions = createCustomOptions({
cwd,
model,
maxTurns: 1,
allowedTools: [],
autoLoadClaudeMd,
thinkingLevel, // Pass thinking level for extended thinking
});
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
// Extract the description from the response
description = await extractTextFromStream(stream);
}
if (!description || description.trim().length === 0) {
logger.warn('Received empty response from Claude');
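
For callers, the request surface is small: POST a filePath and read back either an error payload or the generated description. A hedged sketch, assuming an '/api/context' prefix and a description field on success (the success shape is not visible in this diff).

async function describeFile(filePath: string): Promise<string> {
  const res = await fetch('/api/context/describe-file', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filePath }),
  });
  const body = await res.json();
  if (!body.success) {
    // Covers paths outside ALLOWED_ROOT_DIRECTORY, missing files, and read failures.
    throw new Error(body.error);
  }
  return body.description; // field name assumed
}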

View File

@@ -1,8 +1,9 @@
/**
* POST /context/describe-image endpoint - Generate description for an image
*
* Uses Claude Haiku to analyze an image and generate a concise description
* suitable for context file metadata.
* Uses AI to analyze an image and generate a concise description
* suitable for context file metadata. Model is configurable via
* phaseModels.imageDescriptionModel in settings (defaults to Haiku).
*
* IMPORTANT:
* The agent runner (chat/auto-mode) sends images as multi-part content blocks (base64 image blocks),
@@ -13,10 +14,14 @@
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger, readImageAsBase64 } from '@automaker/utils';
import { CLAUDE_MODEL_MAP } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import * as fs from 'fs';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
const logger = createLogger('DescribeImage');
@@ -55,13 +60,13 @@ function filterSafeHeaders(headers: Record<string, unknown>): Record<string, unk
*/
function findActualFilePath(requestedPath: string): string | null {
// First, try the exact path
if (fs.existsSync(requestedPath)) {
if (secureFs.existsSync(requestedPath)) {
return requestedPath;
}
// Try with Unicode normalization
const normalizedPath = requestedPath.normalize('NFC');
if (fs.existsSync(normalizedPath)) {
if (secureFs.existsSync(normalizedPath)) {
return normalizedPath;
}
@@ -70,12 +75,12 @@ function findActualFilePath(requestedPath: string): string | null {
const dir = path.dirname(requestedPath);
const baseName = path.basename(requestedPath);
if (!fs.existsSync(dir)) {
if (!secureFs.existsSync(dir)) {
return null;
}
try {
const files = fs.readdirSync(dir);
const files = secureFs.readdirSync(dir);
// Normalize the requested basename for comparison
// Replace various space-like characters with regular space for comparison
@@ -226,9 +231,12 @@ async function extractTextFromStream(
* Uses Claude SDK query with multi-part content blocks to include the image (base64),
* matching the agent runner behavior.
*
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
* @returns Express request handler for image description
*/
export function createDescribeImageHandler(): (req: Request, res: Response) => Promise<void> {
export function createDescribeImageHandler(
settingsService?: SettingsService
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
const requestId = `describe-image-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`;
const startedAt = Date.now();
@@ -276,9 +284,9 @@ export function createDescribeImageHandler(): (req: Request, res: Response) => P
}
// Log path + stats (this is often where issues start: missing file, perms, size)
let stat: fs.Stats | null = null;
let stat: ReturnType<typeof secureFs.statSync> | null = null;
try {
stat = fs.statSync(actualPath);
stat = secureFs.statSync(actualPath);
logger.info(
`[${requestId}] fileStats size=${stat.size} bytes mtime=${stat.mtime.toISOString()}`
);
@@ -325,39 +333,97 @@ export function createDescribeImageHandler(): (req: Request, res: Response) => P
const cwd = path.dirname(actualPath);
logger.info(`[${requestId}] Using cwd=${cwd}`);
// Use the same centralized option builder used across the server (validates cwd)
const sdkOptions = createCustomOptions({
cwd,
model: CLAUDE_MODEL_MAP.haiku,
maxTurns: 1,
allowedTools: [],
sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
});
logger.info(
`[${requestId}] SDK options model=${sdkOptions.model} maxTurns=${sdkOptions.maxTurns} allowedTools=${JSON.stringify(
sdkOptions.allowedTools
)} sandbox=${JSON.stringify(sdkOptions.sandbox)}`
);
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
logger.info(`[${requestId}] Calling query()...`);
const queryStart = Date.now();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
logger.info(`[${requestId}] query() returned stream in ${Date.now() - queryStart}ms`);
// Extract the description from the response
const extractStart = Date.now();
const description = await extractTextFromStream(stream, requestId);
logger.info(`[${requestId}] extractMs=${Date.now() - extractStart}`);
// Load autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
cwd,
settingsService,
'[DescribeImage]'
);
// Get model from phase settings
const settings = await settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.imageDescriptionModel || DEFAULT_PHASE_MODELS.imageDescriptionModel;
const { model, thinkingLevel } = resolvePhaseModel(phaseModelEntry);
logger.info(`[${requestId}] Using model: ${model}`);
let description: string;
// Route to appropriate provider based on model type
if (isCursorModel(model)) {
// Use Cursor provider for Cursor models
// Note: Cursor may have limited support for image content blocks
logger.info(`[${requestId}] Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build prompt with image reference for Cursor
// Note: Cursor CLI may not support base64 image blocks directly,
// so we include the image path as context
const cursorPrompt = `${instructionText}\n\nImage file: ${actualPath}\nMIME type: ${imageData.mimeType}`;
let responseText = '';
const queryStart = Date.now();
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: ['Read'], // Allow Read tool so Cursor can read the image if needed
readOnly: true, // Image description only reads, doesn't write
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
}
logger.info(`[${requestId}] Cursor query completed in ${Date.now() - queryStart}ms`);
description = responseText;
} else {
// Use Claude SDK for Claude models (supports image content blocks)
logger.info(`[${requestId}] Using Claude SDK for model: ${model}`);
// Use the same centralized option builder used across the server (validates cwd)
const sdkOptions = createCustomOptions({
cwd,
model,
maxTurns: 1,
allowedTools: [],
autoLoadClaudeMd,
thinkingLevel, // Pass thinking level for extended thinking
});
logger.info(
`[${requestId}] SDK options model=${sdkOptions.model} maxTurns=${sdkOptions.maxTurns} allowedTools=${JSON.stringify(
sdkOptions.allowedTools
)}`
);
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
logger.info(`[${requestId}] Calling query()...`);
const queryStart = Date.now();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
logger.info(`[${requestId}] query() returned stream in ${Date.now() - queryStart}ms`);
// Extract the description from the response
const extractStart = Date.now();
description = await extractTextFromStream(stream, requestId);
logger.info(`[${requestId}] extractMs=${Date.now() - extractStart}`);
}
if (!description || description.trim().length === 0) {
logger.warn(`[${requestId}] Received empty response from Claude`);

View File

@@ -6,17 +6,19 @@
*/
import { Router } from 'express';
import type { SettingsService } from '../../services/settings-service.js';
import { createEnhanceHandler } from './routes/enhance.js';
/**
* Create the enhance-prompt router
*
* @param settingsService - Settings service for loading custom prompts
* @returns Express router with enhance-prompt endpoints
*/
export function createEnhancePromptRoutes(): Router {
export function createEnhancePromptRoutes(settingsService?: SettingsService): Router {
const router = Router();
router.post('/', createEnhanceHandler());
router.post('/', createEnhanceHandler(settingsService));
return router;
}

View File

@@ -1,7 +1,7 @@
/**
* POST /enhance-prompt endpoint - Enhance user input text
*
* Uses Claude AI to enhance text based on the specified enhancement mode.
* Uses Claude AI or Cursor to enhance text based on the specified enhancement mode.
* Supports modes: improve, technical, simplify, acceptance
*/
@@ -9,9 +9,17 @@ import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { resolveModelString } from '@automaker/model-resolver';
import { CLAUDE_MODEL_MAP } from '@automaker/types';
import {
getSystemPrompt,
CLAUDE_MODEL_MAP,
isCursorModel,
stripProviderPrefix,
ThinkingLevel,
getThinkingTokenBudget,
} from '@automaker/types';
import { ProviderFactory } from '../../../providers/provider-factory.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getPromptCustomization } from '../../../lib/settings-helpers.js';
import {
buildUserPrompt,
isValidEnhancementMode,
type EnhancementMode,
@@ -29,6 +37,8 @@ interface EnhanceRequestBody {
enhancementMode: string;
/** Optional model override */
model?: string;
/** Optional thinking level for Claude models (ignored for Cursor models) */
thinkingLevel?: ThinkingLevel;
}
/**
@@ -80,15 +90,56 @@ async function extractTextFromStream(
return responseText;
}
/**
* Execute enhancement using Cursor provider
*
* @param prompt - The enhancement prompt
* @param model - The Cursor model to use
* @returns The enhanced text
*/
async function executeWithCursor(prompt: string, model: string): Promise<string> {
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
let responseText = '';
for await (const msg of provider.executeQuery({
prompt,
model: bareModel,
cwd: process.cwd(), // Enhancement doesn't need a specific working directory
readOnly: true, // Prompt enhancement only generates text, doesn't write files
})) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success' && msg.result) {
// Use result if it's a final accumulated message
if (msg.result.length > responseText.length) {
responseText = msg.result;
}
}
}
return responseText;
}
/**
* Create the enhance request handler
*
* @param settingsService - Optional settings service for loading custom prompts
* @returns Express request handler for text enhancement
*/
export function createEnhanceHandler(): (req: Request, res: Response) => Promise<void> {
export function createEnhanceHandler(
settingsService?: SettingsService
): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => {
try {
const { originalText, enhancementMode, model } = req.body as EnhanceRequestBody;
const { originalText, enhancementMode, model, thinkingLevel } =
req.body as EnhanceRequestBody;
// Validate required fields
if (!originalText || typeof originalText !== 'string') {
@@ -128,8 +179,20 @@ export function createEnhanceHandler(): (req: Request, res: Response) => Promise
logger.info(`Enhancing text with mode: ${validMode}, length: ${trimmedText.length} chars`);
// Get the system prompt for this mode
const systemPrompt = getSystemPrompt(validMode);
// Load enhancement prompts from settings (merges custom + defaults)
const prompts = await getPromptCustomization(settingsService, '[EnhancePrompt]');
// Get the system prompt for this mode from merged prompts
const systemPromptMap: Record<EnhancementMode, string> = {
improve: prompts.enhancement.improveSystemPrompt,
technical: prompts.enhancement.technicalSystemPrompt,
simplify: prompts.enhancement.simplifySystemPrompt,
acceptance: prompts.enhancement.acceptanceSystemPrompt,
'ux-reviewer': prompts.enhancement.uxReviewerSystemPrompt,
};
const systemPrompt = systemPromptMap[validMode];
logger.debug(`Using ${validMode} system prompt (length: ${systemPrompt.length} chars)`);
// Build the user prompt with few-shot examples
// This helps the model understand this is text transformation, not a coding task
@@ -140,24 +203,43 @@ export function createEnhanceHandler(): (req: Request, res: Response) => Promise
logger.debug(`Using model: ${resolvedModel}`);
// Call Claude SDK with minimal configuration for text transformation
// Key: no tools, just text completion
const stream = query({
prompt: userPrompt,
options: {
let enhancedText: string;
// Route to appropriate provider based on model
if (isCursorModel(resolvedModel)) {
// Use Cursor provider for Cursor models
logger.info(`Using Cursor provider for model: ${resolvedModel}`);
// Cursor doesn't have a separate system prompt concept, so combine them
const combinedPrompt = `${systemPrompt}\n\n${userPrompt}`;
enhancedText = await executeWithCursor(combinedPrompt, resolvedModel);
} else {
// Use Claude SDK for Claude models
logger.info(`Using Claude provider for model: ${resolvedModel}`);
// Convert thinkingLevel to maxThinkingTokens for SDK
const maxThinkingTokens = getThinkingTokenBudget(thinkingLevel);
const queryOptions: Parameters<typeof query>[0]['options'] = {
model: resolvedModel,
systemPrompt,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
},
});
};
if (maxThinkingTokens) {
queryOptions.maxThinkingTokens = maxThinkingTokens;
}
// Extract the enhanced text from the response
const enhancedText = await extractTextFromStream(stream);
const stream = query({
prompt: userPrompt,
options: queryOptions,
});
enhancedText = await extractTextFromStream(stream);
}
if (!enhancedText || enhancedText.trim().length === 0) {
logger.warn('Received empty response from Claude');
logger.warn('Received empty response from AI');
const response: EnhanceErrorResponse = {
success: false,
error: 'Failed to generate enhanced text - empty response',
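
A request sketch for the enhanced endpoint, limited to the fields declared in EnhanceRequestBody; the route prefix and the name of the success field are assumptions, and concrete model / thinkingLevel values are left out because they are defined elsewhere in the codebase.

async function enhancePrompt(originalText: string): Promise<string> {
  const res = await fetch('/api/enhance-prompt', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      originalText,
      // One of the modes wired up above: improve | technical | simplify | acceptance | ux-reviewer.
      enhancementMode: 'technical',
      // Optional: a model override and a thinkingLevel (Claude only) can also be sent.
    }),
  });
  const body = await res.json();
  if (!body.success) throw new Error(body.error);
  return body.enhancedText; // field name assumed
}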

View File

@@ -9,8 +9,10 @@ import { createListHandler } from './routes/list.js';
import { createGetHandler } from './routes/get.js';
import { createCreateHandler } from './routes/create.js';
import { createUpdateHandler } from './routes/update.js';
import { createBulkUpdateHandler } from './routes/bulk-update.js';
import { createBulkDeleteHandler } from './routes/bulk-delete.js';
import { createDeleteHandler } from './routes/delete.js';
import { createAgentOutputHandler } from './routes/agent-output.js';
import { createAgentOutputHandler, createRawOutputHandler } from './routes/agent-output.js';
import { createGenerateTitleHandler } from './routes/generate-title.js';
export function createFeaturesRoutes(featureLoader: FeatureLoader): Router {
@@ -20,8 +22,19 @@ export function createFeaturesRoutes(featureLoader: FeatureLoader): Router {
router.post('/get', validatePathParams('projectPath'), createGetHandler(featureLoader));
router.post('/create', validatePathParams('projectPath'), createCreateHandler(featureLoader));
router.post('/update', validatePathParams('projectPath'), createUpdateHandler(featureLoader));
router.post(
'/bulk-update',
validatePathParams('projectPath'),
createBulkUpdateHandler(featureLoader)
);
router.post(
'/bulk-delete',
validatePathParams('projectPath'),
createBulkDeleteHandler(featureLoader)
);
router.post('/delete', validatePathParams('projectPath'), createDeleteHandler(featureLoader));
router.post('/agent-output', createAgentOutputHandler(featureLoader));
router.post('/raw-output', createRawOutputHandler(featureLoader));
router.post('/generate-title', createGenerateTitleHandler());
return router;

View File

@@ -1,5 +1,6 @@
/**
* POST /agent-output endpoint - Get agent output for a feature
* POST /raw-output endpoint - Get raw JSONL output for debugging
*/
import type { Request, Response } from 'express';
@@ -30,3 +31,31 @@ export function createAgentOutputHandler(featureLoader: FeatureLoader) {
}
};
}
/**
* Handler for getting raw JSONL output for debugging
*/
export function createRawOutputHandler(featureLoader: FeatureLoader) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, featureId } = req.body as {
projectPath: string;
featureId: string;
};
if (!projectPath || !featureId) {
res.status(400).json({
success: false,
error: 'projectPath and featureId are required',
});
return;
}
const content = await featureLoader.getRawOutput(projectPath, featureId);
res.json({ success: true, content });
} catch (error) {
logError(error, 'Get raw output failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}

View File

@@ -0,0 +1,61 @@
/**
* POST /bulk-delete endpoint - Delete multiple features at once
*/
import type { Request, Response } from 'express';
import { FeatureLoader } from '../../../services/feature-loader.js';
import { getErrorMessage, logError } from '../common.js';
interface BulkDeleteRequest {
projectPath: string;
featureIds: string[];
}
interface BulkDeleteResult {
featureId: string;
success: boolean;
error?: string;
}
export function createBulkDeleteHandler(featureLoader: FeatureLoader) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, featureIds } = req.body as BulkDeleteRequest;
if (!projectPath || !featureIds || !Array.isArray(featureIds) || featureIds.length === 0) {
res.status(400).json({
success: false,
error: 'projectPath and featureIds (non-empty array) are required',
});
return;
}
const results = await Promise.all(
featureIds.map(async (featureId) => {
const success = await featureLoader.delete(projectPath, featureId);
if (success) {
return { featureId, success: true };
}
return {
featureId,
success: false,
error: 'Deletion failed. Check server logs for details.',
};
})
);
const successCount = results.reduce((count, r) => count + (r.success ? 1 : 0), 0);
const failureCount = results.length - successCount;
res.json({
success: failureCount === 0,
deletedCount: successCount,
failedCount: failureCount,
results,
});
} catch (error) {
logError(error, 'Bulk delete features failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
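
A sketch of driving the new endpoint from the UI side and reporting partial failures, based on the response assembled above (deletedCount, failedCount, results); the '/api/features' prefix is an assumption.

async function bulkDeleteFeatures(projectPath: string, featureIds: string[]): Promise<void> {
  const res = await fetch('/api/features/bulk-delete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, featureIds }),
  });
  const body = await res.json();
  if (body.failedCount > 0) {
    const failedIds = body.results
      .filter((r: { success: boolean; featureId: string }) => !r.success)
      .map((r: { featureId: string }) => r.featureId);
    console.warn(`Deleted ${body.deletedCount} features, failed: ${failedIds.join(', ')}`);
  } else {
    console.log(`Deleted ${body.deletedCount} features`);
  }
}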

Some files were not shown because too many files have changed in this diff.