Address three issues identified in code review:
1. Fix missing thinkingLevel parameter (Critical)
- Added thinkingLevel parameter to the enhance API call
- Updated electron.ts type definition to match http-api-client
- Fixes functional regression in Claude model enhancement
2. Refactor dropdown menu to use constants dynamically
- Changed hardcoded DropdownMenuItem components to dynamic generation
- Now iterates over the ENHANCEMENT_MODE_LABELS object (see the sketch after this list)
- Ensures automatic sync when new modes are added
- Eliminates manual UI updates for new enhancement modes
3. Optimize array reversal performance
- Added useMemo hook to memoize reversed history array
- Prevents creating new array on every render
- Improves performance with lengthy histories
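For illustration, a minimal sketch of items 2 and 3; the import paths and component names are assumptions, not taken from the codebase:

```tsx
import { useMemo } from 'react';
// Assumed paths; ENHANCEMENT_MODE_LABELS maps mode keys to display labels.
import { DropdownMenuItem } from '@/components/ui/dropdown-menu';
import { ENHANCEMENT_MODE_LABELS } from '@/components/shared/enhancement/constants';

// 2. Dropdown items generated from the constants object, so newly added
//    enhancement modes show up without touching this component.
function EnhancementModeItems({ onSelect }: { onSelect: (mode: string) => void }) {
  return (
    <>
      {Object.entries(ENHANCEMENT_MODE_LABELS).map(([mode, label]) => (
        <DropdownMenuItem key={mode} onClick={() => onSelect(mode)}>
          {label}
        </DropdownMenuItem>
      ))}
    </>
  );
}

// 3. Reversed history memoized so a new array is only created when the
//    history actually changes, not on every render.
function useReversedHistory<T>(history: T[]): T[] {
  return useMemo(() => [...history].reverse(), [history]);
}
```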
All TypeScript errors resolved. Build verified.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Extract all "Enhance with AI" functionality into reusable shared components
following DRY principles and clean code guidelines.
Changes:
- Create shared/enhancement/ folder for related functionality
- Extract EnhanceWithAI component (AI enhancement with model override)
- Extract EnhancementHistoryButton component (version history UI)
- Extract enhancement constants and types
- Refactor add-feature-dialog.tsx to use shared components
- Refactor edit-feature-dialog.tsx to use shared components
- Refactor follow-up-dialog.tsx to use shared components
- Add history tracking to add-feature-dialog for consistency
Benefits:
- Eliminated ~527 lines of duplicated code
- Single source of truth for enhancement logic
- Consistent UX across all dialogs
- Easier maintenance and extensibility
- Better code organization
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Refactored the bulk delete handler to utilize Promise.all for concurrent deletion of features, improving performance and error handling.
- Updated the BoardView component to handle deletion results more effectively, providing user feedback for both successful and failed deletions.
- Enhanced local state management to avoid redundant API calls during feature deletion.
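Roughly, the concurrent-delete shape looks like this; deleteFeature and setFeatures are assumed names, and the per-item catch keeps one failure from aborting the rest:

```ts
// Sketch of the BoardView flow; deleteFeature and setFeatures are assumed names.
async function handleBulkDelete(
  ids: string[],
  deleteFeature: (id: string) => Promise<void>,
  setFeatures: (update: (prev: { id: string }[]) => { id: string }[]) => void
) {
  // Delete concurrently; per-item catch so one failure doesn't reject the whole batch.
  const results = await Promise.all(
    ids.map((id) =>
      deleteFeature(id)
        .then(() => ({ id, ok: true }))
        .catch(() => ({ id, ok: false }))
    )
  );

  // Prune successfully deleted features from local state instead of refetching the board.
  const deleted = new Set(results.filter((r) => r.ok).map((r) => r.id));
  setFeatures((prev) => prev.filter((f) => !deleted.has(f.id)));

  // Surfaced to the user as success/failure feedback.
  return { deletedCount: deleted.size, failedIds: results.filter((r) => !r.ok).map((r) => r.id) };
}
```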
- Introduced a new endpoint `/bulk-delete` to allow deletion of multiple features at once.
- Implemented `createBulkDeleteHandler` to process bulk delete requests and handle success/failure responses.
- Updated the UI to include a bulk delete option in the BoardView component, with confirmation dialog for user actions.
- Enhanced the HTTP API client to support bulk delete requests.
- Improved the selection action bar to trigger bulk delete functionality and provide user feedback.
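On the server side, a sketch of the handler, assuming an Express-style route and a featureService dependency (both assumptions):

```ts
import type { Request, Response } from 'express';

// Assumed dependency: a service exposing deleteFeature(projectPath, id).
export function createBulkDeleteHandler(featureService: {
  deleteFeature: (projectPath: string, id: string) => Promise<void>;
}) {
  return async (req: Request, res: Response) => {
    const { projectPath, featureIds } = req.body as { projectPath: string; featureIds: string[] };

    const results = await Promise.all(
      featureIds.map((id) =>
        featureService
          .deleteFeature(projectPath, id)
          .then(() => ({ id, deleted: true }))
          .catch(() => ({ id, deleted: false }))
      )
    );

    // Client reports which deletions succeeded or failed.
    res.json({ results });
  };
}
```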
- Add official MiniMax logo SVG from LobeHub icons library
- Add official Z.ai logo SVG for GLM models from LobeHub icons library
- Add BigPickle icon with custom green color (#4ADE80)
- Fix icon detection logic to properly handle amazon-bedrock/ and opencode/ prefixes
- Update phase-model-selector and opencode-model-configuration to use
getProviderIconForModel() for consistent icon display across the app
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Introduced a new "Accept All" button in the IdeationHeader component, allowing users to accept all filtered suggestions at once.
- Implemented state management for accept all readiness and count in the IdeationView component.
- Enhanced the IdeationDashboard to notify the parent component about the readiness of the accept all feature.
- Added logic to handle the acceptance of all suggestions, including success and failure notifications.
- Updated UI components to reflect the new functionality and improve user experience.
- Refactored spec regeneration status tracking to support multiple projects using a Map for running states and abort controllers.
- Updated `getSpecRegenerationStatus` to accept a project path, allowing retrieval of status specific to a project.
- Modified `setRunningState` to manage running states and abort controllers per project.
- Adjusted related route handlers to utilize project-specific status checks and updates.
- Introduced a new Graph View page and integrated it into the routing structure.
- Enhanced UI components to reflect the current project’s spec generation state.
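A condensed sketch of the per-project bookkeeping described above (module-level Maps shown for brevity):

```ts
// Keyed by project path so concurrent projects don't clobber each other's state.
const runningStates = new Map<string, boolean>();
const abortControllers = new Map<string, AbortController>();

export function getSpecRegenerationStatus(projectPath: string) {
  return { running: runningStates.get(projectPath) ?? false };
}

export function setRunningState(projectPath: string, running: boolean) {
  runningStates.set(projectPath, running);
  if (running) {
    abortControllers.set(projectPath, new AbortController());
  } else {
    abortControllers.get(projectPath)?.abort();
    abortControllers.delete(projectPath);
  }
}
```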
- Introduced a new endpoint `/resume-interrupted` to handle resuming features that were interrupted during server restarts.
- Implemented the `createResumeInterruptedHandler` to check for and resume interrupted features based on the project path (sketched below)
- Enhanced the `AutoModeService` to track and manage the execution state of features, ensuring they can be resumed correctly.
- Updated relevant types and prompts to include the new 'ux-reviewer' enhancement mode for better user experience handling.
- Added new templates for UX review and other enhancement modes to improve task descriptions from a user experience perspective.
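A hedged sketch of the resume flow; findInterruptedFeatures and resumeFeature are assumed helper names, not confirmed AutoModeService methods:

```ts
import type { Request, Response } from 'express';

export function createResumeInterruptedHandler(autoMode: {
  findInterruptedFeatures: (projectPath: string) => Promise<string[]>;
  resumeFeature: (projectPath: string, featureId: string) => Promise<void>;
}) {
  return async (req: Request, res: Response) => {
    const { projectPath } = req.body as { projectPath: string };

    // Features that were mid-execution when the server restarted.
    const interrupted = await autoMode.findInterruptedFeatures(projectPath);
    for (const featureId of interrupted) {
      await autoMode.resumeFeature(projectPath, featureId);
    }

    res.json({ resumed: interrupted });
  };
}
```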
- Introduced a new MemoryView component for viewing and editing AI memory files.
- Updated navigation hooks and keyboard shortcuts to include memory functionality.
- Added memory file creation, deletion, and renaming capabilities.
- Enhanced the sidebar navigation to support memory as a new section.
- Implemented loading and saving of memory files with a markdown editor.
- Integrated dialogs for creating, deleting, and renaming memory files.
- Add try-catch around pty.spawn() to prevent crashes when PTY unavailable
- Add unhandledRejection/uncaughtException handlers for graceful degradation
- Add checkBackendHealth/waitForBackendHealth utilities for tests
- Add data/.api-key and data/credentials.json to .gitignore
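The PTY hardening looks roughly like this; node-pty's spawn signature is real, while the fallback behavior is illustrative:

```ts
import * as pty from 'node-pty';

export function spawnShell(shell: string, cwd: string): pty.IPty | null {
  try {
    return pty.spawn(shell, [], {
      name: 'xterm-color',
      cols: 80,
      rows: 30,
      cwd,
      env: process.env as { [key: string]: string },
    });
  } catch (error) {
    // PTY can be unavailable (e.g. missing native bindings); degrade instead of crashing.
    console.error('[terminal] pty.spawn failed:', error);
    return null;
  }
}

// Global guards so an unexpected rejection/exception logs instead of taking the process down.
process.on('unhandledRejection', (reason) => console.error('[process] unhandled rejection:', reason));
process.on('uncaughtException', (error) => console.error('[process] uncaught exception:', error));
```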
This recovers the commits that were accidentally overwritten during a force push.
Included:
- fa8ae149 feat: enhance worktree listing by scanning external directories
- Any other commits from upstream/v0.10.0rc at that point
The issue was that ALL OpenCode models were showing the OpenCode icon, regardless
of their actual underlying provider. This fix ensures each model shows its
authentic brand icon.
Changes:
1. **model-constants.ts** - Fixed provider field assignment
- Changed provider from hardcoded 'opencode' to actual config.provider
- Now correctly maps: opencode/big-pickle, amazon-bedrock/anthropic.*, etc.
2. **phase-model-selector.tsx** - Added provider-specific icon logic
- Added imports for DeepSeekIcon, NovaIcon, QwenIcon, MistralIcon, MetaIcon
- Added ProviderIcon selector based on model.provider field
- Each model type now displays its correct provider icon
3. **provider-icon.tsx** - Updated icon detection and mapping
- Enhanced getUnderlyingModelIcon() to detect specific Bedrock providers (sketched at the end of this message):
* amazon-bedrock/anthropic.* → anthropic icon
* amazon-bedrock/deepseek.* → deepseek icon
* amazon-bedrock/nova.* → nova icon
* amazon-bedrock/meta.* or llama → meta icon
* amazon-bedrock/mistral.* → mistral icon
* amazon-bedrock/qwen.* → qwen icon
* opencode/* → opencode icon
- Added meta and mistral to PROVIDER_ICON_KEYS
- Added placeholder definitions for meta/mistral in PROVIDER_ICON_DEFINITIONS
- Updated iconMap to include all provider icons
- Set OpenCode icon to official brand color (#6366F1 indigo)
Result: All model selectors and kanban cards now show correct brand icons
for each OpenCode model (DeepSeek whale, Amazon Nova sparkle, Qwen star, etc.)
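A trimmed sketch of the detection order, matching the mappings listed above (the real function handles more providers than shown here):

```ts
type ProviderIconKey =
  | 'anthropic' | 'deepseek' | 'nova' | 'meta' | 'mistral' | 'qwen' | 'opencode';

function getUnderlyingModelIcon(modelId: string): ProviderIconKey {
  if (modelId.startsWith('amazon-bedrock/')) {
    const rest = modelId.slice('amazon-bedrock/'.length);
    if (rest.startsWith('anthropic.')) return 'anthropic';
    if (rest.startsWith('deepseek.')) return 'deepseek';
    if (rest.startsWith('nova.')) return 'nova';
    if (rest.startsWith('meta.') || rest.includes('llama')) return 'meta';
    if (rest.startsWith('mistral.')) return 'mistral';
    if (rest.startsWith('qwen.')) return 'qwen';
  }
  // opencode/* and anything unrecognized keep the OpenCode icon in this sketch.
  return 'opencode';
}
```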
- Implemented a new function to scan the .worktrees directory for worktrees that may exist outside of git's management, allowing for better detection of externally created or corrupted worktrees.
- Updated the /list endpoint to include discovered worktrees in the response, improving the accuracy of the worktree listing.
- Added logging for discovered worktrees to aid in debugging and tracking.
- Cleaned up and organized imports in the list.ts file for better maintainability.
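The scan amounts to a directory listing filtered against what git already reports; the function and parameter names are assumptions:

```ts
import { readdir } from 'node:fs/promises';
import type { Dirent } from 'node:fs';
import { join } from 'node:path';

// Scan <repo>/.worktrees for directories git no longer reports,
// i.e. externally created or corrupted worktrees.
export async function scanExternalWorktrees(
  repoPath: string,
  gitManagedPaths: Set<string>
): Promise<string[]> {
  const worktreesDir = join(repoPath, '.worktrees');

  let entries: Dirent[];
  try {
    entries = await readdir(worktreesDir, { withFileTypes: true });
  } catch {
    return []; // no .worktrees directory, nothing to discover
  }

  return entries
    .filter((entry) => entry.isDirectory())
    .map((entry) => join(worktreesDir, entry.name))
    .filter((path) => !gitManagedPaths.has(path));
}
```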
The OpenCode icon now uses the official indigo brand color (#6366F1)
from opencode.ai instead of white, making it visible in both light
and dark themes.
Fix TS2322 error where finishEvent.part?.result (typed as {}) was being
assigned to result.result (typed as string).
Solution: Safely handle arbitrary result payloads by:
1. Reading raw value as unknown from Record<string, unknown>
2. Checking if it's a string, otherwise JSON.stringify()
This ensures type safety while supporting both string and object results
from the OpenCode CLI.
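The fix reduces to a narrow-then-stringify helper (normalizeStepResult is an illustrative name; the `?? null` guard is a sketch-level addition to keep the output a string):

```ts
// part is typed as Record<string, unknown>, so read the value as unknown first.
function normalizeStepResult(part: Record<string, unknown> | undefined): string {
  const raw: unknown = part?.result;
  // Pass strings through; serialize anything else. The ?? null keeps
  // JSON.stringify from returning undefined when the field is absent.
  return typeof raw === 'string' ? raw : JSON.stringify(raw ?? null);
}
```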
Fix all 8 remaining test failures:
1. Update executeQuery integration tests to use the new OpenCode event format (sketched at the end of this message):
- text events use type='text' with part.text
- tool_call events use type='tool_call' with part containing call_id, name, args
- tool_result events use type='tool_result' with part
- step_finish events use type='step_finish' with part
- Use sessionID field instead of session_id
2. Fix step_finish event handling:
- Include result field in successful completion response
- Check for reason === 'error' to detect failed steps
- Provide default error message when error field is missing
3. Update model test expectations:
- Model 'opencode/big-pickle' stays as-is (not stripped to 'big-pickle')
- PROVIDER_PREFIXES only strips 'opencode-' prefix, not 'opencode/'
All 84 tests now pass successfully!
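For reference, the event shapes the updated tests assume look roughly like this; the example field values are illustrative:

```ts
// Shapes the updated tests assume for normalized OpenCode stream events.
type OpenCodeEvent =
  | { type: 'text'; sessionID: string; part: { text: string } }
  | { type: 'tool_call'; sessionID: string; part: { call_id: string; name: string; args: unknown } }
  | { type: 'tool_result'; sessionID: string; part: Record<string, unknown> }
  | { type: 'step_finish'; sessionID: string; part: { reason?: string; result?: unknown; error?: string } };

// Example of a successful step_finish as a test might mock it:
const finishEvent: OpenCodeEvent = {
  type: 'step_finish',
  sessionID: 'session-1',
  part: { reason: 'stop', result: 'done' },
};
```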
Update normalizeEvent tests to match new OpenCode API:
- text events use type='text' with part.text instead of text-delta
- tool_call events use type='tool_call' with part containing call_id, name, args
- tool_result events use type='tool_result' with part
- tool_error events use type='tool_error' with part
- step_finish events use type='step_finish' with part
Update buildCliArgs tests:
- Remove expectations for -q flag (no longer used)
- Remove expectations for -c flag (cwd set at subprocess level)
- Remove expectation of the trailing '-' argument (prompt is now passed via stdin)
- Update format to 'json' instead of 'stream-json'
Remaining 8 test failures are in integration tests that use executeQuery
and require more extensive mock data updates.
This commit integrates OpenCode as a new AI provider and updates all provider
icons with their official brand colors for better visual recognition.
**OpenCode Provider Integration:**
- Add OpencodeProvider class with CLI-based execution
- Support for OpenCode native models (opencode/) and Bedrock models
- Proper event normalization for OpenCode streaming format
- Correct CLI arguments: --format json (not stream-json)
- Event structure: type, part.text, sessionID fields
**Provider Icons:**
- Add official OpenCode icon (white square frame from opencode.ai)
- Add DeepSeek icon (blue whale #4D6BFE)
- Add Qwen icon (purple gradient #6336E7 → #6F69F7)
- Add Amazon Nova icon (AWS orange #FF9900)
- Add Mistral icon (rainbow gradient gold→red)
- Add Meta icon (blue #1877F2)
- Update existing icons with brand colors:
* Claude: #d97757 (terra cotta)
* OpenAI/Codex: #74aa9c (teal-green)
* Cursor: #5E9EFF (bright blue)
**Settings UI Updates:**
- Update settings navigation to show OpenCode icon
- Update model configuration to use provider-specific icons
- Differentiate between OpenCode free models and Bedrock-hosted models
- All AI models now display their official brand logos
**Model Resolution:**
- Add isOpencodeModel() function to detect OpenCode models
- Support patterns: opencode/, opencode-*, amazon-bedrock/*
- Update getProviderFromModel to recognize opencode provider
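A compact sketch of the detection, matching the three patterns listed above:

```ts
// Matches the patterns listed above: opencode/, opencode-*, amazon-bedrock/*.
export function isOpencodeModel(modelId: string): boolean {
  return (
    modelId.startsWith('opencode/') ||
    modelId.startsWith('opencode-') ||
    modelId.startsWith('amazon-bedrock/')
  );
}
```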
Note: Some unit tests in opencode-provider.test.ts need updating to match
the new event structure and CLI argument format.
- Integrated Tooltip components into AddFeatureDialog, EditFeatureDialog, and MassEditDialog to provide user guidance on planning mode availability.
- Updated the rendering logic for planning mode selection to conditionally display tooltips when planning modes are not supported.
- Improved user experience by clarifying the conditions under which planning modes can be utilized.
- Eliminated kanbanCardDetailLevel from the SettingsService, app state, and various UI components including BoardView and BoardControls.
- Updated related hooks and API client to reflect the removal of kanbanCardDetailLevel.
- Cleaned up imports and props associated with kanbanCardDetailLevel across the codebase for improved clarity and maintainability.
- Removed the add feature shortcut from BoardHeader and integrated the add feature functionality directly into the KanbanBoard and BoardView components.
- Added a floating action button for adding features in the KanbanBoard's backlog column.
- Updated KanbanColumn to support a footer action for enhanced UI consistency.
- Cleaned up unused imports and props related to feature addition.
- Added a new DashboardView component for improved project management.
- Updated sidebar navigation to redirect to the dashboard instead of the home page.
- Removed ProjectActions from the sidebar for a cleaner interface.
- Enhanced BoardView to conditionally render the WorktreePanel based on visibility settings.
- Introduced worktree panel visibility management per project in the app store.
- Updated project settings to include worktree panel visibility and favorite status.
- Adjusted navigation logic to ensure users are directed to the appropriate view based on project state.
- Add error logging to CodexProvider auth check instead of silent failure
- Fix cachedAt timestamp to return actual cache time instead of request time
- Replace misleading hardcoded rate limit values (100) with sentinel value (-1)
- Fix unused parameter warning in codex routes
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Eliminated CodexCreditsSnapshot interface and related logic from CodexUsageService and UI components.
- Updated CodexUsageSection to display only plan type, removing credits information for a cleaner interface.
- Streamlined Codex usage formatting functions by removing unused credit formatting logic.
These changes simplify the Codex usage management by focusing on plan types, enhancing clarity and maintainability.
- Bumped version numbers for @automaker/server and @automaker/ui to 0.9.0 in package-lock.json.
- Introduced CodexAppServerService and CodexModelCacheService to manage communication with the Codex CLI's app-server and cache model data.
- Updated CodexUsageService to utilize app-server for fetching usage data.
- Enhanced Codex routes to support fetching available models and integrated model caching.
- Improved UI components to dynamically load and display Codex models, including error handling and loading states.
- Added new API methods for fetching Codex models and integrated them into the app store for state management.
These changes improve the overall functionality and user experience of the Codex integration, ensuring efficient model management and data retrieval.
- Deleted the AI profile management feature, including all associated views, hooks, and types.
- Updated settings and navigation components to remove references to AI profiles.
- Adjusted local storage and settings synchronization logic to reflect the removal of AI profiles.
- Cleaned up tests and utility functions that were dependent on the AI profile feature.
These changes streamline the application by eliminating unused functionality, improving maintainability and reducing complexity.
- Implemented a new method to retrieve usage data from the Codex app-server, providing real-time data and improving reliability.
- Updated the fetchUsageData method to prioritize app-server data over fallback methods.
- Added detailed logging for better traceability and debugging.
- Removed unused methods related to OpenAI API usage and Codex CLI requests, streamlining the service.
These changes enhance the functionality and robustness of the CodexUsageService, ensuring accurate usage statistics retrieval.
- Refactored the global navigation structure to group settings items into distinct categories for improved organization and usability.
- Updated the settings navigation component to render these groups dynamically, enhancing the user experience.
- Changed the default initial view in the settings hook to 'model-defaults' for better alignment with the new navigation structure.
These changes streamline navigation and make it easier for users to find relevant settings.
- Implemented a collapsible dropdown for navigation items in the settings view, allowing users to expand or collapse sub-items.
- Added local storage functionality to remember the open/closed state of the dropdown across sessions.
- Updated the styling and interaction logic for improved user experience and accessibility.
These changes improve the organization of navigation items and enhance user interaction within the settings view.
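The persistence is a small localStorage-backed toggle; the hook name and storage key are assumptions:

```ts
import { useEffect, useState } from 'react';

// Remember whether a settings nav group is expanded across sessions.
export function usePersistedOpenState(groupId: string, defaultOpen = true) {
  const storageKey = `settings-nav-open:${groupId}`;

  const [open, setOpen] = useState<boolean>(() => {
    const saved = localStorage.getItem(storageKey);
    return saved === null ? defaultOpen : saved === 'true';
  });

  useEffect(() => {
    localStorage.setItem(storageKey, String(open));
  }, [open, storageKey]);

  return [open, setOpen] as const;
}
```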
- Add typeof checks for fallback claim values to prevent runtime errors
- Make openaiAuth parsing more robust with proper type validation
- Add isNaN check for date parsing to handle invalid dates
- Refactor fetchFromAuthFile to reuse getPlanTypeFromAuthFile (DRY)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix hardcoded 'plus' planType that was returned as default
- Read plan type from correct JWT path: https://api.openai.com/auth.chatgpt_plan_type
- Add subscription expiry check - override to 'free' if expired
- Use getCodexAuthPath() from @automaker/platform instead of manual path
- Remove unused imports (os, fs, path) and class properties
- Clean up code and add minimal essential logging
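A sketch of the claim handling; the expiry claim name under the JWT path is an assumption, and decoding the id token is assumed to happen elsewhere:

```ts
interface OpenAiAuthClaim {
  chatgpt_plan_type?: unknown;
  chatgpt_subscription_active_until?: unknown; // claim name assumed for illustration
}

interface CodexIdTokenClaims {
  'https://api.openai.com/auth'?: OpenAiAuthClaim;
}

export function resolvePlanType(claims: CodexIdTokenClaims): string {
  const auth: OpenAiAuthClaim = claims['https://api.openai.com/auth'] ?? {};

  const rawPlan = auth.chatgpt_plan_type;
  const planType = typeof rawPlan === 'string' ? rawPlan : 'free';

  // If the subscription expiry is present, parseable, and in the past, report 'free'.
  const rawExpiry = auth.chatgpt_subscription_active_until;
  if (typeof rawExpiry === 'string') {
    const expiry = new Date(rawExpiry);
    if (!isNaN(expiry.getTime()) && expiry.getTime() < Date.now()) return 'free';
  }

  return planType;
}
```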
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Added WorkModeSelector component to allow users to choose between 'current', 'auto', and 'custom' work modes for feature management.
- Updated AddFeatureDialog and EditFeatureDialog to utilize the new work mode functionality, replacing the previous branch selector logic.
- Enhanced useBoardActions hook to handle branch name generation based on the selected work mode.
- Adjusted settings to default to using worktrees, improving the overall feature creation experience.
These changes streamline the feature management process by providing clearer options for branch handling and worktree isolation.
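The mode selection reduces to a small union plus branch-name derivation; the slug format in the 'auto' branch is an assumption:

```ts
export type WorkMode = 'current' | 'auto' | 'custom';

export function resolveBranchName(
  mode: WorkMode,
  featureTitle: string,
  customBranch?: string
): string | undefined {
  switch (mode) {
    case 'current':
      return undefined; // work on the checked-out branch, no separate worktree branch
    case 'custom':
      return customBranch;
    case 'auto':
      // Derive a branch name from the feature title, e.g. "feature/add-bulk-delete".
      return `feature/${featureTitle
        .toLowerCase()
        .trim()
        .replace(/[^a-z0-9]+/g, '-')
        .replace(/^-|-$/g, '')}`;
  }
}
```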
- Introduced VITE_APP_MODE variable in multiple files to manage application modes.
- Updated dev.mjs and docker-compose.dev.yml to set different modes for development.
- Enhanced type definitions in vite-env.d.ts to include VITE_APP_MODE options.
- Modified AutomakerLogo component to display version suffix based on the current app mode.
- Improved OS detection logic in use-os-detection.ts to utilize Electron's platform information.
- Updated ElectronAPI interface to expose platform information.
These changes provide better control over application behavior based on the mode, enhancing the development experience.
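In vite-env.d.ts, the new variable can be declared along these lines (the concrete mode values are assumptions):

```ts
/// <reference types="vite/client" />

interface ImportMetaEnv {
  // Drives the version suffix in AutomakerLogo, among other mode-specific behavior.
  readonly VITE_APP_MODE?: 'development' | 'docker' | 'production';
}

interface ImportMeta {
  readonly env: ImportMetaEnv;
}
```

Components can then branch on import.meta.env.VITE_APP_MODE with full type checking.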
* memory
* feat: add smart memory selection with task context
- Add taskContext parameter to loadContextFiles for intelligent file selection
- Memory files are scored based on tag matching with task keywords
- Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
- Usage statistics influence scoring (files that helped before rank higher)
- Limit to top 5 files + always include gotchas.md
- Auto-mode passes feature title/description as context
- Chat sessions pass user message as context
This prevents loading 40+ memory files and blowing past the context limit.
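A simplified sketch of the scoring, with weights taken from the list above and the MemoryFile shape assumed:

```ts
interface MemoryFile {
  name: string;          // e.g. "terminals.md"
  tags: string[];
  timesHelpful: number;  // usage statistic; field name assumed
}

export function selectMemoryFiles(files: MemoryFile[], taskContext: string, limit = 5): MemoryFile[] {
  const keywords = taskContext.toLowerCase().split(/\W+/).filter(Boolean);

  const scored = files.map((file) => {
    const category = file.name.replace(/\.md$/, '').toLowerCase();
    let score = 0;
    for (const keyword of keywords) {
      if (file.tags.some((tag) => tag.toLowerCase().includes(keyword))) score += 1;
      if (category.includes(keyword)) score += 4; // category-name match gets 4x weight
    }
    score += Math.min(file.timesHelpful, 3); // files that helped before rank a bit higher
    return { file, score };
  });

  const top = scored
    .filter(({ score }) => score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map(({ file }) => file);

  // gotchas.md is always included regardless of score.
  const gotchas = files.find((file) => file.name === 'gotchas.md');
  if (gotchas && !top.includes(gotchas)) top.push(gotchas);
  return top;
}
```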
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* refactor: enhance auto-mode service and context loader
- Improved context loading by adding task context for better memory selection.
- Updated JSON parsing logic to handle various formats and ensure robust error handling.
- Introduced file locking mechanisms to prevent race conditions during memory file updates.
- Enhanced metadata handling in memory files, including validation and sanitization.
- Refactored scoring logic for context files to improve selection accuracy based on task relevance.
These changes optimize memory file management and enhance the overall performance of the auto-mode service.
* refactor: enhance learning extraction and formatting in auto-mode service
- Improved the learning extraction process by refining the user prompt to focus on meaningful insights and structured JSON output.
- Updated the LearningEntry interface to include additional context fields for better documentation of decisions and patterns.
- Enhanced the formatLearning function to adopt an Architecture Decision Record (ADR) style, providing richer context for recorded learnings.
- Added detailed logging for better traceability during the learning extraction and appending processes.
These changes aim to improve the quality and clarity of learnings captured during the auto-mode service's operation.
* feat: integrate stripProviderPrefix utility for model ID handling
- Added stripProviderPrefix utility to various routes to ensure providers receive bare model IDs.
- Updated model references in executeQuery calls across multiple files, enhancing consistency in model ID handling.
- Introduced memoryExtractionModel in settings for improved learning extraction tasks.
These changes streamline the model ID processing and enhance the overall functionality of the provider interactions.
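Mechanically the utility is a prefix strip; this sketch takes the prefix list as a parameter purely for illustration, since the real utility bakes in its own list:

```ts
// Mechanics only; the actual prefix list lives alongside the real utility.
export function stripProviderPrefix(modelId: string, prefixes: readonly string[]): string {
  const match = prefixes.find((prefix) => modelId.startsWith(prefix));
  return match ? modelId.slice(match.length) : modelId;
}
```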
* feat: enhance error handling and server offline management in board actions
- Improved error handling in the handleRunFeature and handleStartImplementation functions so errors are thrown to the caller for explicit handling.
- Integrated connection error detection and server offline handling, redirecting users to the login page when the server is unreachable.
- Updated follow-up feature logic to include rollback mechanisms and improved user feedback for error scenarios.
These changes enhance the robustness of the board actions by ensuring proper error management and user experience during server connectivity issues.
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
- Fix disposed response object in Playwright route handler
- Add git user config to prevent 'empty ident' errors
- Increase server startup timeout and improve debugging
- Fix YAML indentation in E2E workflow
Resolves:
- 'Response has been disposed' error in open-existing-project test
- Git identity configuration issues in CI
- Backend server startup timing issues
- Combined CLI disconnection markers with OpenCode support
- Added OpenCode auth/deauth routes and API methods
- Resolved merge conflicts between feature branch and upstream v0.9.0rc
- Fix Claude, Codex, and Cursor auth handlers to check if CLI is already authenticated
- Use the same detection logic as each provider's internal checkAuth/codexAuthIndicators()
- For Codex: Check for API keys and auth files before requiring manual login
- For Cursor: Check for env var and credentials files before requiring manual auth
- For Claude: Check for cached auth tokens, settings, and credentials files
- If CLI is already authenticated: Just reconnect by removing disconnected marker
- If CLI needs auth: Tell user to manually run login command
- This prevents timeout errors when login commands can't run in non-interactive mode
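In outline, each auth handler now follows this pattern; checkCliAuthenticated and the marker helper are assumed names:

```ts
export async function handleProviderAuth(provider: {
  checkCliAuthenticated: () => Promise<boolean>; // same indicators as the provider's internal checkAuth
  removeDisconnectedMarker: () => Promise<void>;
  loginCommand: string;                          // the provider CLI's interactive login command
}) {
  if (await provider.checkCliAuthenticated()) {
    // CLI already has credentials: just reconnect instead of launching a login that would hang.
    await provider.removeDisconnectedMarker();
    return { status: 'connected' as const };
  }
  return {
    status: 'needs-login' as const,
    message: `Run "${provider.loginCommand}" in a terminal, then retry.`,
  };
}
```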
- Refactored ProvidersSetupStep component to improve the UI and streamline provider status checks for Claude, Cursor, Codex, and OpenCode.
- Introduced auto-verification for CLI authentication and improved error handling for authentication states.
- Added loading indicators for provider status checks and enhanced user feedback for installation and authentication processes.
- Updated setup store to manage verification states and ensure accurate representation of provider statuses.
These changes enhance the user experience by providing clearer feedback and a more efficient setup process for AI providers.
- Updated unit tests for OpenCode provider to include new authentication indicators.
- Refactored ProvidersSetupStep component by removing unnecessary UI elements for better clarity.
- Improved board background persistence tests by utilizing a setup function for initializing app state.
- Enhanced settings synchronization tests to ensure proper handling of login and app state.
These changes improve the testing framework and user interface for OpenCode integration, ensuring a smoother setup and authentication process.