* feat: refactor Claude API Profiles to Claude Compatible Providers
- Rename ClaudeApiProfile to ClaudeCompatibleProvider with models[] array
- Each ProviderModel has mapsToClaudeModel field for Claude tier mapping
- Add providerType field for provider-specific icons (glm, minimax, openrouter)
- Add thinking level support for provider models in phase selectors
- Show all mapped Claude models per provider model (e.g., "Maps to Haiku, Sonnet, Opus")
- Add Bulk Replace feature to switch all phases to a provider at once
- Hide Bulk Replace button when no providers are enabled
- Fix project-level phaseModelOverrides not persisting after refresh
- Fix deleting last provider not persisting (remove empty array guard)
- Add getProviderByModelId() helper for all SDK routes
- Update all routes to pass provider config for provider models
- Update terminology from "profiles" to "providers" throughout UI
- Update documentation to reflect new provider system
* fix: atomic writer race condition and bulk replace reset to defaults
1. AtomicWriter Race Condition Fix (libs/utils/src/atomic-writer.ts):
- Changed temp file naming from Date.now() to Date.now() + random hex
- Uses crypto.randomBytes(4).toString('hex') for uniqueness
- Prevents ENOENT errors when multiple concurrent writes happen
within the same millisecond
2. Bulk Replace "Anthropic Direct" Reset (both dialogs):
- When selecting "Anthropic Direct", now uses DEFAULT_PHASE_MODELS
- Properly resets thinking levels and other settings to defaults
- Added thinkingLevel to the change detection comparison
- Affects both global and project-level bulk replace dialogs
* fix: update tests for new model resolver passthrough behavior
1. model-resolver tests:
- Unknown models now pass through unchanged (provider model support)
- Removed expectations for warnings on unknown models
- Updated case sensitivity and edge case tests accordingly
- Added tests for provider-like model names (GLM-4.7, MiniMax-M2.1)
2. atomic-writer tests:
- Updated regex to match new temp file format with random suffix
- Format changed from .tmp.{timestamp} to .tmp.{timestamp}.{hex}
* refactor: simplify getPhaseModelWithOverrides calls per code review
Address code review feedback on PR #629:
- Make settingsService parameter optional in getPhaseModelWithOverrides
- Function now handles undefined settingsService gracefully by returning defaults
- Remove redundant ternary checks in 4 call sites:
- apps/server/src/routes/context/routes/describe-file.ts
- apps/server/src/routes/context/routes/describe-image.ts
- apps/server/src/routes/worktree/routes/generate-commit-message.ts
- apps/server/src/services/auto-mode-service.ts
- Remove unused DEFAULT_PHASE_MODELS imports where applicable
* test: fix server tests for provider model passthrough behavior
- Update model-resolver.test.ts to expect unknown models to pass through
unchanged (supports ClaudeCompatibleProvider models like GLM-4.7)
- Remove warning expectations for unknown models (valid for providers)
- Add missing getCredentials and getGlobalSettings mocks to
ideation-service.test.ts for settingsService
* fix: address code review feedback for model providers
- Honor thinkingLevel in generate-commit-message.ts
- Pass claudeCompatibleProvider in ideation-service.ts for provider models
- Resolve provider configuration for model overrides in generate-suggestions.ts
- Update "Active Profile" to "Active Provider" label in project-claude-section
- Use substring instead of deprecated substr in api-profiles-section
- Preserve provider enabled state when editing in api-profiles-section
* fix: address CodeRabbit review issues for Claude Compatible Providers
- Fix TypeScript TS2339 error in generate-suggestions.ts where
settingsService was narrowed to 'never' type in else branch
- Use DEFAULT_PHASE_MODELS per-phase defaults instead of hardcoded
'sonnet' in settings-helpers.ts
- Remove duplicate eventHooks key in use-settings-migration.ts
- Add claudeCompatibleProviders to localStorage migration parsing
and merging functions
- Handle canonical claude-* model IDs (claude-haiku, claude-sonnet,
claude-opus) in project-models-section display names
This resolves the CI build failures and addresses code review feedback.
* fix: skip broken list-view-priority E2E test and add Priority column label
- Skip list-view-priority.spec.ts with TODO explaining the infrastructure
issue: setupRealProject only sets localStorage but server settings
take precedence with localStorageMigrated: true
- Add 'Priority' label to list-header.tsx for the priority column
(was empty string, now shows proper header text)
- Increase column width to accommodate the label
The E2E test issue is that tests create features in a temp directory,
but the server loads from the E2E Test Project fixture path set in
setup-e2e-fixtures.mjs. Needs infrastructure fix to properly switch
projects or create features through UI instead of on disk.
* feat: add Claude API provider profiles for alternative endpoints
Add support for managing multiple Claude-compatible API endpoints
(z.AI GLM, AWS Bedrock, etc.) through provider profiles in settings.
Features:
- New ClaudeApiProfile type with base URL, API key, model mappings
- Pre-configured z.AI GLM template with correct model names
- Profile selector in Settings > Claude > API Profiles
- Clean switching between profiles and direct Anthropic API
- Immediate persistence to prevent data loss on restart
Profile support added to all execution paths:
- Agent service (chat)
- Ideation service
- Auto-mode service (feature agents, enhancements)
- Simple query service (title generation, descriptions, etc.)
- Backlog planning, commit messages, spec generation
- GitHub issue validation, suggestions
Environment variables set when profile is active:
- ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN/API_KEY
- ANTHROPIC_DEFAULT_HAIKU/SONNET/OPUS_MODEL
- API_TIMEOUT_MS, CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC
- Introduced a new Simple Query Service to streamline basic AI queries, allowing for structured JSON outputs.
- Updated existing routes to utilize the new service, replacing direct SDK calls with a unified interface for querying.
- Enhanced provider handling in various routes, including generate-spec, generate-features-from-spec, and validate-issue, to support both Claude and Cursor models seamlessly.
- Added structured output support for improved response handling and error management across the application.
- Update error event interface to handle nested error objects with
name/data/message structure from OpenCode CLI
- Extract meaningful error messages from provider errors in normalizeEvent
- Add error type handling in executeWithProvider to throw errors with
actual provider messages instead of returning empty response
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update isOpencodeModel() to detect dynamic models with provider/model format
(e.g., github-copilot/gpt-4o, google/gemini-2.5-pro, zai-coding-plan/glm-4.7)
- Update resolveModelString() to recognize and pass through OpenCode models
- Update enhance route to route OpenCode models to OpenCode provider
- Fix OpenCode CLI command format: use --format json (not stream-json)
- Remove unsupported -q and - flags from CLI arguments
- Update normalizeEvent() to handle actual OpenCode JSON event format
- Add dynamic model configuration UI with provider grouping
- Cache providers and models in app store for snappier navigation
- Show authenticated providers in OpenCode CLI status card
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Introduced a new endpoint `/resume-interrupted` to handle resuming features that were interrupted during server restarts.
- Implemented the `createResumeInterruptedHandler` to check for and resume interrupted features based on the project path.
- Enhanced the `AutoModeService` to track and manage the execution state of features, ensuring they can be resumed correctly.
- Updated relevant types and prompts to include the new 'ux-reviewer' enhancement mode for better user experience handling.
- Added new templates for UX review and other enhancement modes to improve task descriptions from a user experience perspective.
* memory
* feat: add smart memory selection with task context
- Add taskContext parameter to loadContextFiles for intelligent file selection
- Memory files are scored based on tag matching with task keywords
- Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
- Usage statistics influence scoring (files that helped before rank higher)
- Limit to top 5 files + always include gotchas.md
- Auto-mode passes feature title/description as context
- Chat sessions pass user message as context
This prevents loading 40+ memory files and killing context limits.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* refactor: enhance auto-mode service and context loader
- Improved context loading by adding task context for better memory selection.
- Updated JSON parsing logic to handle various formats and ensure robust error handling.
- Introduced file locking mechanisms to prevent race conditions during memory file updates.
- Enhanced metadata handling in memory files, including validation and sanitization.
- Refactored scoring logic for context files to improve selection accuracy based on task relevance.
These changes optimize memory file management and enhance the overall performance of the auto-mode service.
* refactor: enhance learning extraction and formatting in auto-mode service
- Improved the learning extraction process by refining the user prompt to focus on meaningful insights and structured JSON output.
- Updated the LearningEntry interface to include additional context fields for better documentation of decisions and patterns.
- Enhanced the formatLearning function to adopt an Architecture Decision Record (ADR) style, providing richer context for recorded learnings.
- Added detailed logging for better traceability during the learning extraction and appending processes.
These changes aim to improve the quality and clarity of learnings captured during the auto-mode service's operation.
* feat: integrate stripProviderPrefix utility for model ID handling
- Added stripProviderPrefix utility to various routes to ensure providers receive bare model IDs.
- Updated model references in executeQuery calls across multiple files, enhancing consistency in model ID handling.
- Introduced memoryExtractionModel in settings for improved learning extraction tasks.
These changes streamline the model ID processing and enhance the overall functionality of the provider interactions.
* feat: enhance error handling and server offline management in board actions
- Improved error handling in the handleRunFeature and handleStartImplementation functions to throw errors for better caller management.
- Integrated connection error detection and server offline handling, redirecting users to the login page when the server is unreachable.
- Updated follow-up feature logic to include rollback mechanisms and improved user feedback for error scenarios.
These changes enhance the robustness of the board actions by ensuring proper error management and user experience during server connectivity issues.
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
- Introduced maxThinkingTokens to the query options for Claude models, allowing for more precise control over the SDK's reasoning capabilities.
- Refactored the enhance handler to utilize the new getThinkingTokenBudget function, improving the integration of thinking levels into the query process.
- Updated the query options structure for clarity and maintainability, ensuring consistent handling of model parameters.
This update enhances the application's ability to adapt reasoning capabilities based on user-defined thinking levels, improving overall performance.
- Enhanced multiple server and UI components to include an optional thinking level parameter, improving the configurability of model interactions.
- Updated request handlers and services to manage and pass the thinking level, ensuring consistent data handling across the application.
- Refactored UI components to display and manage the selected model along with its thinking level, enhancing user experience and clarity.
- Adjusted the Electron API and HTTP client to support the new thinking level parameter in requests, ensuring seamless integration.
This update significantly improves the application's ability to adapt reasoning capabilities based on user-defined thinking levels, enhancing overall performance and user satisfaction.
Merges latest main branch changes including:
- MCP server support and configuration
- Pipeline configuration system
- Prompt customization settings
- GitHub issue comments in validation
- Auth middleware improvements
- Various UI/UX improvements
All Cursor CLI features preserved:
- Multi-provider support (Claude + Cursor)
- Model override capabilities
- Phase model configuration
- Provider tabs in settings
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds a readOnly option to ExecuteOptions that controls whether the
Cursor CLI runs with --force flag (allows edits) or without (suggest-only).
Read-only routes now pass readOnly: true:
- generate-spec.ts, generate-features-from-spec.ts (we write files ourselves)
- validate-issue.ts, generate-suggestions.ts (analysis only)
- describe-file.ts, describe-image.ts (description only)
- generate-plan.ts, enhance.ts (text generation only)
Routes that implement features (auto-mode-service, agent-service) keep
the default (readOnly: false) to allow file modifications.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Refactor model handling to support both Claude and Cursor models across various components.
- Introduce `stripProviderPrefix` utility for consistent model ID processing.
- Update `CursorProvider` to utilize `isCursorModel` for model validation.
- Implement model override functionality in GitHub issue validation and enhancement routes.
- Add `useCursorStatusInit` hook to initialize Cursor CLI status on app startup.
- Update UI components to reflect changes in model selection and validation processes.
This update improves the flexibility of AI model usage and enhances user experience by allowing quick model overrides.
Applied three code quality improvements suggested by Gemini Code Assist:
1. **Replace nested ternary with map object (enhance.ts)**
- Changed nested ternary operator to Record<EnhancementMode, string> map
- Improves readability and maintainability
- More declarative approach for system prompt selection
2. **Simplify handleToggle logic (prompt-customization-section.tsx)**
- Removed redundant if/else branches
- Both branches were calculating the same value
- Cleaner, more concise implementation
3. **Add type safety to updatePrompt with generics (prompt-customization-section.tsx)**
- Changed field parameter from string to keyof NonNullable<PromptCustomization[T]>
- Prevents runtime errors from misspelled field names
- Improved developer experience with autocomplete
All tests passing (774/774). Builds successful.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Add comprehensive prompt customization system allowing users to customize
all AI prompts (Auto Mode, Agent Runner, Backlog Plan, Enhancement) through
the Settings UI.
## Features
### Core Customization System
- New TypeScript types for prompt customization with enabled flag
- CustomPrompt interface with value and enabled state
- Prompts preserved even when disabled (no data loss)
- Merged prompt system (custom overrides defaults when enabled)
- Persistent storage in ~/.automaker/settings.json
### Settings UI
- New "Prompt Customization" section in Settings
- 4 tabs: Auto Mode, Agent, Backlog Plan, Enhancement
- Toggle-based editing (read-only default → editable custom)
- Dynamic textarea height based on prompt length (120px-600px)
- Visual state indicators (Custom/Default labels)
### Warning System
- Critical prompt warnings for Backlog Plan (JSON format requirement)
- Field-level warnings when editing critical prompts
- Info banners for Auto Mode planning markers
- Color-coded warnings (blue=info, amber=critical)
### Backend Integration
- Auto Mode service loads prompts from settings
- Agent service loads prompts from settings
- Backlog Plan service loads prompts from settings
- Enhancement endpoint loads prompts from settings
- Settings sync includes promptCustomization field
### Files Changed
- libs/types/src/prompts.ts - Type definitions
- libs/prompts/src/defaults.ts - Default prompt values
- libs/prompts/src/merge.ts - Merge utilities
- apps/ui/src/components/views/settings-view/prompts/ - UI components
- apps/server/src/lib/settings-helpers.ts - getPromptCustomization()
- All service files updated to use customizable prompts
## Technical Details
Prompt storage format:
```json
{
"promptCustomization": {
"autoMode": {
"planningLite": {
"value": "Custom prompt text...",
"enabled": true
}
}
}
}
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Updated 150+ files to import from @automaker/* packages
- Server imports now use @automaker/utils, @automaker/platform, @automaker/types, @automaker/model-resolver, @automaker/dependency-resolver, @automaker/git-utils
- UI imports now use @automaker/dependency-resolver and @automaker/types
- Deleted duplicate dependency-resolver files (222 lines eliminated)
- Updated dependency-resolver to use ES modules for Vite compatibility
- Added type annotation fix in auto-mode-service.ts
- Updated feature-loader to re-export Feature type from @automaker/types
- Both server and UI builds successfully verified
Phase 1 of server refactoring complete.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Simplified the handling of enhanced text in AddFeatureDialog and EditFeatureDialog by storing the enhanced text in a variable before updating the state.
- Updated the dropdown menu and button components to ensure consistent styling and behavior across both dialogs.
- Enhanced user experience by ensuring the cursor style indicates interactivity in the dropdown menus.
This refactor improves code readability and maintains a consistent UI experience.
- Introduced AIEnhancementSection to settings view for selecting enhancement models.
- Implemented enhancement functionality in AddFeatureDialog and EditFeatureDialog, allowing users to enhance feature descriptions with AI.
- Added dropdown menu for selecting enhancement modes (improve, technical, simplify, acceptance).
- Integrated new API endpoints for enhancing text using Claude AI.
- Updated navigation to include AI enhancement section in settings.
This enhances user experience by providing AI-powered text enhancement capabilities directly within the application.