mirror of
https://github.com/AutoMaker-Org/automaker.git
synced 2026-03-18 10:23:07 +00:00
Bug fixes and stability improvements (#815)
* fix(copilot): correct tool.execution_complete event handling

  The CopilotProvider was using incorrect event type and data structure for tool execution completion events from the @github/copilot-sdk, causing tool call outputs to be empty.

  Changes:
  - Update event type from 'tool.execution_end' to 'tool.execution_complete'
  - Fix data structure to use nested result.content instead of flat result
  - Fix error structure to use error.message instead of flat error
  - Add success field to match SDK event structure
  - Add tests for empty and missing result handling

  This aligns with the official @github/copilot-sdk v0.1.16 types defined in session-events.d.ts.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* test(copilot): add edge case test for error with code field

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* refactor(copilot): improve error handling and code quality

  Code review improvements:
  - Extract magic string '[ERROR]' to TOOL_ERROR_PREFIX constant
  - Add null-safe error handling with direct error variable assignment
  - Include error codes in error messages for better debugging
  - Add JSDoc documentation for tool.execution_complete handler
  - Update tests to verify error codes are displayed
  - Add missing tool_use_id assertion in error test

  These changes improve:
  - Code maintainability (no magic strings)
  - Debugging experience (error codes now visible)
  - Type safety (explicit null checks)
  - Test coverage (verify error code formatting)

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Changes from fix/bug-fixes-1-0
* test(copilot): add edge case test for error with code field

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Changes from fix/bug-fixes-1-0
* fix: Handle detached HEAD state in worktree discovery and recovery
* fix: Remove unused isDevServerStarting prop and md: breakpoint classes
* fix: Add missing dependency and sanitize persisted cache data
* feat: Ensure NODE_ENV is set to test in vitest configs
* feat: Configure Playwright to run only E2E tests
* fix: Improve PR tracking and dev server lifecycle management
* feat: Add settings-based defaults for planning mode, model config, and custom providers. Fixes #816
* feat: Add worktree and branch selector to graph view
* fix: Add timeout and error handling for worktree HEAD ref resolution
* fix: use absolute icon path and place icon outside asar on Linux

  The hicolor icon theme index only lists sizes up to 512x512, so an icon installed only at 1024x1024 is invisible to GNOME/KDE's theme resolver, causing both the app launcher and taskbar to show a generic icon. Additionally, BrowserWindow.icon cannot be read by the window manager when the file is inside app.asar.

  - extraResources: copy logo_larger.png to resources/ (outside asar) so it lands at /opt/Automaker/resources/logo_larger.png on install
  - linux.desktop.Icon: set to the absolute resources path, bypassing the hicolor theme lookup and its size constraints entirely
  - icon-manager.ts: on Linux production use process.resourcesPath so BrowserWindow receives a real filesystem path the WM can read directly

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: use linux.desktop.entry for custom desktop Icon field

  electron-builder v26 rejects arbitrary keys in linux.desktop — the correct schema wraps custom .desktop overrides inside desktop.entry.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: set desktop name on Linux so taskbar uses the correct app icon

  Without app.setDesktopName(), the window manager cannot associate the running Electron process with automaker.desktop. GNOME/KDE fall back to _NET_WM_ICON, which defaults to Electron's own bundled icon. Calling app.setDesktopName('automaker.desktop') before any window is created sets the _GTK_APPLICATION_ID hint and XDG app_id so the WM picks up the desktop entry's Icon for the taskbar.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: memory and context views mobile friendly (#818)
* Changes from fix/memory-and-context-mobile-friendly
* fix: Improve file extension detection and add path traversal protection
* refactor: Extract file extension utilities and add path traversal guards

  Code review improvements:
  - Extract isMarkdownFilename and isImageFilename to shared image-utils.ts
  - Remove duplicated code from context-view.tsx and memory-view.tsx
  - Add path traversal guard for context fixture utilities (matching memory)
  - Add 7 new tests for context fixture path traversal protection
  - Total 61 tests pass

  Addresses code review feedback from PR #813

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test: Add e2e tests for profiles crud and board background persistence
* Update apps/ui/playwright.config.ts

  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: Add robust test navigation handling and file filtering
* fix: Format NODE_OPTIONS configuration on single line
* test: Update profiles and board background persistence tests
* test: Replace iPhone 13 Pro with Pixel 5 for mobile test consistency
* Update apps/ui/src/components/views/context-view.tsx

  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* chore: Remove test project directory
* feat: Filter context files by type and improve mobile menu visibility

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: Improve test reliability and localhost handling
* chore: Use explicit TEST_USE_EXTERNAL_BACKEND env var for server cleanup
* feat: Add E2E/CI mock mode for provider factory and auth verification
* feat: Add remoteBranch parameter to pull and rebase operations
* chore: Enhance E2E testing setup with worker isolation and auth state management

  - Updated .gitignore to include worker-specific test fixtures.
  - Modified e2e-tests.yml to implement test sharding for improved CI performance.
  - Refactored global setup to authenticate once and save session state for reuse across tests.
  - Introduced worker-isolated fixture paths to prevent conflicts during parallel test execution.
  - Improved test navigation and loading handling for better reliability.
  - Updated various test files to utilize new auth state management and fixture paths.

* fix: Update Playwright configuration and improve test reliability

  - Increased the number of workers in Playwright configuration for better parallelism in CI environments.
  - Enhanced the board background persistence test to ensure dropdown stability by waiting for the list to populate before interaction, improving test reliability.

* chore: Simplify E2E test configuration and enhance mock implementations

  - Updated e2e-tests.yml to run tests in a single shard for streamlined CI execution.
  - Enhanced unit tests for worktree list handling by introducing a mock for execGitCommand, improving test reliability and coverage.
  - Refactored setup functions to better manage command mocks for git operations in tests.
  - Improved error handling in mkdirSafe function to account for undefined stats in certain environments.

* refactor: Improve test configurations and enhance error handling

  - Updated Playwright configuration to clear VITE_SERVER_URL, ensuring the frontend uses the Vite proxy and preventing cookie domain mismatches.
  - Enhanced MergeRebaseDialog logic to normalize selectedBranch for better handling of various ref formats.
  - Improved global setup with a more robust backend health check, throwing an error if the backend is not healthy after retries.
  - Refactored project creation tests to handle file existence checks more reliably.
  - Added error handling for missing E2E source fixtures to guide setup process.
  - Enhanced memory navigation to handle sandbox dialog visibility more effectively.

* refactor: Enhance Git command execution and improve test configurations

  - Updated Git command execution to merge environment paths correctly, ensuring proper command execution context.
  - Refactored the Git initialization process to handle errors more gracefully and ensure user configuration is set before creating the initial commit.
  - Improved test configurations by updating Playwright test identifiers for better clarity and consistency across different project states.
  - Enhanced cleanup functions in tests to handle directory removal more robustly, preventing errors during test execution.

* fix: Resolve React hooks errors from duplicate instances in dependency tree
* style: Format alias configuration for improved readability

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: DhanushSantosh <dhanushsantoshs05@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
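The tool.execution_complete fix described above can be sketched as follows. This is a minimal illustration inferred from the commit description only: the event interface, the `toolOutputFromEvent` helper, and its field names are assumptions, not the actual CopilotProvider code.

```typescript
// Hypothetical shape of the SDK completion event, inferred from the commit
// description (nested result.content, error.message with optional code).
interface ToolExecutionCompleteEvent {
  data: {
    toolCallId: string;
    success: boolean;
    result?: { content?: string };
    error?: { message: string; code?: string };
  };
}

// Prefix extracted from the magic string, as the refactor commit describes.
const TOOL_ERROR_PREFIX = '[ERROR]';

// Maps the completion event to a tool output string: read the nested
// result.content on success, and surface error.message (plus code) on failure.
function toolOutputFromEvent(event: ToolExecutionCompleteEvent): string {
  const { success, result, error } = event.data;
  if (!success && error) {
    const code = error.code ? ` (${error.code})` : '';
    return `${TOOL_ERROR_PREFIX} ${error.message}${code}`;
  }
  // Empty or missing result yields an empty output rather than a crash.
  return result?.content ?? '';
}
```

Reading the flat `result` field instead of `result.content` would have produced `undefined` here, which matches the "tool call outputs to be empty" symptom the commit fixes.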
This commit is contained in:
331	apps/server/tests/unit/lib/file-editor-store-logic.test.ts	Normal file
@@ -0,0 +1,331 @@
import { describe, it, expect } from 'vitest';
import {
  computeIsDirty,
  updateTabWithContent as updateTabContent,
  markTabAsSaved as markTabSaved,
} from '../../../../ui/src/components/views/file-editor-view/file-editor-dirty-utils.ts';

/**
 * Unit tests for the file editor store logic, focusing on the unsaved indicator fix.
 *
 * The bug was: file unsaved indicators weren't working reliably - editing a file
 * and saving it would sometimes leave the dirty indicator (dot) visible.
 *
 * Root causes:
 * 1. Stale closure in handleSave - captured activeTab could have old content
 * 2. Editor buffer not synced - CodeMirror might have buffered changes not yet in store
 *
 * Fix:
 * - handleSave now gets fresh state from store using getState()
 * - handleSave gets current content from editor via getValue()
 * - Content is synced to store before saving if it differs
 *
 * Since we can't easily test the React/zustand store in a node environment,
 * we test the pure logic that the store uses for dirty state tracking.
 */

describe('File editor dirty state logic', () => {
  describe('updateTabContent', () => {
    it('should set isDirty to true when content differs from originalContent', () => {
      const tab = {
        content: 'original content',
        originalContent: 'original content',
        isDirty: false,
      };

      const updated = updateTabContent(tab, 'modified content');

      expect(updated.isDirty).toBe(true);
      expect(updated.content).toBe('modified content');
      expect(updated.originalContent).toBe('original content');
    });

    it('should set isDirty to false when content matches originalContent', () => {
      const tab = {
        content: 'original content',
        originalContent: 'original content',
        isDirty: false,
      };

      // First modify it
      let updated = updateTabContent(tab, 'modified content');
      expect(updated.isDirty).toBe(true);

      // Now update back to original
      updated = updateTabContent(updated, 'original content');
      expect(updated.isDirty).toBe(false);
    });

    it('should handle empty content correctly', () => {
      const tab = {
        content: '',
        originalContent: '',
        isDirty: false,
      };

      const updated = updateTabContent(tab, 'new content');

      expect(updated.isDirty).toBe(true);
    });
  });

  describe('markTabSaved', () => {
    it('should set isDirty to false and update both content and originalContent', () => {
      const tab = {
        content: 'original content',
        originalContent: 'original content',
        isDirty: false,
      };

      // First modify
      let updated = updateTabContent(tab, 'modified content');
      expect(updated.isDirty).toBe(true);

      // Then save
      updated = markTabSaved(updated, 'modified content');

      expect(updated.isDirty).toBe(false);
      expect(updated.content).toBe('modified content');
      expect(updated.originalContent).toBe('modified content');
    });

    it('should correctly clear dirty state when save is triggered after edit', () => {
      // This test simulates the bug scenario:
      // 1. User edits file -> isDirty = true
      // 2. User saves -> markTabSaved should set isDirty = false
      let tab = {
        content: 'initial',
        originalContent: 'initial',
        isDirty: false,
      };

      // Simulate user editing
      tab = updateTabContent(tab, 'initial\nnew line');

      // Should be dirty
      expect(tab.isDirty).toBe(true);

      // Simulate save (with the content that was saved)
      tab = markTabSaved(tab, 'initial\nnew line');

      // Should NOT be dirty anymore
      expect(tab.isDirty).toBe(false);
    });
  });

  describe('race condition handling', () => {
    it('should correctly handle updateTabContent after markTabSaved with same content', () => {
      // This tests the scenario where:
      // 1. CodeMirror has a pending onChange with content "B"
      // 2. User presses save when editor shows "B"
      // 3. markTabSaved is called with "B"
      // 4. CodeMirror's pending onChange fires with "B" (same content)
      // Result: isDirty should remain false
      let tab = {
        content: 'A',
        originalContent: 'A',
        isDirty: false,
      };

      // User edits to "B"
      tab = updateTabContent(tab, 'B');

      // Save with "B"
      tab = markTabSaved(tab, 'B');

      // Late onChange with same content "B"
      tab = updateTabContent(tab, 'B');

      expect(tab.isDirty).toBe(false);
      expect(tab.content).toBe('B');
    });

    it('should correctly handle updateTabContent after markTabSaved with different content', () => {
      // This tests the scenario where:
      // 1. CodeMirror has a pending onChange with content "C"
      // 2. User presses save when store has "B"
      // 3. markTabSaved is called with "B"
      // 4. CodeMirror's pending onChange fires with "C" (different content)
      // Result: isDirty should be true (file changed after save)
      let tab = {
        content: 'A',
        originalContent: 'A',
        isDirty: false,
      };

      // User edits to "B"
      tab = updateTabContent(tab, 'B');

      // Save with "B"
      tab = markTabSaved(tab, 'B');

      // Late onChange with different content "C"
      tab = updateTabContent(tab, 'C');

      // File changed after save, so it should be dirty
      expect(tab.isDirty).toBe(true);
      expect(tab.content).toBe('C');
      expect(tab.originalContent).toBe('B');
    });

    it('should handle rapid edit-save-edit cycle correctly', () => {
      // Simulate rapid user actions
      let tab = {
        content: 'v1',
        originalContent: 'v1',
        isDirty: false,
      };

      // Edit 1
      tab = updateTabContent(tab, 'v2');
      expect(tab.isDirty).toBe(true);

      // Save 1
      tab = markTabSaved(tab, 'v2');
      expect(tab.isDirty).toBe(false);

      // Edit 2
      tab = updateTabContent(tab, 'v3');
      expect(tab.isDirty).toBe(true);

      // Save 2
      tab = markTabSaved(tab, 'v3');
      expect(tab.isDirty).toBe(false);

      // Edit 3 (back to v2)
      tab = updateTabContent(tab, 'v2');
      expect(tab.isDirty).toBe(true);

      // Save 3
      tab = markTabSaved(tab, 'v2');
      expect(tab.isDirty).toBe(false);
    });
  });

  describe('handleSave stale closure fix simulation', () => {
    it('demonstrates the fix: using fresh content instead of closure content', () => {
      // This test demonstrates why the fix was necessary.
      // The old handleSave captured activeTab in closure, which could be stale.
      // The fix gets fresh state from getState() and uses editor.getValue().

      // Simulate store state
      let storeState = {
        tabs: [
          {
            id: 'tab-1',
            content: 'A',
            originalContent: 'A',
            isDirty: false,
          },
        ],
        activeTabId: 'tab-1',
      };

      // Simulate a "stale closure" capturing the tab state
      const staleClosureTab = storeState.tabs[0];

      // User edits - store state updates
      storeState = {
        ...storeState,
        tabs: [
          {
            id: 'tab-1',
            content: 'B',
            originalContent: 'A',
            isDirty: true,
          },
        ],
      };

      // OLD BUG: Using stale closure tab would save "A" (old content)
      const oldBugSavedContent = staleClosureTab!.content;
      expect(oldBugSavedContent).toBe('A'); // Wrong! Should be "B"

      // FIX: Using fresh state from getState() gets correct content
      const freshTab = storeState.tabs[0];
      const fixedSavedContent = freshTab!.content;
      expect(fixedSavedContent).toBe('B'); // Correct!
    });

    it('demonstrates syncing editor content before save', () => {
      // This test demonstrates why we need to get content from editor directly.
      // The store might have stale content if onChange hasn't fired yet.

      // Simulate store state (has old content because onChange hasn't fired)
      let storeContent = 'A';

      // Editor has newer content (not yet synced to store)
      const editorContent = 'B';

      // FIX: Use editor content if available, fall back to store content
      const contentToSave = editorContent ?? storeContent;

      expect(contentToSave).toBe('B'); // Correctly saves editor content

      // Simulate syncing to store before save
      if (editorContent !== null && editorContent !== storeContent) {
        storeContent = editorContent;
      }

      // Now store is synced
      expect(storeContent).toBe('B');

      // After save, markTabSaved would set originalContent = savedContent
      // and isDirty = false (if no more changes come in)
    });
  });

  describe('edge cases', () => {
    it('should handle whitespace-only changes as dirty', () => {
      let tab = {
        content: 'hello',
        originalContent: 'hello',
        isDirty: false,
      };

      tab = updateTabContent(tab, 'hello ');
      expect(tab.isDirty).toBe(true);
    });

    it('should handle line ending differences as dirty', () => {
      let tab = {
        content: 'line1\nline2',
        originalContent: 'line1\nline2',
        isDirty: false,
      };

      tab = updateTabContent(tab, 'line1\r\nline2');
      expect(tab.isDirty).toBe(true);
    });

    it('should handle unicode content correctly', () => {
      let tab = {
        content: '你好世界',
        originalContent: '你好世界',
        isDirty: false,
      };

      tab = updateTabContent(tab, '你好宇宙');
      expect(tab.isDirty).toBe(true);

      tab = markTabSaved(tab, '你好宇宙');
      expect(tab.isDirty).toBe(false);
    });

    it('should handle very large content efficiently', () => {
      // Generate a large string (1MB)
      const largeOriginal = 'x'.repeat(1024 * 1024);
      const largeModified = largeOriginal + 'y';

      let tab = {
        content: largeOriginal,
        originalContent: largeOriginal,
        isDirty: false,
      };

      tab = updateTabContent(tab, largeModified);

      expect(tab.isDirty).toBe(true);
    });
  });
});
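The tests above exercise pure helpers imported from file-editor-dirty-utils.ts. A minimal sketch of the shape such helpers would need to satisfy these tests follows; this is an illustration inferred from the assertions, not the repository's actual implementation, and the `EditorTab` interface name is assumed.

```typescript
// Assumed tab shape, matching the object literals used throughout the tests.
interface EditorTab {
  content: string;
  originalContent: string;
  isDirty: boolean;
}

// A tab is dirty exactly when its buffer differs from the last saved content.
function computeIsDirty(content: string, originalContent: string): boolean {
  return content !== originalContent;
}

// Applying an edit recomputes dirtiness against the unchanged baseline, so a
// late onChange with the just-saved content leaves the tab clean.
function updateTabWithContent(tab: EditorTab, content: string): EditorTab {
  return { ...tab, content, isDirty: computeIsDirty(content, tab.originalContent) };
}

// Saving promotes the saved content to the new baseline and clears the flag.
function markTabAsSaved(tab: EditorTab, savedContent: string): EditorTab {
  return {
    ...tab,
    content: savedContent,
    originalContent: savedContent,
    isDirty: false,
  };
}
```

Because dirtiness is derived purely from string comparison rather than stored event order, the race-condition cases above (a late onChange firing after save) fall out correctly without any extra bookkeeping.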
@@ -1,5 +1,11 @@
 import { describe, it, expect, vi, beforeEach } from 'vitest';
-import { getMCPServersFromSettings } from '@/lib/settings-helpers.js';
+import {
+  getMCPServersFromSettings,
+  getProviderById,
+  getProviderByModelId,
+  resolveProviderContext,
+  getAllProviderModels,
+} from '@/lib/settings-helpers.js';
 import type { SettingsService } from '@/services/settings-service.js';

 // Mock the logger
@@ -286,4 +292,691 @@ describe('settings-helpers.ts', () => {
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('getProviderById', () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should return provider when found by ID', async () => {
|
||||
const mockProvider = { id: 'zai-1', name: 'Zai', enabled: true };
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getProviderById('zai-1', mockSettingsService);
|
||||
expect(result.provider).toEqual(mockProvider);
|
||||
});
|
||||
|
||||
it('should return undefined when provider not found', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getProviderById('unknown', mockSettingsService);
|
||||
expect(result.provider).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should return provider even if disabled (caller handles enabled state)', async () => {
|
||||
const mockProvider = { id: 'disabled-1', name: 'Disabled', enabled: false };
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getProviderById('disabled-1', mockSettingsService);
|
||||
expect(result.provider).toEqual(mockProvider);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getProviderByModelId', () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should return provider and modelConfig when found by model ID', async () => {
|
||||
const mockModel = { id: 'custom-model-1', name: 'Custom Model' };
|
||||
const mockProvider = {
|
||||
id: 'provider-1',
|
||||
name: 'Provider 1',
|
||||
enabled: true,
|
||||
models: [mockModel],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getProviderByModelId('custom-model-1', mockSettingsService);
|
||||
expect(result.provider).toEqual(mockProvider);
|
||||
expect(result.modelConfig).toEqual(mockModel);
|
||||
});
|
||||
|
||||
it('should resolve mapped Claude model when mapsToClaudeModel is present', async () => {
|
||||
const mockModel = {
|
||||
id: 'custom-model-1',
|
||||
name: 'Custom Model',
|
||||
mapsToClaudeModel: 'sonnet-3-5',
|
||||
};
|
||||
const mockProvider = {
|
||||
id: 'provider-1',
|
||||
name: 'Provider 1',
|
||||
enabled: true,
|
||||
models: [mockModel],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getProviderByModelId('custom-model-1', mockSettingsService);
|
||||
expect(result.resolvedModel).toBeDefined();
|
||||
// resolveModelString('sonnet-3-5') usually returns 'claude-3-5-sonnet-20240620' or similar
|
||||
});
|
||||
|
||||
it('should ignore disabled providers', async () => {
|
||||
const mockModel = { id: 'custom-model-1', name: 'Custom Model' };
|
||||
const mockProvider = {
|
||||
id: 'disabled-1',
|
||||
name: 'Disabled Provider',
|
||||
enabled: false,
|
||||
models: [mockModel],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getProviderByModelId('custom-model-1', mockSettingsService);
|
||||
expect(result.provider).toBeUndefined();
|
||||
});
|
||||
});
|
||||
|
||||
describe('resolveProviderContext', () => {
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should resolve provider by explicit providerId', async () => {
|
||||
const mockProvider = {
|
||||
id: 'provider-1',
|
||||
name: 'Provider 1',
|
||||
enabled: true,
|
||||
models: [{ id: 'custom-model-1', name: 'Custom Model' }],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({ anthropicApiKey: 'test-key' }),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await resolveProviderContext(
|
||||
mockSettingsService,
|
||||
'custom-model-1',
|
||||
'provider-1'
|
||||
);
|
||||
|
||||
expect(result.provider).toEqual(mockProvider);
|
||||
expect(result.credentials).toEqual({ anthropicApiKey: 'test-key' });
|
||||
});
|
||||
|
||||
it('should return undefined provider when explicit providerId not found', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await resolveProviderContext(
|
||||
mockSettingsService,
|
||||
'some-model',
|
||||
'unknown-provider'
|
||||
);
|
||||
|
||||
expect(result.provider).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should fallback to model-based lookup when providerId not provided', async () => {
|
||||
const mockProvider = {
|
||||
id: 'provider-1',
|
||||
name: 'Provider 1',
|
||||
enabled: true,
|
||||
models: [{ id: 'custom-model-1', name: 'Custom Model' }],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await resolveProviderContext(mockSettingsService, 'custom-model-1');
|
||||
|
||||
expect(result.provider).toEqual(mockProvider);
|
||||
expect(result.modelConfig?.id).toBe('custom-model-1');
|
||||
});
|
||||
|
||||
it('should resolve mapsToClaudeModel to actual Claude model', async () => {
|
||||
const mockProvider = {
|
||||
id: 'provider-1',
|
||||
name: 'Provider 1',
|
||||
enabled: true,
|
||||
models: [
|
||||
{
|
||||
id: 'custom-model-1',
|
||||
name: 'Custom Model',
|
||||
mapsToClaudeModel: 'sonnet',
|
||||
},
|
||||
],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [mockProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await resolveProviderContext(mockSettingsService, 'custom-model-1');
|
||||
|
||||
// resolveModelString('sonnet') should return a valid Claude model ID
|
||||
expect(result.resolvedModel).toBeDefined();
|
||||
expect(result.resolvedModel).toContain('claude');
|
||||
});
|
||||
|
||||
it('should handle empty providers list', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await resolveProviderContext(mockSettingsService, 'some-model');
|
||||
|
||||
expect(result.provider).toBeUndefined();
|
||||
expect(result.resolvedModel).toBeUndefined();
|
||||
expect(result.modelConfig).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should handle missing claudeCompatibleProviders field', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await resolveProviderContext(mockSettingsService, 'some-model');
|
||||
|
||||
expect(result.provider).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should skip disabled providers during fallback lookup', async () => {
|
||||
const disabledProvider = {
|
||||
id: 'disabled-1',
|
||||
name: 'Disabled Provider',
|
||||
enabled: false,
|
||||
models: [{ id: 'model-in-disabled', name: 'Model' }],
|
||||
};
|
||||
const enabledProvider = {
|
||||
id: 'enabled-1',
|
||||
name: 'Enabled Provider',
|
||||
enabled: true,
|
||||
models: [{ id: 'model-in-enabled', name: 'Model' }],
|
||||
};
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
claudeCompatibleProviders: [disabledProvider, enabledProvider],
|
||||
}),
|
||||
getCredentials: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
// Should skip the disabled provider and find the model in the enabled one
|
||||
const result = await resolveProviderContext(mockSettingsService, 'model-in-enabled');
|
||||
expect(result.provider?.id).toBe('enabled-1');
|
||||
|
||||
      // Should not find model that only exists in disabled provider
      const result2 = await resolveProviderContext(mockSettingsService, 'model-in-disabled');
      expect(result2.provider).toBeUndefined();
    });

    it('should perform case-insensitive model ID matching', async () => {
      const mockProvider = {
        id: 'provider-1',
        name: 'Provider 1',
        enabled: true,
        models: [{ id: 'Custom-Model-1', name: 'Custom Model' }],
      };
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: [mockProvider],
        }),
        getCredentials: vi.fn().mockResolvedValue({}),
      } as unknown as SettingsService;

      const result = await resolveProviderContext(mockSettingsService, 'custom-model-1');

      expect(result.provider).toEqual(mockProvider);
      expect(result.modelConfig?.id).toBe('Custom-Model-1');
    });

    it('should return error result on exception', async () => {
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockRejectedValue(new Error('Settings error')),
        getCredentials: vi.fn().mockResolvedValue({}),
      } as unknown as SettingsService;

      const result = await resolveProviderContext(mockSettingsService, 'some-model');

      expect(result.provider).toBeUndefined();
      expect(result.credentials).toBeUndefined();
      expect(result.resolvedModel).toBeUndefined();
      expect(result.modelConfig).toBeUndefined();
    });

    it('should persist and load provider config from server settings', async () => {
      // This test verifies the main bug fix: providers are loaded from server settings
      const savedProvider = {
        id: 'saved-provider-1',
        name: 'Saved Provider',
        enabled: true,
        apiKeySource: 'credentials' as const,
        models: [
          {
            id: 'saved-model-1',
            name: 'Saved Model',
            mapsToClaudeModel: 'sonnet',
          },
        ],
      };

      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: [savedProvider],
        }),
        getCredentials: vi.fn().mockResolvedValue({
          anthropicApiKey: 'saved-api-key',
        }),
      } as unknown as SettingsService;

      // Simulate loading saved provider config
      const result = await resolveProviderContext(
        mockSettingsService,
        'saved-model-1',
        'saved-provider-1'
      );

      // Verify the provider is loaded from server settings
      expect(result.provider).toEqual(savedProvider);
      expect(result.provider?.id).toBe('saved-provider-1');
      expect(result.provider?.models).toHaveLength(1);
      expect(result.credentials?.anthropicApiKey).toBe('saved-api-key');
      // Verify model mapping is resolved
      expect(result.resolvedModel).toContain('claude');
    });

    it('should accept custom logPrefix parameter', async () => {
      // Verify that the logPrefix parameter is accepted (used by facade.ts)
      const mockProvider = {
        id: 'provider-1',
        name: 'Provider 1',
        enabled: true,
        models: [{ id: 'model-1', name: 'Model' }],
      };
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: [mockProvider],
        }),
        getCredentials: vi.fn().mockResolvedValue({}),
      } as unknown as SettingsService;

      // Call with custom logPrefix (as facade.ts does)
      const result = await resolveProviderContext(
        mockSettingsService,
        'model-1',
        undefined,
        '[CustomPrefix]'
      );

      // Function should work the same with custom prefix
      expect(result.provider).toEqual(mockProvider);
    });

    // Session restore scenarios - provider.enabled: undefined should be treated as enabled
    describe('session restore scenarios (enabled: undefined)', () => {
      it('should treat provider with enabled: undefined as enabled', async () => {
        // This is the main bug fix: when providers are loaded from settings on session restore,
        // enabled might be undefined (not explicitly set) and should be treated as enabled
        const mockProvider = {
          id: 'provider-1',
          name: 'Provider 1',
          enabled: undefined, // Not explicitly set - should be treated as enabled
          models: [{ id: 'model-1', name: 'Model' }],
        };
        const mockSettingsService = {
          getGlobalSettings: vi.fn().mockResolvedValue({
            claudeCompatibleProviders: [mockProvider],
          }),
          getCredentials: vi.fn().mockResolvedValue({}),
        } as unknown as SettingsService;

        const result = await resolveProviderContext(mockSettingsService, 'model-1');

        // Provider should be found and used even though enabled is undefined
        expect(result.provider).toEqual(mockProvider);
        expect(result.modelConfig?.id).toBe('model-1');
      });

      it('should use provider by ID when enabled is undefined', async () => {
        // This tests the explicit providerId lookup with undefined enabled
        const mockProvider = {
          id: 'provider-1',
          name: 'Provider 1',
          enabled: undefined, // Not explicitly set - should be treated as enabled
          models: [{ id: 'custom-model', name: 'Custom Model', mapsToClaudeModel: 'sonnet' }],
        };
        const mockSettingsService = {
          getGlobalSettings: vi.fn().mockResolvedValue({
            claudeCompatibleProviders: [mockProvider],
          }),
          getCredentials: vi.fn().mockResolvedValue({ anthropicApiKey: 'test-key' }),
        } as unknown as SettingsService;

        const result = await resolveProviderContext(
          mockSettingsService,
          'custom-model',
          'provider-1'
        );

        // Provider should be found and used even though enabled is undefined
        expect(result.provider).toEqual(mockProvider);
        expect(result.credentials?.anthropicApiKey).toBe('test-key');
        expect(result.resolvedModel).toContain('claude');
      });

      it('should find model via fallback in provider with enabled: undefined', async () => {
        // Test fallback model lookup when provider has undefined enabled
        const providerWithUndefinedEnabled = {
          id: 'provider-1',
          name: 'Provider 1',
          // enabled is not set (undefined)
          models: [{ id: 'model-1', name: 'Model' }],
        };
        const mockSettingsService = {
          getGlobalSettings: vi.fn().mockResolvedValue({
            claudeCompatibleProviders: [providerWithUndefinedEnabled],
          }),
          getCredentials: vi.fn().mockResolvedValue({}),
        } as unknown as SettingsService;

        const result = await resolveProviderContext(mockSettingsService, 'model-1');

        expect(result.provider).toEqual(providerWithUndefinedEnabled);
        expect(result.modelConfig?.id).toBe('model-1');
      });

      it('should still use provider for connection when model not found in its models array', async () => {
        // This tests the fix: when providerId is explicitly set and the provider is found,
        // but the model isn't in that provider's models array, we still use that provider
        // for connection settings (baseUrl, credentials)
        const mockProvider = {
          id: 'provider-1',
          name: 'Provider 1',
          enabled: true,
          baseUrl: 'https://custom-api.example.com',
          models: [{ id: 'other-model', name: 'Other Model' }],
        };
        const mockSettingsService = {
          getGlobalSettings: vi.fn().mockResolvedValue({
            claudeCompatibleProviders: [mockProvider],
          }),
          getCredentials: vi.fn().mockResolvedValue({ anthropicApiKey: 'test-key' }),
        } as unknown as SettingsService;

        const result = await resolveProviderContext(
          mockSettingsService,
          'unknown-model', // Model not in provider's models array
          'provider-1'
        );

        // Provider should still be returned for connection settings
        expect(result.provider).toEqual(mockProvider);
        // modelConfig should be undefined since the model wasn't found
        expect(result.modelConfig).toBeUndefined();
        // resolvedModel should be undefined since no mapping was found
        expect(result.resolvedModel).toBeUndefined();
      });

      it('should fallback to find modelConfig in other providers when not in explicit providerId provider', async () => {
        // When providerId is set and the provider is found, but the model isn't there,
        // we should still search for modelConfig in other providers
        const provider1 = {
          id: 'provider-1',
          name: 'Provider 1',
          enabled: true,
          baseUrl: 'https://provider1.example.com',
          models: [{ id: 'provider1-model', name: 'Provider 1 Model' }],
        };
        const provider2 = {
          id: 'provider-2',
          name: 'Provider 2',
          enabled: true,
          baseUrl: 'https://provider2.example.com',
          models: [
            {
              id: 'shared-model',
              name: 'Shared Model',
              mapsToClaudeModel: 'sonnet',
            },
          ],
        };
        const mockSettingsService = {
          getGlobalSettings: vi.fn().mockResolvedValue({
            claudeCompatibleProviders: [provider1, provider2],
          }),
          getCredentials: vi.fn().mockResolvedValue({ anthropicApiKey: 'test-key' }),
        } as unknown as SettingsService;

        const result = await resolveProviderContext(
          mockSettingsService,
          'shared-model', // This model is in provider-2, not provider-1
          'provider-1' // But we explicitly want to use provider-1
        );

        // Provider should still be provider-1 (for connection settings)
        expect(result.provider).toEqual(provider1);
        // But modelConfig should be found from provider-2
        expect(result.modelConfig?.id).toBe('shared-model');
        // And the model mapping should be resolved
        expect(result.resolvedModel).toContain('claude');
      });

      it('should handle multiple providers with mixed enabled states', async () => {
        // Test the full session restore scenario with multiple providers
        const providers = [
          {
            id: 'provider-1',
            name: 'First Provider',
            enabled: undefined, // Undefined after restore
            models: [{ id: 'model-a', name: 'Model A' }],
          },
          {
            id: 'provider-2',
            name: 'Second Provider',
            // enabled field missing entirely
            models: [{ id: 'model-b', name: 'Model B', mapsToClaudeModel: 'opus' }],
          },
          {
            id: 'provider-3',
            name: 'Disabled Provider',
            enabled: false, // Explicitly disabled
            models: [{ id: 'model-c', name: 'Model C' }],
          },
        ];

        const mockSettingsService = {
          getGlobalSettings: vi.fn().mockResolvedValue({
            claudeCompatibleProviders: providers,
          }),
          getCredentials: vi.fn().mockResolvedValue({ anthropicApiKey: 'test-key' }),
        } as unknown as SettingsService;

        // Provider 1 should work (enabled: undefined)
        const result1 = await resolveProviderContext(mockSettingsService, 'model-a', 'provider-1');
        expect(result1.provider?.id).toBe('provider-1');
        expect(result1.modelConfig?.id).toBe('model-a');

        // Provider 2 should work (enabled field missing)
        const result2 = await resolveProviderContext(mockSettingsService, 'model-b', 'provider-2');
        expect(result2.provider?.id).toBe('provider-2');
        expect(result2.modelConfig?.id).toBe('model-b');
        expect(result2.resolvedModel).toContain('claude');

        // Provider 3 with explicit providerId IS returned even if disabled
        // (the caller handles the enabled-state check)
        const result3 = await resolveProviderContext(mockSettingsService, 'model-c', 'provider-3');
        // Provider is found, but modelConfig won't be found since disabled providers
        // skip model lookup in their models array
        expect(result3.provider).toEqual(providers[2]);
        expect(result3.modelConfig).toBeUndefined();
      });
    });
  });

  describe('getAllProviderModels', () => {
    beforeEach(() => {
      vi.clearAllMocks();
    });

    it('should return all models from enabled providers', async () => {
      const mockProviders = [
        {
          id: 'provider-1',
          name: 'Provider 1',
          enabled: true,
          models: [
            { id: 'model-1', name: 'Model 1' },
            { id: 'model-2', name: 'Model 2' },
          ],
        },
        {
          id: 'provider-2',
          name: 'Provider 2',
          enabled: true,
          models: [{ id: 'model-3', name: 'Model 3' }],
        },
      ];
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: mockProviders,
        }),
      } as unknown as SettingsService;

      const result = await getAllProviderModels(mockSettingsService);

      expect(result).toHaveLength(3);
      expect(result[0].providerId).toBe('provider-1');
      expect(result[0].model.id).toBe('model-1');
      expect(result[2].providerId).toBe('provider-2');
    });

    it('should filter out disabled providers', async () => {
      const mockProviders = [
        {
          id: 'enabled-1',
          name: 'Enabled Provider',
          enabled: true,
          models: [{ id: 'model-1', name: 'Model 1' }],
        },
        {
          id: 'disabled-1',
          name: 'Disabled Provider',
          enabled: false,
          models: [{ id: 'model-2', name: 'Model 2' }],
        },
      ];
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: mockProviders,
        }),
      } as unknown as SettingsService;

      const result = await getAllProviderModels(mockSettingsService);

      expect(result).toHaveLength(1);
      expect(result[0].providerId).toBe('enabled-1');
    });

    it('should return empty array when no providers configured', async () => {
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: [],
        }),
      } as unknown as SettingsService;

      const result = await getAllProviderModels(mockSettingsService);

      expect(result).toEqual([]);
    });

    it('should handle missing claudeCompatibleProviders field', async () => {
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({}),
      } as unknown as SettingsService;

      const result = await getAllProviderModels(mockSettingsService);

      expect(result).toEqual([]);
    });

    it('should handle provider with no models', async () => {
      const mockProviders = [
        {
          id: 'provider-1',
          name: 'Provider 1',
          enabled: true,
          models: [],
        },
        {
          id: 'provider-2',
          name: 'Provider 2',
          enabled: true,
          // no models field
        },
      ];
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockResolvedValue({
          claudeCompatibleProviders: mockProviders,
        }),
      } as unknown as SettingsService;

      const result = await getAllProviderModels(mockSettingsService);

      expect(result).toEqual([]);
    });

    it('should return empty array on exception', async () => {
      const mockSettingsService = {
        getGlobalSettings: vi.fn().mockRejectedValue(new Error('Settings error')),
      } as unknown as SettingsService;

      const result = await getAllProviderModels(mockSettingsService);

      expect(result).toEqual([]);
    });
  });
});

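The filtering behavior these `getAllProviderModels` tests pin down can be sketched in isolation. This is a minimal illustration under assumptions, not the real implementation: the `{ providerId, model }` entry shape and the "only `enabled: false` is excluded" rule are inferred from the assertions above.

```typescript
// Minimal sketch of enabled-provider model collection (assumed shapes).
interface ProviderModel {
  id: string;
  name: string;
}

interface ClaudeCompatibleProvider {
  id: string;
  enabled?: boolean;
  models?: ProviderModel[];
}

function collectProviderModels(
  providers: ClaudeCompatibleProvider[] | undefined
): Array<{ providerId: string; model: ProviderModel }> {
  const result: Array<{ providerId: string; model: ProviderModel }> = [];
  for (const provider of providers ?? []) {
    // enabled: undefined is treated as enabled; only enabled: false is skipped
    if (provider.enabled === false) continue;
    for (const model of provider.models ?? []) {
      result.push({ providerId: provider.id, model });
    }
  }
  return result;
}
```

A missing `claudeCompatibleProviders` field and providers without a `models` array both fall out naturally from the `?? []` defaults, matching the empty-array expectations above.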
@@ -15,6 +15,7 @@ import {
  calculateReasoningTimeout,
  REASONING_TIMEOUT_MULTIPLIERS,
  DEFAULT_TIMEOUT_MS,
  validateBareModelId,
} from '@automaker/types';

const OPENAI_API_KEY_ENV = 'OPENAI_API_KEY';
@@ -455,4 +456,19 @@ describe('codex-provider.ts', () => {
      expect(calculateReasoningTimeout('xhigh')).toBe(120000);
    });
  });

  describe('validateBareModelId integration', () => {
    it('should allow codex- prefixed models for Codex provider with expectedProvider="codex"', () => {
      expect(() => validateBareModelId('codex-gpt-4', 'CodexProvider', 'codex')).not.toThrow();
      expect(() =>
        validateBareModelId('codex-gpt-5.1-codex-max', 'CodexProvider', 'codex')
      ).not.toThrow();
    });

    it('should reject other provider prefixes for Codex provider', () => {
      expect(() => validateBareModelId('cursor-gpt-4', 'CodexProvider', 'codex')).toThrow();
      expect(() => validateBareModelId('gemini-2.5-flash', 'CodexProvider', 'codex')).toThrow();
      expect(() => validateBareModelId('copilot-gpt-4', 'CodexProvider', 'codex')).toThrow();
    });
  });
});

@@ -331,13 +331,15 @@ describe('copilot-provider.ts', () => {
      });
    });

    it('should normalize tool.execution_complete event', () => {
      const event = {
        type: 'tool.execution_complete',
        data: {
          toolName: 'read_file',
          toolCallId: 'call-123',
          success: true,
          result: {
            content: 'file content',
          },
        },
      };

@@ -357,23 +359,85 @@ describe('copilot-provider.ts', () => {
      });
    });

    it('should handle tool.execution_complete with error', () => {
      const event = {
        type: 'tool.execution_complete',
        data: {
          toolName: 'bash',
          toolCallId: 'call-456',
          success: false,
          error: {
            message: 'Command failed',
          },
        },
      };

      const result = provider.normalizeEvent(event);
      expect(result?.message?.content?.[0]).toMatchObject({
        type: 'tool_result',
        tool_use_id: 'call-456',
        content: '[ERROR] Command failed',
      });
    });

    it('should handle tool.execution_complete with empty result', () => {
      const event = {
        type: 'tool.execution_complete',
        data: {
          toolCallId: 'call-789',
          success: true,
          result: {
            content: '',
          },
        },
      };

      const result = provider.normalizeEvent(event);
      expect(result?.message?.content?.[0]).toMatchObject({
        type: 'tool_result',
        tool_use_id: 'call-789',
        content: '',
      });
    });

    it('should handle tool.execution_complete with missing result', () => {
      const event = {
        type: 'tool.execution_complete',
        data: {
          toolCallId: 'call-999',
          success: true,
          // No result field
        },
      };

      const result = provider.normalizeEvent(event);
      expect(result?.message?.content?.[0]).toMatchObject({
        type: 'tool_result',
        tool_use_id: 'call-999',
        content: '',
      });
    });

    it('should handle tool.execution_complete with error code', () => {
      const event = {
        type: 'tool.execution_complete',
        data: {
          toolCallId: 'call-567',
          success: false,
          error: {
            message: 'Permission denied',
            code: 'EACCES',
          },
        },
      };

      const result = provider.normalizeEvent(event);
      expect(result?.message?.content?.[0]).toMatchObject({
        type: 'tool_result',
        tool_use_id: 'call-567',
        content: '[ERROR] Permission denied (EACCES)',
      });
    });

    it('should normalize session.idle to success result', () => {
      const event = { type: 'session.idle' };

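The `tool.execution_complete` normalization exercised above can be sketched as a pure function. This is a hedged sketch, not the real `CopilotProvider` code: `TOOL_ERROR_PREFIX` and the nested `result`/`error` shapes mirror the commit notes and the test assertions, but the actual handler does more (it builds the full `tool_result` block).

```typescript
// Sketch of tool.execution_complete content extraction (assumed shapes).
const TOOL_ERROR_PREFIX = '[ERROR]';

interface ToolExecutionCompleteData {
  toolCallId: string;
  success: boolean;
  result?: { content?: string };
  error?: { message: string; code?: string };
}

function toToolResultContent(data: ToolExecutionCompleteData): string {
  if (!data.success && data.error) {
    // Include the error code when present, for better debugging
    const suffix = data.error.code ? ` (${data.error.code})` : '';
    return `${TOOL_ERROR_PREFIX} ${data.error.message}${suffix}`;
  }
  // Empty or missing result normalizes to an empty string, never undefined
  return data.result?.content ?? '';
}
```

For example, `toToolResultContent({ toolCallId: 'call-567', success: false, error: { message: 'Permission denied', code: 'EACCES' } })` yields `'[ERROR] Permission denied (EACCES)'`, matching the error-code test above.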
@@ -1,5 +1,6 @@
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { CursorProvider } from '@/providers/cursor-provider.js';
import { validateBareModelId } from '@automaker/types';

describe('cursor-provider.ts', () => {
  describe('buildCliArgs', () => {
@@ -154,4 +155,81 @@ describe('cursor-provider.ts', () => {
      expect(msg!.subtype).toBe('success');
    });
  });

  describe('Cursor Gemini models support', () => {
    let provider: CursorProvider;

    beforeEach(() => {
      provider = Object.create(CursorProvider.prototype) as CursorProvider & {
        cliPath?: string;
      };
      provider.cliPath = '/usr/local/bin/cursor-agent';
    });

    describe('buildCliArgs with Cursor Gemini models', () => {
      it('should handle cursor-gemini-3-pro model', () => {
        const args = provider.buildCliArgs({
          prompt: 'Write a function',
          model: 'gemini-3-pro', // Bare model ID after stripping the cursor- prefix
          cwd: '/tmp/project',
        });

        const modelIndex = args.indexOf('--model');
        expect(modelIndex).toBeGreaterThan(-1);
        expect(args[modelIndex + 1]).toBe('gemini-3-pro');
      });

      it('should handle cursor-gemini-3-flash model', () => {
        const args = provider.buildCliArgs({
          prompt: 'Quick task',
          model: 'gemini-3-flash', // Bare model ID after stripping the cursor- prefix
          cwd: '/tmp/project',
        });

        const modelIndex = args.indexOf('--model');
        expect(modelIndex).toBeGreaterThan(-1);
        expect(args[modelIndex + 1]).toBe('gemini-3-flash');
      });

      it('should include --resume with Cursor Gemini models when sdkSessionId is provided', () => {
        const args = provider.buildCliArgs({
          prompt: 'Continue task',
          model: 'gemini-3-pro',
          cwd: '/tmp/project',
          sdkSessionId: 'cursor-gemini-session-123',
        });

        const resumeIndex = args.indexOf('--resume');
        expect(resumeIndex).toBeGreaterThan(-1);
        expect(args[resumeIndex + 1]).toBe('cursor-gemini-session-123');
      });
    });

    describe('validateBareModelId with Cursor Gemini models', () => {
      it('should allow gemini- prefixed models for Cursor provider with expectedProvider="cursor"', () => {
        // This is the key fix - Cursor Gemini models have bare IDs like "gemini-3-pro"
        expect(() => validateBareModelId('gemini-3-pro', 'CursorProvider', 'cursor')).not.toThrow();
        expect(() =>
          validateBareModelId('gemini-3-flash', 'CursorProvider', 'cursor')
        ).not.toThrow();
      });

      it('should still reject other provider prefixes for Cursor provider', () => {
        expect(() => validateBareModelId('codex-gpt-4', 'CursorProvider', 'cursor')).toThrow();
        expect(() => validateBareModelId('copilot-gpt-4', 'CursorProvider', 'cursor')).toThrow();
        expect(() => validateBareModelId('opencode-gpt-4', 'CursorProvider', 'cursor')).toThrow();
      });

      it('should accept cursor- prefixed models when expectedProvider is "cursor" (for double-prefix validation)', () => {
        // Note: when expectedProvider="cursor", the cursor- prefix check is skipped.
        // This is intentional because validation happens AFTER prefix stripping:
        // if cursor-gemini-3-pro reaches validateBareModelId with expectedProvider="cursor",
        // the prefix was NOT properly stripped, but the check is skipped anyway since
        // we're testing whether the Cursor provider itself can receive cursor- prefixed models.
        expect(() =>
          validateBareModelId('cursor-gemini-3-pro', 'CursorProvider', 'cursor')
        ).not.toThrow();
      });
    });
  });
});

@@ -1,6 +1,7 @@
import { describe, it, expect, beforeEach } from 'vitest';
import { GeminiProvider } from '@/providers/gemini-provider.js';
import type { ProviderMessage } from '@automaker/types';
import { validateBareModelId } from '@automaker/types';

describe('gemini-provider.ts', () => {
  let provider: GeminiProvider;
@@ -253,4 +254,19 @@ describe('gemini-provider.ts', () => {
      expect(msg.subtype).toBe('success');
    });
  });

  describe('validateBareModelId integration', () => {
    it('should allow gemini- prefixed models for Gemini provider with expectedProvider="gemini"', () => {
      expect(() =>
        validateBareModelId('gemini-2.5-flash', 'GeminiProvider', 'gemini')
      ).not.toThrow();
      expect(() => validateBareModelId('gemini-2.5-pro', 'GeminiProvider', 'gemini')).not.toThrow();
    });

    it('should reject other provider prefixes for Gemini provider', () => {
      expect(() => validateBareModelId('cursor-gpt-4', 'GeminiProvider', 'gemini')).toThrow();
      expect(() => validateBareModelId('codex-gpt-4', 'GeminiProvider', 'gemini')).toThrow();
      expect(() => validateBareModelId('copilot-gpt-4', 'GeminiProvider', 'gemini')).toThrow();
    });
  });
});

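Taken together, the Codex, Cursor, and Gemini `validateBareModelId` tests imply a per-provider table of accepted prefixes. The sketch below is inferred purely from the test expectations (notably `gemini-` being accepted for Cursor); the real `@automaker/types` implementation and its error messages may differ.

```typescript
// Sketch of prefix validation consistent with the test expectations above.
// KNOWN_PREFIXES and ALLOWED_PREFIXES are assumptions, not the real tables.
const KNOWN_PREFIXES = ['codex', 'copilot', 'cursor', 'gemini', 'opencode'];

const ALLOWED_PREFIXES: Record<string, string[]> = {
  codex: ['codex'],
  cursor: ['cursor', 'gemini'], // Cursor serves Gemini models with bare gemini- IDs
  gemini: ['gemini'],
};

function validateBareModelId(modelId: string, caller: string, expectedProvider: string): void {
  const allowed = ALLOWED_PREFIXES[expectedProvider] ?? [expectedProvider];
  for (const prefix of KNOWN_PREFIXES) {
    if (allowed.includes(prefix)) continue; // own (or whitelisted) prefix is fine
    if (modelId.startsWith(`${prefix}-`)) {
      throw new Error(`${caller}: model "${modelId}" has foreign provider prefix "${prefix}-"`);
    }
  }
}
```

Under this table, `validateBareModelId('gemini-3-pro', 'CursorProvider', 'cursor')` passes while `validateBareModelId('codex-gpt-4', 'CursorProvider', 'cursor')` throws, matching the Cursor tests.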
@@ -0,0 +1,270 @@
|
||||
/**
|
||||
* Tests for default fields applied to features created by parseAndCreateFeatures
|
||||
*
|
||||
* Verifies that auto-created features include planningMode: 'skip',
|
||||
* requirePlanApproval: false, and dependencies: [].
|
||||
*/
|
||||
|
||||
import { describe, it, expect, vi, beforeEach } from 'vitest';
|
||||
import path from 'path';
|
||||
|
||||
// Use vi.hoisted to create mock functions that can be referenced in vi.mock factories
|
||||
const { mockMkdir, mockAtomicWriteJson, mockExtractJsonWithArray, mockCreateNotification } =
|
||||
vi.hoisted(() => ({
|
||||
mockMkdir: vi.fn().mockResolvedValue(undefined),
|
||||
mockAtomicWriteJson: vi.fn().mockResolvedValue(undefined),
|
||||
mockExtractJsonWithArray: vi.fn(),
|
||||
mockCreateNotification: vi.fn().mockResolvedValue(undefined),
|
||||
}));
|
||||
|
||||
vi.mock('@/lib/secure-fs.js', () => ({
|
||||
mkdir: mockMkdir,
|
||||
}));
|
||||
|
||||
vi.mock('@automaker/utils', () => ({
|
||||
createLogger: vi.fn().mockReturnValue({
|
||||
info: vi.fn(),
|
||||
warn: vi.fn(),
|
||||
error: vi.fn(),
|
||||
debug: vi.fn(),
|
||||
}),
|
||||
atomicWriteJson: mockAtomicWriteJson,
|
||||
DEFAULT_BACKUP_COUNT: 3,
|
||||
}));
|
||||
|
||||
vi.mock('@automaker/platform', () => ({
|
||||
getFeaturesDir: vi.fn((projectPath: string) => path.join(projectPath, '.automaker', 'features')),
|
||||
}));
|
||||
|
||||
vi.mock('@/lib/json-extractor.js', () => ({
|
||||
extractJsonWithArray: mockExtractJsonWithArray,
|
||||
}));
|
||||
|
||||
vi.mock('@/services/notification-service.js', () => ({
|
||||
getNotificationService: vi.fn(() => ({
|
||||
createNotification: mockCreateNotification,
|
||||
})),
|
||||
}));
|
||||
|
||||
// Import after mocks are set up
|
||||
import { parseAndCreateFeatures } from '../../../../src/routes/app-spec/parse-and-create-features.js';
|
||||
|
||||
describe('parseAndCreateFeatures - default fields', () => {
|
||||
const mockEvents = {
|
||||
emit: vi.fn(),
|
||||
} as any;
|
||||
|
||||
const projectPath = '/test/project';
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
it('should set planningMode to "skip" on created features', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
description: 'A test feature',
|
||||
priority: 1,
|
||||
complexity: 'simple',
|
||||
},
|
||||
],
|
||||
});
|
||||
|
||||
await parseAndCreateFeatures(projectPath, 'content', mockEvents);
|
||||
|
||||
expect(mockAtomicWriteJson).toHaveBeenCalledTimes(1);
|
||||
const writtenData = mockAtomicWriteJson.mock.calls[0][1];
|
||||
expect(writtenData.planningMode).toBe('skip');
|
||||
});
|
||||
|
||||
it('should set requirePlanApproval to false on created features', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
description: 'A test feature',
|
||||
},
|
||||
],
|
||||
});
|
||||
|
||||
await parseAndCreateFeatures(projectPath, 'content', mockEvents);
|
||||
|
||||
const writtenData = mockAtomicWriteJson.mock.calls[0][1];
|
||||
expect(writtenData.requirePlanApproval).toBe(false);
|
||||
});
|
||||
|
||||
it('should set dependencies to empty array when not provided', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
description: 'A test feature',
|
||||
},
|
||||
],
|
||||
});
|
||||
|
||||
await parseAndCreateFeatures(projectPath, 'content', mockEvents);
|
||||
|
||||
const writtenData = mockAtomicWriteJson.mock.calls[0][1];
|
||||
expect(writtenData.dependencies).toEqual([]);
|
||||
});
|
||||
|
||||
it('should preserve dependencies when provided by the parser', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
description: 'A test feature',
|
||||
dependencies: ['feature-0'],
|
||||
},
|
||||
],
|
||||
});
|
||||
|
||||
await parseAndCreateFeatures(projectPath, 'content', mockEvents);
|
||||
|
||||
const writtenData = mockAtomicWriteJson.mock.calls[0][1];
|
||||
expect(writtenData.dependencies).toEqual(['feature-0']);
|
||||
});
|
||||
|
||||
it('should apply all default fields consistently across multiple features', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Feature 1',
|
||||
description: 'First feature',
|
||||
},
|
||||
{
|
||||
id: 'feature-2',
|
||||
title: 'Feature 2',
|
||||
description: 'Second feature',
|
||||
dependencies: ['feature-1'],
|
||||
},
|
||||
{
|
||||
id: 'feature-3',
|
||||
title: 'Feature 3',
|
||||
description: 'Third feature',
|
||||
},
|
||||
],
|
||||
});
|
||||
|
||||
await parseAndCreateFeatures(projectPath, 'content', mockEvents);
|
||||
|
||||
expect(mockAtomicWriteJson).toHaveBeenCalledTimes(3);
|
||||
|
||||
for (let i = 0; i < 3; i++) {
|
||||
const writtenData = mockAtomicWriteJson.mock.calls[i][1];
|
||||
expect(writtenData.planningMode, `feature ${i + 1} planningMode`).toBe('skip');
|
||||
expect(writtenData.requirePlanApproval, `feature ${i + 1} requirePlanApproval`).toBe(false);
|
||||
expect(Array.isArray(writtenData.dependencies), `feature ${i + 1} dependencies`).toBe(true);
|
||||
}
|
||||
|
||||
// Feature 2 should have its explicit dependency preserved
|
||||
expect(mockAtomicWriteJson.mock.calls[1][1].dependencies).toEqual(['feature-1']);
|
||||
// Features 1 and 3 should have empty arrays
|
||||
expect(mockAtomicWriteJson.mock.calls[0][1].dependencies).toEqual([]);
|
||||
expect(mockAtomicWriteJson.mock.calls[2][1].dependencies).toEqual([]);
|
||||
});
|
||||
|
||||
it('should set status to "backlog" on all created features', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
description: 'A test feature',
|
||||
},
|
||||
],
|
||||
});
|
||||
|
||||
await parseAndCreateFeatures(projectPath, 'content', mockEvents);
|
||||
|
||||
const writtenData = mockAtomicWriteJson.mock.calls[0][1];
|
||||
expect(writtenData.status).toBe('backlog');
|
||||
});
|
||||
|
||||
it('should include createdAt and updatedAt timestamps', async () => {
|
||||
mockExtractJsonWithArray.mockReturnValue({
|
||||
features: [
|
||||
{
|
||||
id: 'feature-1',
|
||||
title: 'Test Feature',
|
||||
          description: 'A test feature',
        },
      ],
    });

    await parseAndCreateFeatures(projectPath, 'content', mockEvents);

    const writtenData = mockAtomicWriteJson.mock.calls[0][1];
    expect(writtenData.createdAt).toBeDefined();
    expect(writtenData.updatedAt).toBeDefined();
    // Should be valid ISO date strings
    expect(new Date(writtenData.createdAt).toISOString()).toBe(writtenData.createdAt);
    expect(new Date(writtenData.updatedAt).toISOString()).toBe(writtenData.updatedAt);
  });

  it('should use default values for optional fields not provided', async () => {
    mockExtractJsonWithArray.mockReturnValue({
      features: [
        {
          id: 'feature-minimal',
          title: 'Minimal Feature',
          description: 'Only required fields',
        },
      ],
    });

    await parseAndCreateFeatures(projectPath, 'content', mockEvents);

    const writtenData = mockAtomicWriteJson.mock.calls[0][1];
    expect(writtenData.category).toBe('Uncategorized');
    expect(writtenData.priority).toBe(2);
    expect(writtenData.complexity).toBe('moderate');
    expect(writtenData.dependencies).toEqual([]);
    expect(writtenData.planningMode).toBe('skip');
    expect(writtenData.requirePlanApproval).toBe(false);
  });

  it('should emit success event after creating features', async () => {
    mockExtractJsonWithArray.mockReturnValue({
      features: [
        {
          id: 'feature-1',
          title: 'Feature 1',
          description: 'First',
        },
      ],
    });

    await parseAndCreateFeatures(projectPath, 'content', mockEvents);

    expect(mockEvents.emit).toHaveBeenCalledWith(
      'spec-regeneration:event',
      expect.objectContaining({
        type: 'spec_regeneration_complete',
        projectPath,
      })
    );
  });

  it('should emit error event when no valid JSON is found', async () => {
    mockExtractJsonWithArray.mockReturnValue(null);

    await parseAndCreateFeatures(projectPath, 'invalid content', mockEvents);

    expect(mockEvents.emit).toHaveBeenCalledWith(
      'spec-regeneration:event',
      expect.objectContaining({
        type: 'spec_regeneration_error',
        projectPath,
      })
    );
  });
});
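The default-value assertions above imply a fill-in step when a parsed feature omits optional fields. A minimal sketch of that step, mirroring only the defaults the tests assert (the helper name `applyFeatureDefaults` and the input shape are illustrative, not the actual implementation):

```typescript
// Illustrative sketch only: mirrors the defaults asserted in the tests above.
// `applyFeatureDefaults` is a hypothetical name, not the real module's API.
interface FeatureInput {
  id: string;
  title: string;
  description: string;
  category?: string;
  priority?: number;
  complexity?: string;
  dependencies?: string[];
  planningMode?: string;
  requirePlanApproval?: boolean;
}

function applyFeatureDefaults(input: FeatureInput) {
  // Timestamps are ISO strings, so round-tripping through Date is lossless.
  const now = new Date().toISOString();
  return {
    ...input,
    category: input.category ?? 'Uncategorized',
    priority: input.priority ?? 2,
    complexity: input.complexity ?? 'moderate',
    dependencies: input.dependencies ?? [],
    planningMode: input.planningMode ?? 'skip',
    requirePlanApproval: input.requirePlanApproval ?? false,
    createdAt: now,
    updatedAt: now,
  };
}
```

Explicit values on the input survive the spread and are not overwritten by the `??` fallbacks.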
149
apps/server/tests/unit/routes/backlog-plan/apply.test.ts
Normal file
@@ -0,0 +1,149 @@
import { beforeEach, describe, expect, it, vi } from 'vitest';

const { mockGetAll, mockCreate, mockUpdate, mockDelete, mockClearBacklogPlan } = vi.hoisted(() => ({
  mockGetAll: vi.fn(),
  mockCreate: vi.fn(),
  mockUpdate: vi.fn(),
  mockDelete: vi.fn(),
  mockClearBacklogPlan: vi.fn(),
}));

vi.mock('@/services/feature-loader.js', () => ({
  FeatureLoader: class {
    getAll = mockGetAll;
    create = mockCreate;
    update = mockUpdate;
    delete = mockDelete;
  },
}));

vi.mock('@/routes/backlog-plan/common.js', () => ({
  logger: {
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
  },
  clearBacklogPlan: mockClearBacklogPlan,
  getErrorMessage: (error: unknown) => (error instanceof Error ? error.message : String(error)),
  logError: vi.fn(),
}));

import { createApplyHandler } from '@/routes/backlog-plan/routes/apply.js';

function createMockRes() {
  const res: {
    status: ReturnType<typeof vi.fn>;
    json: ReturnType<typeof vi.fn>;
  } = {
    status: vi.fn(),
    json: vi.fn(),
  };
  res.status.mockReturnValue(res);
  return res;
}

describe('createApplyHandler', () => {
  beforeEach(() => {
    vi.clearAllMocks();
    mockGetAll.mockResolvedValue([]);
    mockCreate.mockResolvedValue({ id: 'feature-created' });
    mockUpdate.mockResolvedValue({});
    mockDelete.mockResolvedValue(true);
    mockClearBacklogPlan.mockResolvedValue(undefined);
  });

  it('applies default feature model and planning settings when backlog plan additions omit them', async () => {
    const settingsService = {
      getGlobalSettings: vi.fn().mockResolvedValue({
        defaultFeatureModel: { model: 'codex-gpt-5.2-codex', reasoningEffort: 'high' },
        defaultPlanningMode: 'spec',
        defaultRequirePlanApproval: true,
      }),
      getProjectSettings: vi.fn().mockResolvedValue({}),
    } as any;

    const req = {
      body: {
        projectPath: '/tmp/project',
        plan: {
          changes: [
            {
              type: 'add',
              feature: {
                id: 'feature-from-plan',
                title: 'Created from plan',
                description: 'desc',
              },
            },
          ],
        },
      },
    } as any;
    const res = createMockRes();

    await createApplyHandler(settingsService)(req, res as any);

    expect(mockCreate).toHaveBeenCalledWith(
      '/tmp/project',
      expect.objectContaining({
        model: 'codex-gpt-5.2-codex',
        reasoningEffort: 'high',
        planningMode: 'spec',
        requirePlanApproval: true,
      })
    );
    expect(res.json).toHaveBeenCalledWith(
      expect.objectContaining({
        success: true,
      })
    );
  });

  it('uses project default feature model override and enforces no approval for skip mode', async () => {
    const settingsService = {
      getGlobalSettings: vi.fn().mockResolvedValue({
        defaultFeatureModel: { model: 'claude-opus' },
        defaultPlanningMode: 'skip',
        defaultRequirePlanApproval: true,
      }),
      getProjectSettings: vi.fn().mockResolvedValue({
        defaultFeatureModel: {
          model: 'GLM-4.7',
          providerId: 'provider-glm',
          thinkingLevel: 'adaptive',
        },
      }),
    } as any;

    const req = {
      body: {
        projectPath: '/tmp/project',
        plan: {
          changes: [
            {
              type: 'add',
              feature: {
                id: 'feature-from-plan',
                title: 'Created from plan',
              },
            },
          ],
        },
      },
    } as any;
    const res = createMockRes();

    await createApplyHandler(settingsService)(req, res as any);

    expect(mockCreate).toHaveBeenCalledWith(
      '/tmp/project',
      expect.objectContaining({
        model: 'GLM-4.7',
        providerId: 'provider-glm',
        thinkingLevel: 'adaptive',
        planningMode: 'skip',
        requirePlanApproval: false,
      })
    );
  });
});
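The two tests above pin down a settings-precedence rule: project-level `defaultFeatureModel` overrides the global one, and `requirePlanApproval` is forced off when the planning mode is `skip`. A minimal sketch of that resolution logic (the function name `resolveFeatureSettings` is hypothetical; the real handler lives in `routes/backlog-plan/routes/apply.ts`):

```typescript
// Illustrative sketch only: the precedence the tests above assert, not the actual handler code.
interface ModelSettings {
  model?: string;
  providerId?: string;
  reasoningEffort?: string;
  thinkingLevel?: string;
}

interface GlobalSettings {
  defaultFeatureModel?: ModelSettings;
  defaultPlanningMode?: string;
  defaultRequirePlanApproval?: boolean;
}

interface ProjectSettings {
  defaultFeatureModel?: ModelSettings;
}

function resolveFeatureSettings(global: GlobalSettings, project: ProjectSettings) {
  // Project-level model settings win over global ones.
  const model = project.defaultFeatureModel ?? global.defaultFeatureModel ?? {};
  const planningMode = global.defaultPlanningMode ?? 'skip';
  return {
    ...model,
    planningMode,
    // Plan approval is meaningless when planning is skipped, so force it off.
    requirePlanApproval: planningMode === 'skip' ? false : (global.defaultRequirePlanApproval ?? false),
  };
}
```

This matches the second test: the project's `GLM-4.7` override replaces the global `claude-opus`, and `skip` mode yields `requirePlanApproval: false` even though the global default is `true`.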
@@ -0,0 +1,930 @@
/**
 * Tests for worktree list endpoint handling of detached HEAD state.
 *
 * When a worktree is in detached HEAD state (e.g., during a rebase),
 * `git worktree list --porcelain` outputs "detached" instead of
 * "branch refs/heads/...". Previously, these worktrees were silently
 * dropped from the response because the parser required both path AND branch.
 */

import { describe, it, expect, vi, beforeEach, type Mock } from 'vitest';
import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { createMockExpressContext } from '../../../utils/mocks.js';

// Mock all external dependencies before importing the module under test
vi.mock('child_process', () => ({
  exec: vi.fn(),
}));

vi.mock('@/lib/git.js', () => ({
  execGitCommand: vi.fn(),
}));

vi.mock('@automaker/git-utils', () => ({
  isGitRepo: vi.fn(async () => true),
}));

vi.mock('@automaker/utils', () => ({
  createLogger: () => ({
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
    debug: vi.fn(),
  }),
}));

vi.mock('@automaker/types', () => ({
  validatePRState: vi.fn((state: string) => state),
}));

vi.mock('@/lib/secure-fs.js', () => ({
  access: vi.fn().mockResolvedValue(undefined),
  readFile: vi.fn(),
  readdir: vi.fn().mockResolvedValue([]),
  stat: vi.fn(),
}));

vi.mock('@/lib/worktree-metadata.js', () => ({
  readAllWorktreeMetadata: vi.fn(async () => new Map()),
  updateWorktreePRInfo: vi.fn(async () => undefined),
}));

vi.mock('@/routes/worktree/common.js', async (importOriginal) => {
  const actual = (await importOriginal()) as Record<string, unknown>;
  return {
    ...actual,
    getErrorMessage: vi.fn((e: Error) => e?.message || 'Unknown error'),
    logError: vi.fn(),
    normalizePath: vi.fn((p: string) => p),
    execEnv: {},
    isGhCliAvailable: vi.fn().mockResolvedValue(false),
  };
});

vi.mock('@/routes/github/routes/check-github-remote.js', () => ({
  checkGitHubRemote: vi.fn().mockResolvedValue({ hasGitHubRemote: false }),
}));

import { createListHandler } from '@/routes/worktree/routes/list.js';
import * as secureFs from '@/lib/secure-fs.js';
import { execGitCommand } from '@/lib/git.js';
import { readAllWorktreeMetadata, updateWorktreePRInfo } from '@/lib/worktree-metadata.js';
import { isGitRepo } from '@automaker/git-utils';
import { isGhCliAvailable, normalizePath, getErrorMessage } from '@/routes/worktree/common.js';
import { checkGitHubRemote } from '@/routes/github/routes/check-github-remote.js';

/**
 * Set up execGitCommand mock (list handler uses this via lib/git.js, not child_process.exec).
 */
function setupExecGitCommandMock(options: {
  porcelainOutput: string;
  projectBranch?: string;
  gitDirs?: Record<string, string>;
  worktreeBranches?: Record<string, string>;
}) {
  const { porcelainOutput, projectBranch = 'main', gitDirs = {}, worktreeBranches = {} } = options;

  vi.mocked(execGitCommand).mockImplementation(async (args: string[], cwd: string) => {
    if (args[0] === 'worktree' && args[1] === 'list' && args[2] === '--porcelain') {
      return porcelainOutput;
    }
    if (args[0] === 'branch' && args[1] === '--show-current') {
      if (worktreeBranches[cwd] !== undefined) {
        return worktreeBranches[cwd] + '\n';
      }
      return projectBranch + '\n';
    }
    if (args[0] === 'rev-parse' && args[1] === '--git-dir') {
      if (cwd && gitDirs[cwd]) {
        return gitDirs[cwd] + '\n';
      }
      throw new Error('not a git directory');
    }
    if (args[0] === 'rev-parse' && args[1] === '--abbrev-ref' && args[2] === 'HEAD') {
      return 'HEAD\n';
    }
    if (args[0] === 'worktree' && args[1] === 'prune') {
      return '';
    }
    if (args[0] === 'status' && args[1] === '--porcelain') {
      return '';
    }
    if (args[0] === 'diff' && args[1] === '--name-only' && args[2] === '--diff-filter=U') {
      return '';
    }
    return '';
  });
}

describe('worktree list - detached HEAD handling', () => {
  let req: Request;
  let res: Response;

  beforeEach(() => {
    vi.clearAllMocks();
    const context = createMockExpressContext();
    req = context.req;
    res = context.res;

    // Re-establish mock implementations cleared by mockReset/clearAllMocks
    vi.mocked(isGitRepo).mockResolvedValue(true);
    vi.mocked(readAllWorktreeMetadata).mockResolvedValue(new Map());
    vi.mocked(isGhCliAvailable).mockResolvedValue(false);
    vi.mocked(checkGitHubRemote).mockResolvedValue({ hasGitHubRemote: false });
    vi.mocked(normalizePath).mockImplementation((p: string) => p);
    vi.mocked(getErrorMessage).mockImplementation(
      (e: unknown) => (e as Error)?.message || 'Unknown error'
    );

    // Default: all paths exist
    vi.mocked(secureFs.access).mockResolvedValue(undefined);
    // Default: .worktrees directory doesn't exist (no scan via readdir)
    vi.mocked(secureFs.readdir).mockRejectedValue(new Error('ENOENT'));
    // Default: readFile fails
    vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));

    // Default execGitCommand so list handler gets valid porcelain/branch output (vitest clearMocks resets implementations)
    setupExecGitCommandMock({
      porcelainOutput: 'worktree /project\nbranch refs/heads/main\n\n',
      projectBranch: 'main',
    });
  });

  /**
   * Helper: set up execGitCommand mock for the list handler.
   * Worktree-specific behavior can be customized via the options parameter.
   */
  function setupStandardExec(options: {
    porcelainOutput: string;
    projectBranch?: string;
    /** Map of worktree path -> git-dir path */
    gitDirs?: Record<string, string>;
    /** Map of worktree cwd -> branch for `git branch --show-current` */
    worktreeBranches?: Record<string, string>;
  }) {
    setupExecGitCommandMock(options);
  }

  /** Suppress .worktrees dir scan by making access throw for the .worktrees dir. */
  function disableWorktreesScan() {
    vi.mocked(secureFs.access).mockImplementation(async (p) => {
      const pathStr = String(p);
      // Block only the .worktrees dir access check in scanWorktreesDirectory
      if (pathStr.endsWith('.worktrees') || pathStr.endsWith('.worktrees/')) {
        throw new Error('ENOENT');
      }
      // All other paths exist
      return undefined;
    });
  }

  describe('porcelain parser', () => {
    it('should include normal worktrees with branch lines', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/feature-a',
          'branch refs/heads/feature-a',
          '',
        ].join('\n'),
      });
      disableWorktreesScan();

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        success: boolean;
        worktrees: Array<{ branch: string; path: string; isMain: boolean; hasWorktree: boolean }>;
      };

      expect(response.success).toBe(true);
      expect(response.worktrees).toHaveLength(2);
      expect(response.worktrees[0]).toEqual(
        expect.objectContaining({
          path: '/project',
          branch: 'main',
          isMain: true,
          hasWorktree: true,
        })
      );
      expect(response.worktrees[1]).toEqual(
        expect.objectContaining({
          path: '/project/.worktrees/feature-a',
          branch: 'feature-a',
          isMain: false,
          hasWorktree: true,
        })
      );
    });

    it('should include worktrees with detached HEAD and recover branch from rebase-merge state', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/rebasing-wt',
          'detached',
          '',
        ].join('\n'),
        gitDirs: {
          '/project/.worktrees/rebasing-wt': '/project/.worktrees/rebasing-wt/.git',
        },
      });
      disableWorktreesScan();

      // rebase-merge/head-name returns the branch being rebased
      vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
        const pathStr = String(filePath);
        if (pathStr.includes('rebase-merge/head-name')) {
          return 'refs/heads/feature/my-rebasing-branch\n' as any;
        }
        throw new Error('ENOENT');
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string; isCurrent: boolean }>;
      };
      expect(response.worktrees).toHaveLength(2);
      expect(response.worktrees[1]).toEqual(
        expect.objectContaining({
          path: '/project/.worktrees/rebasing-wt',
          branch: 'feature/my-rebasing-branch',
          isMain: false,
          isCurrent: false,
          hasWorktree: true,
        })
      );
    });

    it('should include worktrees with detached HEAD and recover branch from rebase-apply state', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/apply-wt',
          'detached',
          '',
        ].join('\n'),
        gitDirs: {
          '/project/.worktrees/apply-wt': '/project/.worktrees/apply-wt/.git',
        },
      });
      disableWorktreesScan();

      // rebase-merge doesn't exist, but rebase-apply does
      vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
        const pathStr = String(filePath);
        if (pathStr.includes('rebase-apply/head-name')) {
          return 'refs/heads/feature/apply-branch\n' as any;
        }
        throw new Error('ENOENT');
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };
      const detachedWt = response.worktrees.find((w) => w.path === '/project/.worktrees/apply-wt');
      expect(detachedWt).toBeDefined();
      expect(detachedWt!.branch).toBe('feature/apply-branch');
    });

    it('should show merge conflict worktrees normally since merge does not detach HEAD', async () => {
      // During a merge conflict, HEAD stays on the branch, so `git worktree list --porcelain`
      // still outputs `branch refs/heads/...`. This test verifies merge conflicts don't
      // trigger the detached HEAD recovery path.
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/merge-wt',
          'branch refs/heads/feature/merge-branch',
          '',
        ].join('\n'),
      });
      disableWorktreesScan();

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };
      const mergeWt = response.worktrees.find((w) => w.path === '/project/.worktrees/merge-wt');
      expect(mergeWt).toBeDefined();
      expect(mergeWt!.branch).toBe('feature/merge-branch');
    });

    it('should fall back to (detached) when all branch recovery methods fail', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/unknown-wt',
          'detached',
          '',
        ].join('\n'),
        worktreeBranches: {
          '/project/.worktrees/unknown-wt': '', // empty = no branch
        },
      });
      disableWorktreesScan();

      // All readFile calls fail (no gitDirs so rev-parse --git-dir will throw)
      vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };
      const detachedWt = response.worktrees.find(
        (w) => w.path === '/project/.worktrees/unknown-wt'
      );
      expect(detachedWt).toBeDefined();
      expect(detachedWt!.branch).toBe('(detached)');
    });

    it('should not include detached worktree when directory does not exist on disk', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/deleted-wt',
          'detached',
          '',
        ].join('\n'),
      });

      // The deleted worktree doesn't exist on disk
      vi.mocked(secureFs.access).mockImplementation(async (p) => {
        const pathStr = String(p);
        if (pathStr.includes('deleted-wt')) {
          throw new Error('ENOENT');
        }
        if (pathStr.endsWith('.worktrees') || pathStr.endsWith('.worktrees/')) {
          throw new Error('ENOENT');
        }
        return undefined;
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };
      // Only the main worktree should be present
      expect(response.worktrees).toHaveLength(1);
      expect(response.worktrees[0].path).toBe('/project');
    });

    it('should set isCurrent to false for detached worktrees even if recovered branch matches current branch', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/rebasing-wt',
          'detached',
          '',
        ].join('\n'),
        // currentBranch for project is 'feature/my-branch'
        projectBranch: 'feature/my-branch',
        gitDirs: {
          '/project/.worktrees/rebasing-wt': '/project/.worktrees/rebasing-wt/.git',
        },
      });
      disableWorktreesScan();

      // Recovery returns the same branch as currentBranch
      vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
        const pathStr = String(filePath);
        if (pathStr.includes('rebase-merge/head-name')) {
          return 'refs/heads/feature/my-branch\n' as any;
        }
        throw new Error('ENOENT');
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; isCurrent: boolean; path: string }>;
      };
      const detachedWt = response.worktrees.find(
        (w) => w.path === '/project/.worktrees/rebasing-wt'
      );
      expect(detachedWt).toBeDefined();
      // Detached worktrees should always have isCurrent=false
      expect(detachedWt!.isCurrent).toBe(false);
    });

    it('should handle mixed normal and detached worktrees', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/normal-wt',
          'branch refs/heads/feature-normal',
          '',
          'worktree /project/.worktrees/rebasing-wt',
          'detached',
          '',
          'worktree /project/.worktrees/another-normal',
          'branch refs/heads/feature-other',
          '',
        ].join('\n'),
        gitDirs: {
          '/project/.worktrees/rebasing-wt': '/project/.worktrees/rebasing-wt/.git',
        },
      });
      disableWorktreesScan();

      vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
        const pathStr = String(filePath);
        if (pathStr.includes('rebase-merge/head-name')) {
          return 'refs/heads/feature/rebasing\n' as any;
        }
        throw new Error('ENOENT');
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string; isMain: boolean }>;
      };
      expect(response.worktrees).toHaveLength(4);
      expect(response.worktrees[0]).toEqual(
        expect.objectContaining({ path: '/project', branch: 'main', isMain: true })
      );
      expect(response.worktrees[1]).toEqual(
        expect.objectContaining({
          path: '/project/.worktrees/normal-wt',
          branch: 'feature-normal',
          isMain: false,
        })
      );
      expect(response.worktrees[2]).toEqual(
        expect.objectContaining({
          path: '/project/.worktrees/rebasing-wt',
          branch: 'feature/rebasing',
          isMain: false,
        })
      );
      expect(response.worktrees[3]).toEqual(
        expect.objectContaining({
          path: '/project/.worktrees/another-normal',
          branch: 'feature-other',
          isMain: false,
        })
      );
    });

    it('should correctly advance isFirst flag past detached worktrees', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/detached-wt',
          'detached',
          '',
          'worktree /project/.worktrees/normal-wt',
          'branch refs/heads/feature-x',
          '',
        ].join('\n'),
      });
      disableWorktreesScan();
      vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; isMain: boolean }>;
      };
      expect(response.worktrees).toHaveLength(3);
      expect(response.worktrees[0].isMain).toBe(true); // main
      expect(response.worktrees[1].isMain).toBe(false); // detached
      expect(response.worktrees[2].isMain).toBe(false); // normal
    });

    it('should not add removed detached worktrees to removedWorktrees list', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/gone-wt',
          'detached',
          '',
        ].join('\n'),
      });

      // The detached worktree doesn't exist on disk
      vi.mocked(secureFs.access).mockImplementation(async (p) => {
        const pathStr = String(p);
        if (pathStr.includes('gone-wt')) {
          throw new Error('ENOENT');
        }
        if (pathStr.endsWith('.worktrees') || pathStr.endsWith('.worktrees/')) {
          throw new Error('ENOENT');
        }
        return undefined;
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string }>;
        removedWorktrees?: Array<{ path: string; branch: string }>;
      };
      // Should not be in removed list since we don't know the branch
      expect(response.removedWorktrees).toBeUndefined();
    });

    it('should strip refs/heads/ prefix from recovered branch name', async () => {
      req.body = { projectPath: '/project' };

      setupStandardExec({
        porcelainOutput: [
          'worktree /project',
          'branch refs/heads/main',
          '',
          'worktree /project/.worktrees/wt1',
          'detached',
          '',
        ].join('\n'),
        gitDirs: {
          '/project/.worktrees/wt1': '/project/.worktrees/wt1/.git',
        },
      });
      disableWorktreesScan();

      vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
        const pathStr = String(filePath);
        if (pathStr.includes('rebase-merge/head-name')) {
          return 'refs/heads/my-branch\n' as any;
        }
        throw new Error('ENOENT');
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };
      const wt = response.worktrees.find((w) => w.path === '/project/.worktrees/wt1');
      expect(wt).toBeDefined();
      // Should be 'my-branch', not 'refs/heads/my-branch'
      expect(wt!.branch).toBe('my-branch');
    });
  });

||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/apply-wt',
|
||||
'detached',
|
||||
'',
|
||||
].join('\n'),
|
||||
gitDirs: {
|
||||
'/project/.worktrees/apply-wt': '/project/.worktrees/apply-wt/.git',
|
||||
},
|
||||
});
|
||||
disableWorktreesScan();
|
||||
|
||||
// rebase-merge doesn't exist, but rebase-apply does
|
||||
vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
|
||||
const pathStr = String(filePath);
|
||||
if (pathStr.includes('rebase-apply/head-name')) {
|
||||
return 'refs/heads/feature/apply-branch\n' as any;
|
||||
}
|
||||
throw new Error('ENOENT');
|
||||
});
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; path: string }>;
|
||||
};
|
||||
const detachedWt = response.worktrees.find((w) => w.path === '/project/.worktrees/apply-wt');
|
||||
expect(detachedWt).toBeDefined();
|
||||
expect(detachedWt!.branch).toBe('feature/apply-branch');
|
||||
});
|
||||
|
||||
it('should show merge conflict worktrees normally since merge does not detach HEAD', async () => {
|
||||
// During a merge conflict, HEAD stays on the branch, so `git worktree list --porcelain`
|
||||
// still outputs `branch refs/heads/...`. This test verifies merge conflicts don't
|
||||
// trigger the detached HEAD recovery path.
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/merge-wt',
|
||||
'branch refs/heads/feature/merge-branch',
|
||||
'',
|
||||
].join('\n'),
|
||||
});
|
||||
disableWorktreesScan();
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; path: string }>;
|
||||
};
|
||||
const mergeWt = response.worktrees.find((w) => w.path === '/project/.worktrees/merge-wt');
|
||||
expect(mergeWt).toBeDefined();
|
||||
expect(mergeWt!.branch).toBe('feature/merge-branch');
|
||||
});
|
||||
|
||||
it('should fall back to (detached) when all branch recovery methods fail', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/unknown-wt',
|
||||
'detached',
|
||||
'',
|
||||
].join('\n'),
|
||||
worktreeBranches: {
|
||||
'/project/.worktrees/unknown-wt': '', // empty = no branch
|
||||
},
|
||||
});
|
||||
disableWorktreesScan();
|
||||
|
||||
// All readFile calls fail (no gitDirs so rev-parse --git-dir will throw)
|
||||
vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; path: string }>;
|
||||
};
|
||||
const detachedWt = response.worktrees.find(
|
||||
(w) => w.path === '/project/.worktrees/unknown-wt'
|
||||
);
|
||||
expect(detachedWt).toBeDefined();
|
||||
expect(detachedWt!.branch).toBe('(detached)');
|
||||
});
|
||||
|
||||
it('should not include detached worktree when directory does not exist on disk', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/deleted-wt',
|
||||
'detached',
|
||||
'',
|
||||
].join('\n'),
|
||||
});
|
||||
|
||||
// The deleted worktree doesn't exist on disk
|
||||
vi.mocked(secureFs.access).mockImplementation(async (p) => {
|
||||
const pathStr = String(p);
|
||||
if (pathStr.includes('deleted-wt')) {
|
||||
throw new Error('ENOENT');
|
||||
}
|
||||
if (pathStr.endsWith('.worktrees') || pathStr.endsWith('.worktrees/')) {
|
||||
throw new Error('ENOENT');
|
||||
}
|
||||
return undefined;
|
||||
});
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; path: string }>;
|
||||
};
|
||||
// Only the main worktree should be present
|
||||
expect(response.worktrees).toHaveLength(1);
|
||||
expect(response.worktrees[0].path).toBe('/project');
|
||||
});
|
||||
|
||||
it('should set isCurrent to false for detached worktrees even if recovered branch matches current branch', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/rebasing-wt',
|
||||
'detached',
|
||||
'',
|
||||
].join('\n'),
|
||||
// currentBranch for project is 'feature/my-branch'
|
||||
projectBranch: 'feature/my-branch',
|
||||
gitDirs: {
|
||||
'/project/.worktrees/rebasing-wt': '/project/.worktrees/rebasing-wt/.git',
|
||||
},
|
||||
});
|
||||
disableWorktreesScan();
|
||||
|
||||
// Recovery returns the same branch as currentBranch
|
||||
vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
|
||||
const pathStr = String(filePath);
|
||||
if (pathStr.includes('rebase-merge/head-name')) {
|
||||
return 'refs/heads/feature/my-branch\n' as any;
|
||||
}
|
||||
throw new Error('ENOENT');
|
||||
});
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; isCurrent: boolean; path: string }>;
|
||||
};
|
||||
const detachedWt = response.worktrees.find(
|
||||
(w) => w.path === '/project/.worktrees/rebasing-wt'
|
||||
);
|
||||
expect(detachedWt).toBeDefined();
|
||||
// Detached worktrees should always have isCurrent=false
|
||||
expect(detachedWt!.isCurrent).toBe(false);
|
||||
});
|
||||
|
||||
it('should handle mixed normal and detached worktrees', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/normal-wt',
|
||||
'branch refs/heads/feature-normal',
|
||||
'',
|
||||
'worktree /project/.worktrees/rebasing-wt',
|
||||
'detached',
|
||||
'',
|
||||
'worktree /project/.worktrees/another-normal',
|
||||
'branch refs/heads/feature-other',
|
||||
'',
|
||||
].join('\n'),
|
||||
gitDirs: {
|
||||
'/project/.worktrees/rebasing-wt': '/project/.worktrees/rebasing-wt/.git',
|
||||
},
|
||||
});
|
||||
disableWorktreesScan();
|
||||
|
||||
vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
|
||||
const pathStr = String(filePath);
|
||||
if (pathStr.includes('rebase-merge/head-name')) {
|
||||
return 'refs/heads/feature/rebasing\n' as any;
|
||||
}
|
||||
throw new Error('ENOENT');
|
||||
});
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; path: string; isMain: boolean }>;
|
||||
};
|
||||
expect(response.worktrees).toHaveLength(4);
|
||||
expect(response.worktrees[0]).toEqual(
|
||||
expect.objectContaining({ path: '/project', branch: 'main', isMain: true })
|
||||
);
|
||||
expect(response.worktrees[1]).toEqual(
|
||||
expect.objectContaining({
|
||||
path: '/project/.worktrees/normal-wt',
|
||||
branch: 'feature-normal',
|
||||
isMain: false,
|
||||
})
|
||||
);
|
||||
expect(response.worktrees[2]).toEqual(
|
||||
expect.objectContaining({
|
||||
path: '/project/.worktrees/rebasing-wt',
|
||||
branch: 'feature/rebasing',
|
||||
isMain: false,
|
||||
})
|
||||
);
|
||||
expect(response.worktrees[3]).toEqual(
|
||||
expect.objectContaining({
|
||||
path: '/project/.worktrees/another-normal',
|
||||
branch: 'feature-other',
|
||||
isMain: false,
|
||||
})
|
||||
);
|
||||
});
|
||||
|
||||
it('should correctly advance isFirst flag past detached worktrees', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/detached-wt',
|
||||
'detached',
|
||||
'',
|
||||
'worktree /project/.worktrees/normal-wt',
|
||||
'branch refs/heads/feature-x',
|
||||
'',
|
||||
].join('\n'),
|
||||
});
|
||||
disableWorktreesScan();
|
||||
vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; isMain: boolean }>;
|
||||
};
|
||||
expect(response.worktrees).toHaveLength(3);
|
||||
expect(response.worktrees[0].isMain).toBe(true); // main
|
||||
expect(response.worktrees[1].isMain).toBe(false); // detached
|
||||
expect(response.worktrees[2].isMain).toBe(false); // normal
|
||||
});
|
||||
|
||||
it('should not add removed detached worktrees to removedWorktrees list', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/gone-wt',
|
||||
'detached',
|
||||
'',
|
||||
].join('\n'),
|
||||
});
|
||||
|
||||
// The detached worktree doesn't exist on disk
|
||||
vi.mocked(secureFs.access).mockImplementation(async (p) => {
|
||||
const pathStr = String(p);
|
||||
if (pathStr.includes('gone-wt')) {
|
||||
throw new Error('ENOENT');
|
||||
}
|
||||
if (pathStr.endsWith('.worktrees') || pathStr.endsWith('.worktrees/')) {
|
||||
throw new Error('ENOENT');
|
||||
}
|
||||
return undefined;
|
||||
});
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string }>;
|
||||
removedWorktrees?: Array<{ path: string; branch: string }>;
|
||||
};
|
||||
// Should not be in removed list since we don't know the branch
|
||||
expect(response.removedWorktrees).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should strip refs/heads/ prefix from recovered branch name', async () => {
|
||||
req.body = { projectPath: '/project' };
|
||||
|
||||
setupStandardExec({
|
||||
porcelainOutput: [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/wt1',
|
||||
'detached',
|
||||
'',
|
||||
].join('\n'),
|
||||
gitDirs: {
|
||||
'/project/.worktrees/wt1': '/project/.worktrees/wt1/.git',
|
||||
},
|
||||
});
|
||||
disableWorktreesScan();
|
||||
|
||||
vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
|
||||
const pathStr = String(filePath);
|
||||
if (pathStr.includes('rebase-merge/head-name')) {
|
||||
return 'refs/heads/my-branch\n' as any;
|
||||
}
|
||||
throw new Error('ENOENT');
|
||||
});
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; path: string }>;
|
||||
};
|
||||
const wt = response.worktrees.find((w) => w.path === '/project/.worktrees/wt1');
|
||||
expect(wt).toBeDefined();
|
||||
// Should be 'my-branch', not 'refs/heads/my-branch'
|
||||
expect(wt!.branch).toBe('my-branch');
|
||||
});
|
||||
});
|
||||
|
||||
  describe('scanWorktreesDirectory with detached HEAD recovery', () => {
    it('should recover branch for discovered worktrees with detached HEAD', async () => {
      req.body = { projectPath: '/project' };

      vi.mocked(execGitCommand).mockImplementation(async (args: string[], cwd: string) => {
        if (args[0] === 'worktree' && args[1] === 'list') {
          return 'worktree /project\nbranch refs/heads/main\n\n';
        }
        if (args[0] === 'branch' && args[1] === '--show-current') {
          return cwd === '/project' ? 'main\n' : '\n';
        }
        if (args[0] === 'rev-parse' && args[1] === '--abbrev-ref') {
          return 'HEAD\n';
        }
        if (args[0] === 'rev-parse' && args[1] === '--git-dir') {
          return '/project/.worktrees/orphan-wt/.git\n';
        }
        return '';
      });

      // .worktrees directory exists and has an orphan worktree
      vi.mocked(secureFs.access).mockResolvedValue(undefined);
      vi.mocked(secureFs.readdir).mockResolvedValue([
        { name: 'orphan-wt', isDirectory: () => true, isFile: () => false } as any,
      ]);
      vi.mocked(secureFs.stat).mockResolvedValue({
        isFile: () => true,
        isDirectory: () => false,
      } as any);

      // readFile returns branch from rebase-merge/head-name
      vi.mocked(secureFs.readFile).mockImplementation(async (filePath) => {
        const pathStr = String(filePath);
        if (pathStr.includes('rebase-merge/head-name')) {
          return 'refs/heads/feature/orphan-branch\n' as any;
        }
        throw new Error('ENOENT');
      });

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };

      const orphanWt = response.worktrees.find((w) => w.path === '/project/.worktrees/orphan-wt');
      expect(orphanWt).toBeDefined();
      expect(orphanWt!.branch).toBe('feature/orphan-branch');
    });

    it('should skip discovered worktrees when all branch detection fails', async () => {
      req.body = { projectPath: '/project' };

      vi.mocked(execGitCommand).mockImplementation(async (args: string[], cwd: string) => {
        if (args[0] === 'worktree' && args[1] === 'list') {
          return 'worktree /project\nbranch refs/heads/main\n\n';
        }
        if (args[0] === 'branch' && args[1] === '--show-current') {
          return cwd === '/project' ? 'main\n' : '\n';
        }
        if (args[0] === 'rev-parse' && args[1] === '--abbrev-ref') {
          return 'HEAD\n';
        }
        if (args[0] === 'rev-parse' && args[1] === '--git-dir') {
          throw new Error('not a git dir');
        }
        return '';
      });

      vi.mocked(secureFs.access).mockResolvedValue(undefined);
      vi.mocked(secureFs.readdir).mockResolvedValue([
        { name: 'broken-wt', isDirectory: () => true, isFile: () => false } as any,
      ]);
      vi.mocked(secureFs.stat).mockResolvedValue({
        isFile: () => true,
        isDirectory: () => false,
      } as any);
      vi.mocked(secureFs.readFile).mockRejectedValue(new Error('ENOENT'));

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; path: string }>;
      };

      // Only main worktree should be present
      expect(response.worktrees).toHaveLength(1);
      expect(response.worktrees[0].branch).toBe('main');
    });
  });

describe('PR tracking precedence', () => {
|
||||
it('should keep manually tracked PR from metadata when branch PR differs', async () => {
|
||||
req.body = { projectPath: '/project', includeDetails: true };
|
||||
|
||||
vi.mocked(readAllWorktreeMetadata).mockResolvedValue(
|
||||
new Map([
|
||||
[
|
||||
'feature-a',
|
||||
{
|
||||
branch: 'feature-a',
|
||||
createdAt: '2026-01-01T00:00:00.000Z',
|
||||
pr: {
|
||||
number: 99,
|
||||
url: 'https://github.com/org/repo/pull/99',
|
||||
title: 'Manual override PR',
|
||||
state: 'OPEN',
|
||||
createdAt: '2026-01-01T00:00:00.000Z',
|
||||
},
|
||||
},
|
||||
],
|
||||
])
|
||||
);
|
||||
vi.mocked(isGhCliAvailable).mockResolvedValue(true);
|
||||
vi.mocked(checkGitHubRemote).mockResolvedValue({
|
||||
hasGitHubRemote: true,
|
||||
owner: 'org',
|
||||
repo: 'repo',
|
||||
});
|
||||
vi.mocked(secureFs.access).mockImplementation(async (p) => {
|
||||
const pathStr = String(p);
|
||||
if (
|
||||
pathStr.includes('MERGE_HEAD') ||
|
||||
pathStr.includes('rebase-merge') ||
|
||||
pathStr.includes('rebase-apply') ||
|
||||
pathStr.includes('CHERRY_PICK_HEAD')
|
||||
) {
|
||||
throw new Error('ENOENT');
|
||||
}
|
||||
return undefined;
|
||||
});
|
||||
|
||||
vi.mocked(execGitCommand).mockImplementation(async (args: string[], cwd: string) => {
|
||||
if (args[0] === 'rev-parse' && args[1] === '--git-dir') {
|
||||
throw new Error('no git dir');
|
||||
}
|
||||
if (args[0] === 'worktree' && args[1] === 'list') {
|
||||
return [
|
||||
'worktree /project',
|
||||
'branch refs/heads/main',
|
||||
'',
|
||||
'worktree /project/.worktrees/feature-a',
|
||||
'branch refs/heads/feature-a',
|
||||
'',
|
||||
].join('\n');
|
||||
}
|
||||
if (args[0] === 'branch' && args[1] === '--show-current') {
|
||||
return cwd === '/project' ? 'main\n' : 'feature-a\n';
|
||||
}
|
||||
if (args[0] === 'status' && args[1] === '--porcelain') {
|
||||
return '';
|
||||
}
|
||||
return '';
|
||||
});
|
||||
(exec as unknown as Mock).mockImplementation(
|
||||
(
|
||||
cmd: string,
|
||||
_opts: unknown,
|
||||
callback?: (err: Error | null, out: { stdout: string; stderr: string }) => void
|
||||
) => {
|
||||
const cb = typeof _opts === 'function' ? _opts : callback!;
|
||||
if (cmd.includes('gh pr list')) {
|
||||
cb(null, {
|
||||
stdout: JSON.stringify([
|
||||
{
|
||||
number: 42,
|
||||
title: 'Branch PR',
|
||||
url: 'https://github.com/org/repo/pull/42',
|
||||
state: 'OPEN',
|
||||
headRefName: 'feature-a',
|
||||
createdAt: '2026-01-02T00:00:00.000Z',
|
||||
},
|
||||
]),
|
||||
stderr: '',
|
||||
});
|
||||
} else {
|
||||
cb(null, { stdout: '', stderr: '' });
|
||||
}
|
||||
}
|
||||
);
|
||||
disableWorktreesScan();
|
||||
|
||||
const handler = createListHandler();
|
||||
await handler(req, res);
|
||||
|
||||
const response = vi.mocked(res.json).mock.calls[0][0] as {
|
||||
worktrees: Array<{ branch: string; pr?: { number: number; title: string } }>;
|
||||
};
|
||||
const featureWorktree = response.worktrees.find((w) => w.branch === 'feature-a');
|
||||
expect(featureWorktree?.pr?.number).toBe(99);
|
||||
expect(featureWorktree?.pr?.title).toBe('Manual override PR');
|
||||
});
|
||||
|
||||
    it('should prefer GitHub PR when it matches metadata number and sync updated fields', async () => {
      req.body = { projectPath: '/project-2', includeDetails: true };

      vi.mocked(readAllWorktreeMetadata).mockResolvedValue(
        new Map([
          [
            'feature-a',
            {
              branch: 'feature-a',
              createdAt: '2026-01-01T00:00:00.000Z',
              pr: {
                number: 42,
                url: 'https://github.com/org/repo/pull/42',
                title: 'Old title',
                state: 'OPEN',
                createdAt: '2026-01-01T00:00:00.000Z',
              },
            },
          ],
        ])
      );
      vi.mocked(isGhCliAvailable).mockResolvedValue(true);
      vi.mocked(checkGitHubRemote).mockResolvedValue({
        hasGitHubRemote: true,
        owner: 'org',
        repo: 'repo',
      });
      vi.mocked(secureFs.access).mockImplementation(async (p) => {
        const pathStr = String(p);
        if (
          pathStr.includes('MERGE_HEAD') ||
          pathStr.includes('rebase-merge') ||
          pathStr.includes('rebase-apply') ||
          pathStr.includes('CHERRY_PICK_HEAD')
        ) {
          throw new Error('ENOENT');
        }
        return undefined;
      });

      vi.mocked(execGitCommand).mockImplementation(async (args: string[], cwd: string) => {
        if (args[0] === 'rev-parse' && args[1] === '--git-dir') {
          throw new Error('no git dir');
        }
        if (args[0] === 'worktree' && args[1] === 'list') {
          return [
            'worktree /project-2',
            'branch refs/heads/main',
            '',
            'worktree /project-2/.worktrees/feature-a',
            'branch refs/heads/feature-a',
            '',
          ].join('\n');
        }
        if (args[0] === 'branch' && args[1] === '--show-current') {
          return cwd === '/project-2' ? 'main\n' : 'feature-a\n';
        }
        if (args[0] === 'status' && args[1] === '--porcelain') {
          return '';
        }
        return '';
      });
      (exec as unknown as Mock).mockImplementation(
        (
          cmd: string,
          _opts: unknown,
          callback?: (err: Error | null, out: { stdout: string; stderr: string }) => void
        ) => {
          const cb = typeof _opts === 'function' ? _opts : callback!;
          if (cmd.includes('gh pr list')) {
            cb(null, {
              stdout: JSON.stringify([
                {
                  number: 42,
                  title: 'New title from GitHub',
                  url: 'https://github.com/org/repo/pull/42',
                  state: 'MERGED',
                  headRefName: 'feature-a',
                  createdAt: '2026-01-02T00:00:00.000Z',
                },
              ]),
              stderr: '',
            });
          } else {
            cb(null, { stdout: '', stderr: '' });
          }
        }
      );
      disableWorktreesScan();

      const handler = createListHandler();
      await handler(req, res);

      const response = vi.mocked(res.json).mock.calls[0][0] as {
        worktrees: Array<{ branch: string; pr?: { number: number; title: string; state: string } }>;
      };
      const featureWorktree = response.worktrees.find((w) => w.branch === 'feature-a');
      expect(featureWorktree?.pr?.number).toBe(42);
      expect(featureWorktree?.pr?.title).toBe('New title from GitHub');
      expect(featureWorktree?.pr?.state).toBe('MERGED');
      expect(vi.mocked(updateWorktreePRInfo)).toHaveBeenCalledWith(
        '/project-2',
        'feature-a',
        expect.objectContaining({
          number: 42,
          title: 'New title from GitHub',
          state: 'MERGED',
        })
      );
    });
  });
});
@@ -1181,6 +1181,50 @@ describe('AgentExecutor', () => {
      );
    });

    it('should pass claudeCompatibleProvider to executeQuery options', async () => {
      const executor = new AgentExecutor(
        mockEventBus,
        mockFeatureStateManager,
        mockPlanApprovalService,
        mockSettingsService
      );

      const mockProvider = {
        getName: () => 'mock',
        executeQuery: vi.fn().mockImplementation(function* () {
          yield { type: 'result', subtype: 'success' };
        }),
      } as unknown as BaseProvider;

      const mockClaudeProvider = { id: 'zai-1', name: 'Zai' } as any;

      const options: AgentExecutionOptions = {
        workDir: '/test',
        featureId: 'test-feature',
        prompt: 'Test prompt',
        projectPath: '/project',
        abortController: new AbortController(),
        provider: mockProvider,
        effectiveBareModel: 'claude-sonnet-4-6',
        claudeCompatibleProvider: mockClaudeProvider,
      };

      const callbacks = {
        waitForApproval: vi.fn().mockResolvedValue({ approved: true }),
        saveFeatureSummary: vi.fn(),
        updateFeatureSummary: vi.fn(),
        buildTaskPrompt: vi.fn().mockReturnValue('task prompt'),
      };

      await executor.execute(options, callbacks);

      expect(mockProvider.executeQuery).toHaveBeenCalledWith(
        expect.objectContaining({
          claudeCompatibleProvider: mockClaudeProvider,
        })
      );
    });

    it('should return correct result structure', async () => {
      const executor = new AgentExecutor(
        mockEventBus,
@@ -0,0 +1,207 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';

// Mock dependencies (hoisted)
vi.mock('../../../../src/services/agent-executor.js');
vi.mock('../../../../src/lib/settings-helpers.js');
vi.mock('../../../../src/providers/provider-factory.js');
vi.mock('../../../../src/lib/sdk-options.js');
vi.mock('@automaker/model-resolver', () => ({
  resolveModelString: vi.fn((model, fallback) => model || fallback),
  DEFAULT_MODELS: { claude: 'claude-3-5-sonnet' },
}));

import { AutoModeServiceFacade } from '../../../../src/services/auto-mode/facade.js';
import { AgentExecutor } from '../../../../src/services/agent-executor.js';
import * as settingsHelpers from '../../../../src/lib/settings-helpers.js';
import { ProviderFactory } from '../../../../src/providers/provider-factory.js';
import * as sdkOptions from '../../../../src/lib/sdk-options.js';

describe('AutoModeServiceFacade Agent Runner', () => {
  let mockAgentExecutor: MockAgentExecutor;
  let mockSettingsService: MockSettingsService;
  let facade: AutoModeServiceFacade;

  // Type definitions for mocks
  interface MockAgentExecutor {
    execute: ReturnType<typeof vi.fn>;
  }
  interface MockSettingsService {
    getGlobalSettings: ReturnType<typeof vi.fn>;
    getCredentials: ReturnType<typeof vi.fn>;
    getProjectSettings: ReturnType<typeof vi.fn>;
  }

  beforeEach(() => {
    vi.clearAllMocks();

    // Set up the mock for createAutoModeOptions
    // Note: Using 'as any' because Options type from SDK is complex and we only need
    // the specific fields that are verified in tests (maxTurns, allowedTools, etc.)
    vi.mocked(sdkOptions.createAutoModeOptions).mockReturnValue({
      maxTurns: 123,
      allowedTools: ['tool1'],
      systemPrompt: 'system-prompt',
    } as any);

    mockAgentExecutor = {
      execute: vi.fn().mockResolvedValue(undefined),
    };
    (AgentExecutor as any).mockImplementation(function (this: MockAgentExecutor) {
      return mockAgentExecutor;
    });

    mockSettingsService = {
      getGlobalSettings: vi.fn().mockResolvedValue({}),
      getCredentials: vi.fn().mockResolvedValue({}),
      getProjectSettings: vi.fn().mockResolvedValue({}),
    };

    // Helper to access the private createRunAgentFn via factory creation
    facade = AutoModeServiceFacade.create('/project', {
      events: { on: vi.fn(), emit: vi.fn() } as any,
      settingsService: mockSettingsService,
      sharedServices: {
        eventBus: { emitAutoModeEvent: vi.fn() } as any,
        worktreeResolver: { getCurrentBranch: vi.fn().mockResolvedValue('main') } as any,
        concurrencyManager: {
          isRunning: vi.fn().mockReturnValue(false),
          getRunningFeature: vi.fn().mockReturnValue(null),
        } as any,
      } as any,
    });
  });

  it('should resolve provider by providerId and pass to AgentExecutor', async () => {
    // 1. Setup mocks
    const mockProvider = { getName: () => 'mock-provider' };
    (ProviderFactory.getProviderForModel as any).mockReturnValue(mockProvider);

    const mockClaudeProvider = { id: 'zai-1', name: 'Zai' };
    const mockCredentials = { apiKey: 'test-key' };
    (settingsHelpers.resolveProviderContext as any).mockResolvedValue({
      provider: mockClaudeProvider,
      credentials: mockCredentials,
      resolvedModel: undefined,
    });

    const runAgentFn = (facade as any).executionService.runAgentFn;

    // 2. Execute
    await runAgentFn(
      '/workdir',
      'feature-1',
      'prompt',
      new AbortController(),
      '/project',
      [],
      'model-1',
      {
        providerId: 'zai-1',
      }
    );

    // 3. Verify
    expect(settingsHelpers.resolveProviderContext).toHaveBeenCalledWith(
      mockSettingsService,
      'model-1',
      'zai-1',
      '[AutoModeFacade]'
    );

    expect(mockAgentExecutor.execute).toHaveBeenCalledWith(
      expect.objectContaining({
        claudeCompatibleProvider: mockClaudeProvider,
        credentials: mockCredentials,
        model: 'model-1', // Original model ID
      }),
      expect.any(Object)
    );
  });

  it('should fallback to model-based lookup if providerId is not provided', async () => {
    const mockProvider = { getName: () => 'mock-provider' };
    (ProviderFactory.getProviderForModel as any).mockReturnValue(mockProvider);

    const mockClaudeProvider = { id: 'zai-model', name: 'Zai Model' };
    (settingsHelpers.resolveProviderContext as any).mockResolvedValue({
      provider: mockClaudeProvider,
      credentials: { apiKey: 'model-key' },
      resolvedModel: 'resolved-model-1',
    });

    const runAgentFn = (facade as any).executionService.runAgentFn;

    await runAgentFn(
      '/workdir',
      'feature-1',
      'prompt',
      new AbortController(),
      '/project',
      [],
      'model-1',
      {
        // no providerId
      }
    );

    expect(settingsHelpers.resolveProviderContext).toHaveBeenCalledWith(
      mockSettingsService,
      'model-1',
      undefined,
      '[AutoModeFacade]'
    );

    expect(mockAgentExecutor.execute).toHaveBeenCalledWith(
      expect.objectContaining({
        claudeCompatibleProvider: mockClaudeProvider,
      }),
      expect.any(Object)
    );
  });

  it('should use resolvedModel from provider config for createAutoModeOptions if it maps to a Claude model', async () => {
    const mockProvider = { getName: () => 'mock-provider' };
    (ProviderFactory.getProviderForModel as any).mockReturnValue(mockProvider);

    const mockClaudeProvider = {
      id: 'zai-1',
      name: 'Zai',
      models: [{ id: 'custom-model-1', mapsToClaudeModel: 'claude-3-opus' }],
    };
    (settingsHelpers.resolveProviderContext as any).mockResolvedValue({
      provider: mockClaudeProvider,
      credentials: { apiKey: 'test-key' },
      resolvedModel: 'claude-3-5-opus',
    });

    const runAgentFn = (facade as any).executionService.runAgentFn;

    await runAgentFn(
      '/workdir',
      'feature-1',
      'prompt',
      new AbortController(),
      '/project',
      [],
      'custom-model-1',
      {
        providerId: 'zai-1',
      }
    );

    // Verify createAutoModeOptions was called with the mapped model
    expect(sdkOptions.createAutoModeOptions).toHaveBeenCalledWith(
      expect.objectContaining({
        model: 'claude-3-5-opus',
      })
    );

    // Verify AgentExecutor.execute still gets the original custom model ID
    expect(mockAgentExecutor.execute).toHaveBeenCalledWith(
      expect.objectContaining({
        model: 'custom-model-1',
      }),
      expect.any(Object)
    );
  });
});
115
apps/server/tests/unit/services/dev-server-event-types.test.ts
Normal file
@@ -0,0 +1,115 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { EventEmitter } from 'events';
import path from 'path';
import os from 'os';
import fs from 'fs/promises';
import { spawn } from 'child_process';

// Mock child_process
vi.mock('child_process', () => ({
  spawn: vi.fn(),
  execSync: vi.fn(),
  execFile: vi.fn(),
}));

// Mock secure-fs
vi.mock('@/lib/secure-fs.js', () => ({
  access: vi.fn(),
}));

// Mock net
vi.mock('net', () => ({
  default: {
    createServer: vi.fn(),
  },
  createServer: vi.fn(),
}));

import * as secureFs from '@/lib/secure-fs.js';
import net from 'net';

describe('DevServerService Event Types', () => {
  let testDataDir: string;
  let worktreeDir: string;
  let mockEmitter: EventEmitter;

  beforeEach(async () => {
    vi.clearAllMocks();
    vi.resetModules();

    testDataDir = path.join(os.tmpdir(), `dev-server-events-test-${Date.now()}`);
    worktreeDir = path.join(os.tmpdir(), `dev-server-worktree-events-test-${Date.now()}`);
    await fs.mkdir(testDataDir, { recursive: true });
    await fs.mkdir(worktreeDir, { recursive: true });

    mockEmitter = new EventEmitter();

    vi.mocked(secureFs.access).mockResolvedValue(undefined);

    const mockServer = new EventEmitter() as any;
    mockServer.listen = vi.fn().mockImplementation((port: number, host: string) => {
      process.nextTick(() => mockServer.emit('listening'));
    });
    mockServer.close = vi.fn();
    vi.mocked(net.createServer).mockReturnValue(mockServer);
  });

  afterEach(async () => {
    try {
      await fs.rm(testDataDir, { recursive: true, force: true });
      await fs.rm(worktreeDir, { recursive: true, force: true });
    } catch {
      // Ignore cleanup errors
    }
  });

  it('should emit all required event types during dev server lifecycle', async () => {
    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    const mockProcess = createMockProcess();
    vi.mocked(spawn).mockReturnValue(mockProcess as any);

    const emittedEvents: Record<string, any[]> = {
      'dev-server:starting': [],
      'dev-server:started': [],
      'dev-server:url-detected': [],
      'dev-server:output': [],
      'dev-server:stopped': [],
    };

    Object.keys(emittedEvents).forEach((type) => {
      mockEmitter.on(type, (payload) => emittedEvents[type].push(payload));
    });

    // 1. Starting & Started
    await service.startDevServer(worktreeDir, worktreeDir);
    expect(emittedEvents['dev-server:starting'].length).toBe(1);
    expect(emittedEvents['dev-server:started'].length).toBe(1);

    // 2. Output & URL Detected
    mockProcess.stdout.emit('data', Buffer.from('Local: http://localhost:5173/\n'));
    // Throttled output needs a bit of time
    await new Promise((resolve) => setTimeout(resolve, 100));
    expect(emittedEvents['dev-server:output'].length).toBeGreaterThanOrEqual(1);
    expect(emittedEvents['dev-server:url-detected'].length).toBe(1);
    expect(emittedEvents['dev-server:url-detected'][0].url).toBe('http://localhost:5173/');

    // 3. Stopped
    await service.stopDevServer(worktreeDir);
    expect(emittedEvents['dev-server:stopped'].length).toBe(1);
  });
});

// Helper to create a mock child process
function createMockProcess() {
  const mockProcess = new EventEmitter() as any;
  mockProcess.stdout = new EventEmitter();
  mockProcess.stderr = new EventEmitter();
  mockProcess.kill = vi.fn();
  mockProcess.killed = false;
  mockProcess.pid = 12345;
  mockProcess.unref = vi.fn();
  return mockProcess;
}
240
apps/server/tests/unit/services/dev-server-persistence.test.ts
Normal file
@@ -0,0 +1,240 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { EventEmitter } from 'events';
import path from 'path';
import os from 'os';
import fs from 'fs/promises';
import { spawn, execSync } from 'child_process';

// Mock child_process
vi.mock('child_process', () => ({
  spawn: vi.fn(),
  execSync: vi.fn(),
  execFile: vi.fn(),
}));

// Mock secure-fs
vi.mock('@/lib/secure-fs.js', () => ({
  access: vi.fn(),
}));

// Mock net
vi.mock('net', () => ({
  default: {
    createServer: vi.fn(),
  },
  createServer: vi.fn(),
}));

import * as secureFs from '@/lib/secure-fs.js';
import net from 'net';

describe('DevServerService Persistence & Sync', () => {
  let testDataDir: string;
  let worktreeDir: string;
  let mockEmitter: EventEmitter;

  beforeEach(async () => {
    vi.clearAllMocks();
    vi.resetModules();

    testDataDir = path.join(os.tmpdir(), `dev-server-persistence-test-${Date.now()}`);
    worktreeDir = path.join(os.tmpdir(), `dev-server-worktree-test-${Date.now()}`);
    await fs.mkdir(testDataDir, { recursive: true });
    await fs.mkdir(worktreeDir, { recursive: true });

    mockEmitter = new EventEmitter();

    // Default mock for secureFs.access - return resolved (file exists)
    vi.mocked(secureFs.access).mockResolvedValue(undefined);

    // Default mock for net.createServer - port available
    const mockServer = new EventEmitter() as any;
    mockServer.listen = vi.fn().mockImplementation((port: number, host: string) => {
      process.nextTick(() => mockServer.emit('listening'));
    });
    mockServer.close = vi.fn();
    vi.mocked(net.createServer).mockReturnValue(mockServer);

    // Default mock for execSync - no process on port
    vi.mocked(execSync).mockImplementation(() => {
      throw new Error('No process found');
    });
  });

  afterEach(async () => {
    try {
      await fs.rm(testDataDir, { recursive: true, force: true });
      await fs.rm(worktreeDir, { recursive: true, force: true });
    } catch {
      // Ignore cleanup errors
    }
  });

  it('should emit dev-server:starting when startDevServer is called', async () => {
    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    const mockProcess = createMockProcess();
    vi.mocked(spawn).mockReturnValue(mockProcess as any);

    const events: any[] = [];
    mockEmitter.on('dev-server:starting', (payload) => events.push(payload));

    await service.startDevServer(worktreeDir, worktreeDir);

    expect(events.length).toBe(1);
    expect(events[0].worktreePath).toBe(worktreeDir);
  });

  it('should prevent concurrent starts for the same worktree', async () => {
    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    // Delay spawn to simulate long starting time
    vi.mocked(spawn).mockImplementation(() => {
      const p = createMockProcess();
      // Don't return immediately, simulate some work
      return p as any;
    });

    // Start first one (don't await yet if we want to test concurrency)
    const promise1 = service.startDevServer(worktreeDir, worktreeDir);

    // Try to start second one immediately
    const result2 = await service.startDevServer(worktreeDir, worktreeDir);

    expect(result2.success).toBe(false);
    expect(result2.error).toContain('already starting');

    await promise1;
  });

  it('should persist state to dev-servers.json when started', async () => {
    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    const mockProcess = createMockProcess();
    vi.mocked(spawn).mockReturnValue(mockProcess as any);

    await service.startDevServer(worktreeDir, worktreeDir);

    const statePath = path.join(testDataDir, 'dev-servers.json');
    const exists = await fs
      .access(statePath)
      .then(() => true)
      .catch(() => false);
    expect(exists).toBe(true);

    const content = await fs.readFile(statePath, 'utf-8');
    const state = JSON.parse(content);
    expect(state.length).toBe(1);
    expect(state[0].worktreePath).toBe(worktreeDir);
  });

  it('should load state from dev-servers.json on initialize', async () => {
    // 1. Create a fake state file
    const persistedInfo = [
      {
        worktreePath: worktreeDir,
        allocatedPort: 3005,
        port: 3005,
        url: 'http://localhost:3005',
        startedAt: new Date().toISOString(),
        urlDetected: true,
        customCommand: 'npm run dev',
      },
    ];
    await fs.writeFile(path.join(testDataDir, 'dev-servers.json'), JSON.stringify(persistedInfo));

    // 2. Mock port as IN USE (so it re-attaches)
    const mockServer = new EventEmitter() as any;
    mockServer.listen = vi.fn().mockImplementation((port: number, host: string) => {
      // Fail to listen = port in use
      process.nextTick(() => mockServer.emit('error', new Error('EADDRINUSE')));
    });
    vi.mocked(net.createServer).mockReturnValue(mockServer);

    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    expect(service.isRunning(worktreeDir)).toBe(true);
    const info = service.getServerInfo(worktreeDir);
    expect(info?.port).toBe(3005);
  });

  it('should prune stale servers from state on initialize if port is available', async () => {
    // 1. Create a fake state file
    const persistedInfo = [
      {
        worktreePath: worktreeDir,
        allocatedPort: 3005,
        port: 3005,
        url: 'http://localhost:3005',
        startedAt: new Date().toISOString(),
        urlDetected: true,
      },
    ];
    await fs.writeFile(path.join(testDataDir, 'dev-servers.json'), JSON.stringify(persistedInfo));

    // 2. Mock port as AVAILABLE (so it prunes)
    const mockServer = new EventEmitter() as any;
    mockServer.listen = vi.fn().mockImplementation((port: number, host: string) => {
      process.nextTick(() => mockServer.emit('listening'));
    });
    mockServer.close = vi.fn();
    vi.mocked(net.createServer).mockReturnValue(mockServer);

    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    expect(service.isRunning(worktreeDir)).toBe(false);

    // Give it a moment to complete the pruning saveState
    await new Promise((resolve) => setTimeout(resolve, 100));

    // Check if file was updated
    const content = await fs.readFile(path.join(testDataDir, 'dev-servers.json'), 'utf-8');
    const state = JSON.parse(content);
    expect(state.length).toBe(0);
  });

  it('should update persisted state when URL is detected', async () => {
    const { getDevServerService } = await import('@/services/dev-server-service.js');
    const service = getDevServerService();
    await service.initialize(testDataDir, mockEmitter as any);

    const mockProcess = createMockProcess();
    vi.mocked(spawn).mockReturnValue(mockProcess as any);

    await service.startDevServer(worktreeDir, worktreeDir);

    // Simulate output with URL
    mockProcess.stdout.emit('data', Buffer.from('Local: http://localhost:5555/\n'));

    // Give it a moment to process and save (needs to wait for saveQueue)
    await new Promise((resolve) => setTimeout(resolve, 300));

    const content = await fs.readFile(path.join(testDataDir, 'dev-servers.json'), 'utf-8');
    const state = JSON.parse(content);
    expect(state[0].url).toBe('http://localhost:5555/');
    expect(state[0].port).toBe(5555);
    expect(state[0].urlDetected).toBe(true);
  });
});

// Helper to create a mock child process
function createMockProcess() {
  const mockProcess = new EventEmitter() as any;
  mockProcess.stdout = new EventEmitter();
  mockProcess.stderr = new EventEmitter();
  mockProcess.kill = vi.fn();
  mockProcess.killed = false;
  mockProcess.pid = 12345;
  mockProcess.unref = vi.fn();
  return mockProcess;
}
@@ -458,6 +458,21 @@ describe('execution-service.ts', () => {
    expect(callArgs[6]).toBe('claude-sonnet-4');
  });

  it('passes providerId to runAgentFn when present on feature', async () => {
    const featureWithProvider: Feature = {
      ...testFeature,
      providerId: 'zai-provider-1',
    };
    vi.mocked(mockLoadFeatureFn).mockResolvedValue(featureWithProvider);

    await service.executeFeature('/test/project', 'feature-1');

    expect(mockRunAgentFn).toHaveBeenCalled();
    const callArgs = mockRunAgentFn.mock.calls[0];
    const options = callArgs[7];
    expect(options.providerId).toBe('zai-provider-1');
  });

  it('executes pipeline after agent completes', async () => {
    const pipelineSteps = [{ id: 'step-1', name: 'Step 1', order: 1, instructions: 'Do step 1' }];
    vi.mocked(pipelineService.getPipelineConfig).mockResolvedValue({
@@ -1316,16 +1331,19 @@
    );
  });

-  it('falls back to project path when worktree not found', async () => {
+  it('emits error and does not execute agent when worktree is not found in worktree mode', async () => {
    vi.mocked(mockWorktreeResolver.findWorktreeForBranch).mockResolvedValue(null);

    await service.executeFeature('/test/project', 'feature-1', true);

-    // Should still run agent, just with project path
-    expect(mockRunAgentFn).toHaveBeenCalled();
-    const callArgs = mockRunAgentFn.mock.calls[0];
-    // First argument is workDir - should be normalized path to /test/project
-    expect(callArgs[0]).toBe(normalizePath('/test/project'));
+    expect(mockRunAgentFn).not.toHaveBeenCalled();
+    expect(mockEventBus.emitAutoModeEvent).toHaveBeenCalledWith(
+      'auto_mode_error',
+      expect.objectContaining({
+        featureId: 'feature-1',
+        error: 'Worktree enabled but no worktree found for feature branch "feature/test-1".',
+      })
+    );
  });

  it('skips worktree resolution when useWorktrees is false', async () => {
@@ -0,0 +1,356 @@
|
||||
/**
|
||||
* Tests for providerId passthrough in PipelineOrchestrator
|
||||
* Verifies that feature.providerId is forwarded to runAgentFn in both
|
||||
* executePipeline (step execution) and executeTestStep (test fix) contexts.
|
||||
*/
|
||||
|
||||
import { describe, it, expect, vi, beforeEach } from 'vitest';
|
||||
import type { Feature, PipelineStep } from '@automaker/types';
|
||||
import {
|
||||
PipelineOrchestrator,
|
||||
type PipelineContext,
|
||||
type UpdateFeatureStatusFn,
|
||||
type BuildFeaturePromptFn,
|
||||
type ExecuteFeatureFn,
|
||||
type RunAgentFn,
|
||||
} from '../../../src/services/pipeline-orchestrator.js';
|
||||
import type { TypedEventBus } from '../../../src/services/typed-event-bus.js';
|
||||
import type { FeatureStateManager } from '../../../src/services/feature-state-manager.js';
|
||||
import type { AgentExecutor } from '../../../src/services/agent-executor.js';
|
||||
import type { WorktreeResolver } from '../../../src/services/worktree-resolver.js';
|
||||
import type { SettingsService } from '../../../src/services/settings-service.js';
|
||||
import type { ConcurrencyManager } from '../../../src/services/concurrency-manager.js';
|
||||
import type { TestRunnerService } from '../../../src/services/test-runner-service.js';
|
||||
import * as secureFs from '../../../src/lib/secure-fs.js';
|
||||
import { getFeatureDir } from '@automaker/platform';
|
||||
import {
|
||||
getPromptCustomization,
|
||||
getAutoLoadClaudeMdSetting,
|
||||
filterClaudeMdFromContext,
|
||||
} from '../../../src/lib/settings-helpers.js';
|
||||
|
// Mock pipelineService
vi.mock('../../../src/services/pipeline-service.js', () => ({
  pipelineService: {
    isPipelineStatus: vi.fn(),
    getStepIdFromStatus: vi.fn(),
    getPipelineConfig: vi.fn(),
    getNextStatus: vi.fn(),
  },
}));

// Mock merge-service
vi.mock('../../../src/services/merge-service.js', () => ({
  performMerge: vi.fn().mockResolvedValue({ success: true }),
}));

// Mock secureFs
vi.mock('../../../src/lib/secure-fs.js', () => ({
  readFile: vi.fn(),
  access: vi.fn(),
}));

// Mock settings helpers
vi.mock('../../../src/lib/settings-helpers.js', () => ({
  getPromptCustomization: vi.fn().mockResolvedValue({
    taskExecution: {
      implementationInstructions: 'test instructions',
      playwrightVerificationInstructions: 'test playwright',
    },
  }),
  getAutoLoadClaudeMdSetting: vi.fn().mockResolvedValue(true),
  getUseClaudeCodeSystemPromptSetting: vi.fn().mockResolvedValue(true),
  filterClaudeMdFromContext: vi.fn().mockReturnValue('context prompt'),
}));

// Mock validateWorkingDirectory
vi.mock('../../../src/lib/sdk-options.js', () => ({
  validateWorkingDirectory: vi.fn(),
}));

// Mock platform
vi.mock('@automaker/platform', () => ({
  getFeatureDir: vi
    .fn()
    .mockImplementation(
      (projectPath: string, featureId: string) => `${projectPath}/.automaker/features/${featureId}`
    ),
}));

// Mock model-resolver
vi.mock('@automaker/model-resolver', () => ({
  resolveModelString: vi.fn().mockReturnValue('claude-sonnet-4'),
  DEFAULT_MODELS: { claude: 'claude-sonnet-4' },
}));

describe('PipelineOrchestrator - providerId passthrough', () => {
  let mockEventBus: TypedEventBus;
  let mockFeatureStateManager: FeatureStateManager;
  let mockAgentExecutor: AgentExecutor;
  let mockTestRunnerService: TestRunnerService;
  let mockWorktreeResolver: WorktreeResolver;
  let mockConcurrencyManager: ConcurrencyManager;
  let mockUpdateFeatureStatusFn: UpdateFeatureStatusFn;
  let mockLoadContextFilesFn: ReturnType<typeof vi.fn>;
  let mockBuildFeaturePromptFn: BuildFeaturePromptFn;
  let mockExecuteFeatureFn: ExecuteFeatureFn;
  let mockRunAgentFn: RunAgentFn;
  let orchestrator: PipelineOrchestrator;

  const testSteps: PipelineStep[] = [
    {
      id: 'step-1',
      name: 'Step 1',
      order: 1,
      instructions: 'Do step 1',
      colorClass: 'blue',
      createdAt: '',
      updatedAt: '',
    },
  ];

  const createFeatureWithProvider = (providerId?: string): Feature => ({
    id: 'feature-1',
    title: 'Test Feature',
    category: 'test',
    description: 'Test description',
    status: 'pipeline_step-1',
    branchName: 'feature/test-1',
    providerId,
  });

  beforeEach(() => {
    vi.clearAllMocks();

    mockEventBus = {
      emitAutoModeEvent: vi.fn(),
      getUnderlyingEmitter: vi.fn().mockReturnValue({}),
    } as unknown as TypedEventBus;

    mockFeatureStateManager = {
      updateFeatureStatus: vi.fn().mockResolvedValue(undefined),
      loadFeature: vi.fn().mockResolvedValue(createFeatureWithProvider()),
    } as unknown as FeatureStateManager;

    mockAgentExecutor = {
      execute: vi.fn().mockResolvedValue({ success: true }),
    } as unknown as AgentExecutor;

    mockTestRunnerService = {
      startTests: vi
        .fn()
        .mockResolvedValue({ success: true, result: { sessionId: 'test-session-1' } }),
      getSession: vi.fn().mockReturnValue({
        status: 'passed',
        exitCode: 0,
        startedAt: new Date(),
        finishedAt: new Date(),
      }),
      getSessionOutput: vi
        .fn()
        .mockReturnValue({ success: true, result: { output: 'All tests passed' } }),
    } as unknown as TestRunnerService;

    mockWorktreeResolver = {
      findWorktreeForBranch: vi.fn().mockResolvedValue('/test/worktree'),
      getCurrentBranch: vi.fn().mockResolvedValue('main'),
    } as unknown as WorktreeResolver;

    mockConcurrencyManager = {
      acquire: vi.fn().mockImplementation(({ featureId, isAutoMode }) => ({
        featureId,
        projectPath: '/test/project',
        abortController: new AbortController(),
        branchName: null,
        worktreePath: null,
        isAutoMode: isAutoMode ?? false,
      })),
      release: vi.fn(),
      getRunningFeature: vi.fn().mockReturnValue(undefined),
    } as unknown as ConcurrencyManager;

    mockUpdateFeatureStatusFn = vi.fn().mockResolvedValue(undefined);
    mockLoadContextFilesFn = vi.fn().mockResolvedValue({ contextPrompt: 'test context' });
    mockBuildFeaturePromptFn = vi.fn().mockReturnValue('Feature prompt content');
    mockExecuteFeatureFn = vi.fn().mockResolvedValue(undefined);
    mockRunAgentFn = vi.fn().mockResolvedValue(undefined);

    vi.mocked(secureFs.readFile).mockResolvedValue('Previous context');
    vi.mocked(secureFs.access).mockResolvedValue(undefined);
    vi.mocked(getFeatureDir).mockImplementation(
      (projectPath: string, featureId: string) => `${projectPath}/.automaker/features/${featureId}`
    );
    vi.mocked(getPromptCustomization).mockResolvedValue({
      taskExecution: {
        implementationInstructions: 'test instructions',
        playwrightVerificationInstructions: 'test playwright',
      },
    } as any);
    vi.mocked(getAutoLoadClaudeMdSetting).mockResolvedValue(true);
    vi.mocked(filterClaudeMdFromContext).mockReturnValue('context prompt');

    orchestrator = new PipelineOrchestrator(
      mockEventBus,
      mockFeatureStateManager,
      mockAgentExecutor,
      mockTestRunnerService,
      mockWorktreeResolver,
      mockConcurrencyManager,
      null,
      mockUpdateFeatureStatusFn,
      mockLoadContextFilesFn,
      mockBuildFeaturePromptFn,
      mockExecuteFeatureFn,
      mockRunAgentFn
    );
  });

  describe('executePipeline', () => {
    it('should pass providerId to runAgentFn options when feature has providerId', async () => {
      const feature = createFeatureWithProvider('moonshot-ai');
      const context: PipelineContext = {
        projectPath: '/test/project',
        featureId: 'feature-1',
        feature,
        steps: testSteps,
        workDir: '/test/project',
        worktreePath: null,
        branchName: 'feature/test-1',
        abortController: new AbortController(),
        autoLoadClaudeMd: true,
        testAttempts: 0,
        maxTestAttempts: 5,
      };

      await orchestrator.executePipeline(context);

      expect(mockRunAgentFn).toHaveBeenCalledTimes(1);
      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('providerId', 'moonshot-ai');
    });

    it('should pass undefined providerId when feature has no providerId', async () => {
      const feature = createFeatureWithProvider(undefined);
      const context: PipelineContext = {
        projectPath: '/test/project',
        featureId: 'feature-1',
        feature,
        steps: testSteps,
        workDir: '/test/project',
        worktreePath: null,
        branchName: 'feature/test-1',
        abortController: new AbortController(),
        autoLoadClaudeMd: true,
        testAttempts: 0,
        maxTestAttempts: 5,
      };

      await orchestrator.executePipeline(context);

      expect(mockRunAgentFn).toHaveBeenCalledTimes(1);
      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('providerId', undefined);
    });

    it('should pass status alongside providerId in options', async () => {
      const feature = createFeatureWithProvider('zhipu');
      const context: PipelineContext = {
        projectPath: '/test/project',
        featureId: 'feature-1',
        feature,
        steps: testSteps,
        workDir: '/test/project',
        worktreePath: null,
        branchName: 'feature/test-1',
        abortController: new AbortController(),
        autoLoadClaudeMd: true,
        testAttempts: 0,
        maxTestAttempts: 5,
      };

      await orchestrator.executePipeline(context);

      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('providerId', 'zhipu');
      expect(options).toHaveProperty('status');
    });
  });

  describe('executeTestStep', () => {
    it('should pass providerId in test fix agent options when tests fail', async () => {
      vi.mocked(mockTestRunnerService.getSession)
        .mockReturnValueOnce({
          status: 'failed',
          exitCode: 1,
          startedAt: new Date(),
          finishedAt: new Date(),
        } as never)
        .mockReturnValueOnce({
          status: 'passed',
          exitCode: 0,
          startedAt: new Date(),
          finishedAt: new Date(),
        } as never);

      const feature = createFeatureWithProvider('custom-provider');
      const context: PipelineContext = {
        projectPath: '/test/project',
        featureId: 'feature-1',
        feature,
        steps: testSteps,
        workDir: '/test/project',
        worktreePath: null,
        branchName: 'feature/test-1',
        abortController: new AbortController(),
        autoLoadClaudeMd: true,
        testAttempts: 0,
        maxTestAttempts: 5,
      };

      await orchestrator.executeTestStep(context, 'npm test');

      // The fix agent should receive providerId
      expect(mockRunAgentFn).toHaveBeenCalledTimes(1);
      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('providerId', 'custom-provider');
    }, 15000);

    it('should pass thinkingLevel in test fix agent options', async () => {
      vi.mocked(mockTestRunnerService.getSession)
        .mockReturnValueOnce({
          status: 'failed',
          exitCode: 1,
          startedAt: new Date(),
          finishedAt: new Date(),
        } as never)
        .mockReturnValueOnce({
          status: 'passed',
          exitCode: 0,
          startedAt: new Date(),
          finishedAt: new Date(),
        } as never);

      const feature = createFeatureWithProvider('moonshot-ai');
      feature.thinkingLevel = 'high';
      const context: PipelineContext = {
        projectPath: '/test/project',
        featureId: 'feature-1',
        feature,
        steps: testSteps,
        workDir: '/test/project',
        worktreePath: null,
        branchName: 'feature/test-1',
        abortController: new AbortController(),
        autoLoadClaudeMd: true,
        testAttempts: 0,
        maxTestAttempts: 5,
      };

      await orchestrator.executeTestStep(context, 'npm test');

      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('thinkingLevel', 'high');
      expect(options).toHaveProperty('providerId', 'moonshot-ai');
    }, 15000);
  });
});
@@ -0,0 +1,302 @@
/**
 * Tests for status + providerId coexistence in PipelineOrchestrator options.
 *
 * During rebase onto upstream/v1.0.0rc, a merge conflict arose where
 * upstream added `status: currentStatus` and the incoming branch added
 * `providerId: feature.providerId`. The conflict resolution kept BOTH fields.
 *
 * This test validates that both fields coexist correctly in the options
 * object passed to runAgentFn in both executePipeline and executeTestStep.
 */

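The conflict resolution described above can be sketched as a plain merge of the two option fields. This is an illustrative sketch only; `buildAgentOptions` and `AgentOptionsSketch` are hypothetical names, not part of the orchestrator's real API:

```typescript
// Hypothetical sketch of the resolved merge conflict: the options object
// keeps upstream's `status` AND the incoming branch's `providerId`.
interface AgentOptionsSketch {
  status: string;
  providerId?: string;
}

function buildAgentOptions(feature: { status: string; providerId?: string }): AgentOptionsSketch {
  return {
    status: feature.status, // upstream's addition
    providerId: feature.providerId, // incoming branch's addition
  };
}

const opts = buildAgentOptions({ status: 'pipeline_implement', providerId: 'moonshot-ai' });
console.log(opts.status, opts.providerId); // → pipeline_implement moonshot-ai
```

The tests below assert exactly this shape on the options argument that `runAgentFn` receives.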
import { describe, it, expect, vi, beforeEach } from 'vitest';
import type { Feature, PipelineStep } from '@automaker/types';
import {
  PipelineOrchestrator,
  type PipelineContext,
  type UpdateFeatureStatusFn,
  type BuildFeaturePromptFn,
  type ExecuteFeatureFn,
  type RunAgentFn,
} from '../../../src/services/pipeline-orchestrator.js';
import type { TypedEventBus } from '../../../src/services/typed-event-bus.js';
import type { FeatureStateManager } from '../../../src/services/feature-state-manager.js';
import type { AgentExecutor } from '../../../src/services/agent-executor.js';
import type { WorktreeResolver } from '../../../src/services/worktree-resolver.js';
import type { ConcurrencyManager } from '../../../src/services/concurrency-manager.js';
import type { TestRunnerService } from '../../../src/services/test-runner-service.js';
import * as secureFs from '../../../src/lib/secure-fs.js';
import { getFeatureDir } from '@automaker/platform';
import {
  getPromptCustomization,
  getAutoLoadClaudeMdSetting,
  filterClaudeMdFromContext,
} from '../../../src/lib/settings-helpers.js';

vi.mock('../../../src/services/pipeline-service.js', () => ({
  pipelineService: {
    isPipelineStatus: vi.fn(),
    getStepIdFromStatus: vi.fn(),
    getPipelineConfig: vi.fn(),
    getNextStatus: vi.fn(),
  },
}));

vi.mock('../../../src/services/merge-service.js', () => ({
  performMerge: vi.fn().mockResolvedValue({ success: true }),
}));

vi.mock('../../../src/lib/secure-fs.js', () => ({
  readFile: vi.fn(),
  access: vi.fn(),
}));

vi.mock('../../../src/lib/settings-helpers.js', () => ({
  getPromptCustomization: vi.fn().mockResolvedValue({
    taskExecution: {
      implementationInstructions: 'test instructions',
      playwrightVerificationInstructions: 'test playwright',
    },
  }),
  getAutoLoadClaudeMdSetting: vi.fn().mockResolvedValue(true),
  getUseClaudeCodeSystemPromptSetting: vi.fn().mockResolvedValue(true),
  filterClaudeMdFromContext: vi.fn().mockReturnValue('context prompt'),
}));

vi.mock('../../../src/lib/sdk-options.js', () => ({
  validateWorkingDirectory: vi.fn(),
}));

vi.mock('@automaker/platform', () => ({
  getFeatureDir: vi
    .fn()
    .mockImplementation(
      (projectPath: string, featureId: string) => `${projectPath}/.automaker/features/${featureId}`
    ),
}));

vi.mock('@automaker/model-resolver', () => ({
  resolveModelString: vi.fn().mockReturnValue('claude-sonnet-4'),
  DEFAULT_MODELS: { claude: 'claude-sonnet-4' },
}));

describe('PipelineOrchestrator - status and providerId coexistence', () => {
  let mockRunAgentFn: RunAgentFn;
  let orchestrator: PipelineOrchestrator;

  const testSteps: PipelineStep[] = [
    {
      id: 'implement',
      name: 'Implement Feature',
      order: 1,
      instructions: 'Implement the feature',
      colorClass: 'blue',
      createdAt: '',
      updatedAt: '',
    },
  ];

  const createFeature = (overrides: Partial<Feature> = {}): Feature => ({
    id: 'feature-1',
    title: 'Test Feature',
    category: 'test',
    description: 'Test description',
    status: 'pipeline_implement',
    branchName: 'feature/test-1',
    providerId: 'moonshot-ai',
    thinkingLevel: 'medium',
    reasoningEffort: 'high',
    ...overrides,
  });

  const createContext = (feature: Feature): PipelineContext => ({
    projectPath: '/test/project',
    featureId: feature.id,
    feature,
    steps: testSteps,
    workDir: '/test/project',
    worktreePath: null,
    branchName: feature.branchName ?? 'main',
    abortController: new AbortController(),
    autoLoadClaudeMd: true,
    testAttempts: 0,
    maxTestAttempts: 5,
  });

  beforeEach(() => {
    vi.clearAllMocks();
    mockRunAgentFn = vi.fn().mockResolvedValue(undefined);

    vi.mocked(secureFs.readFile).mockResolvedValue('Previous context');
    vi.mocked(secureFs.access).mockResolvedValue(undefined);
    vi.mocked(getFeatureDir).mockImplementation(
      (projectPath: string, featureId: string) => `${projectPath}/.automaker/features/${featureId}`
    );
    vi.mocked(getPromptCustomization).mockResolvedValue({
      taskExecution: {
        implementationInstructions: 'test instructions',
        playwrightVerificationInstructions: 'test playwright',
      },
    } as any);
    vi.mocked(getAutoLoadClaudeMdSetting).mockResolvedValue(true);
    vi.mocked(filterClaudeMdFromContext).mockReturnValue('context prompt');

    const mockEventBus = {
      emitAutoModeEvent: vi.fn(),
      getUnderlyingEmitter: vi.fn().mockReturnValue({}),
    } as unknown as TypedEventBus;

    const mockFeatureStateManager = {
      updateFeatureStatus: vi.fn().mockResolvedValue(undefined),
      loadFeature: vi.fn().mockResolvedValue(createFeature()),
    } as unknown as FeatureStateManager;

    const mockTestRunnerService = {
      startTests: vi
        .fn()
        .mockResolvedValue({ success: true, result: { sessionId: 'test-session-1' } }),
      getSession: vi.fn().mockReturnValue({
        status: 'passed',
        exitCode: 0,
        startedAt: new Date(),
        finishedAt: new Date(),
      }),
      getSessionOutput: vi
        .fn()
        .mockReturnValue({ success: true, result: { output: 'All tests passed' } }),
    } as unknown as TestRunnerService;

    orchestrator = new PipelineOrchestrator(
      mockEventBus,
      mockFeatureStateManager,
      {} as AgentExecutor,
      mockTestRunnerService,
      {
        findWorktreeForBranch: vi.fn().mockResolvedValue('/test/worktree'),
        getCurrentBranch: vi.fn().mockResolvedValue('main'),
      } as unknown as WorktreeResolver,
      {
        acquire: vi.fn().mockImplementation(({ featureId }) => ({
          featureId,
          projectPath: '/test/project',
          abortController: new AbortController(),
          branchName: null,
          worktreePath: null,
          isAutoMode: false,
        })),
        release: vi.fn(),
        getRunningFeature: vi.fn().mockReturnValue(undefined),
      } as unknown as ConcurrencyManager,
      null,
      vi.fn().mockResolvedValue(undefined),
      vi.fn().mockResolvedValue({ contextPrompt: 'test context' }),
      vi.fn().mockReturnValue('Feature prompt content'),
      vi.fn().mockResolvedValue(undefined),
      mockRunAgentFn
    );
  });

  describe('executePipeline - options object', () => {
    it('should pass both status and providerId in options', async () => {
      const feature = createFeature({ providerId: 'moonshot-ai' });
      const context = createContext(feature);

      await orchestrator.executePipeline(context);

      expect(mockRunAgentFn).toHaveBeenCalledTimes(1);
      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('status', 'pipeline_implement');
      expect(options).toHaveProperty('providerId', 'moonshot-ai');
    });

    it('should pass status even when providerId is undefined', async () => {
      const feature = createFeature({ providerId: undefined });
      const context = createContext(feature);

      await orchestrator.executePipeline(context);

      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('status', 'pipeline_implement');
      expect(options).toHaveProperty('providerId', undefined);
    });

    it('should pass thinkingLevel and reasoningEffort alongside status and providerId', async () => {
      const feature = createFeature({
        providerId: 'zhipu',
        thinkingLevel: 'high',
        reasoningEffort: 'medium',
      });
      const context = createContext(feature);

      await orchestrator.executePipeline(context);

      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('status', 'pipeline_implement');
      expect(options).toHaveProperty('providerId', 'zhipu');
      expect(options).toHaveProperty('thinkingLevel', 'high');
      expect(options).toHaveProperty('reasoningEffort', 'medium');
    });
  });

  describe('executeTestStep - options object', () => {
    it('should pass both status and providerId in test fix agent options', async () => {
      const feature = createFeature({
        status: 'running',
        providerId: 'custom-provider',
      });
      const context = createContext(feature);

      const mockTestRunner = orchestrator['testRunnerService'] as any;
      vi.mocked(mockTestRunner.getSession)
        .mockReturnValueOnce({
          status: 'failed',
          exitCode: 1,
          startedAt: new Date(),
          finishedAt: new Date(),
        })
        .mockReturnValueOnce({
          status: 'passed',
          exitCode: 0,
          startedAt: new Date(),
          finishedAt: new Date(),
        });

      await orchestrator.executeTestStep(context, 'npm test');

      expect(mockRunAgentFn).toHaveBeenCalledTimes(1);
      const options = mockRunAgentFn.mock.calls[0][7];
      expect(options).toHaveProperty('status', 'running');
      expect(options).toHaveProperty('providerId', 'custom-provider');
    }, 15000);

    it('should pass feature.status (not currentStatus) in test fix context', async () => {
      const feature = createFeature({
        status: 'pipeline_test',
        providerId: 'moonshot-ai',
      });
      const context = createContext(feature);

      const mockTestRunner = orchestrator['testRunnerService'] as any;
      vi.mocked(mockTestRunner.getSession)
        .mockReturnValueOnce({
          status: 'failed',
          exitCode: 1,
          startedAt: new Date(),
          finishedAt: new Date(),
        })
        .mockReturnValueOnce({
          status: 'passed',
          exitCode: 0,
          startedAt: new Date(),
          finishedAt: new Date(),
        });

      await orchestrator.executeTestStep(context, 'npm test');

      const options = mockRunAgentFn.mock.calls[0][7];
      // In test fix context, status should come from context.feature.status
      expect(options).toHaveProperty('status', 'pipeline_test');
      expect(options).toHaveProperty('providerId', 'moonshot-ai');
    }, 15000);
  });
});
@@ -107,6 +107,25 @@ branch refs/heads/feature-y
    expect(result).toBe(normalizePath('/Users/dev/project/.worktrees/feature-x'));
  });

  it('should normalize refs/heads and trim when resolving target branch', async () => {
    mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));

    const result = await resolver.findWorktreeForBranch(
      '/Users/dev/project',
      ' refs/heads/feature-x '
    );

    expect(result).toBe(normalizePath('/Users/dev/project/.worktrees/feature-x'));
  });

  it('should normalize remote-style target branch names', async () => {
    mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));

    const result = await resolver.findWorktreeForBranch('/Users/dev/project', 'origin/feature-x');

    expect(result).toBe(normalizePath('/Users/dev/project/.worktrees/feature-x'));
  });

  it('should return null when branch not found', async () => {
    mockExecAsync(async () => ({ stdout: porcelainOutput, stderr: '' }));