mirror of
https://github.com/AutoMaker-Org/automaker.git
synced 2026-03-18 22:33:08 +00:00
Bug fixes and stability improvements (#815)
* fix(copilot): correct tool.execution_complete event handling

  The CopilotProvider was using an incorrect event type and data structure for tool execution completion events from the @github/copilot-sdk, causing tool call outputs to be empty. Changes:
  - Update event type from 'tool.execution_end' to 'tool.execution_complete'
  - Fix data structure to use nested result.content instead of flat result
  - Fix error structure to use error.message instead of flat error
  - Add success field to match SDK event structure
  - Add tests for empty and missing result handling

  This aligns with the official @github/copilot-sdk v0.1.16 types defined in session-events.d.ts.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* test(copilot): add edge case test for error with code field

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* refactor(copilot): improve error handling and code quality

  Code review improvements:
  - Extract the magic string '[ERROR]' to a TOOL_ERROR_PREFIX constant
  - Add null-safe error handling with direct error variable assignment
  - Include error codes in error messages for better debugging
  - Add JSDoc documentation for the tool.execution_complete handler
  - Update tests to verify error codes are displayed
  - Add a missing tool_use_id assertion in the error test

  These changes improve:
  - Code maintainability (no magic strings)
  - Debugging experience (error codes now visible)
  - Type safety (explicit null checks)
  - Test coverage (verify error code formatting)

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Changes from fix/bug-fixes-1-0

* test(copilot): add edge case test for error with code field

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Changes from fix/bug-fixes-1-0

* fix: Handle detached HEAD state in worktree discovery and recovery

* fix: Remove unused isDevServerStarting prop and md: breakpoint classes

* fix: Add missing dependency and sanitize persisted cache data

* feat: Ensure NODE_ENV is set to test in vitest configs

* feat: Configure Playwright to run only E2E tests

* fix: Improve PR tracking and dev server lifecycle management

* feat: Add settings-based defaults for planning mode, model config, and custom providers. Fixes #816

* feat: Add worktree and branch selector to graph view

* fix: Add timeout and error handling for worktree HEAD ref resolution

* fix: use absolute icon path and place icon outside asar on Linux

  The hicolor icon theme index only lists sizes up to 512x512, so an icon installed only at 1024x1024 is invisible to GNOME/KDE's theme resolver, causing both the app launcher and taskbar to show a generic icon. Additionally, BrowserWindow.icon cannot be read by the window manager when the file is inside app.asar.

  - extraResources: copy logo_larger.png to resources/ (outside asar) so it lands at /opt/Automaker/resources/logo_larger.png on install
  - linux.desktop.Icon: set to the absolute resources path, bypassing the hicolor theme lookup and its size constraints entirely
  - icon-manager.ts: in Linux production builds, use process.resourcesPath so BrowserWindow receives a real filesystem path the WM can read directly

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: use linux.desktop.entry for custom desktop Icon field

  electron-builder v26 rejects arbitrary keys in linux.desktop; the correct schema wraps custom .desktop overrides inside desktop.entry.

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: set desktop name on Linux so taskbar uses the correct app icon

  Without app.setDesktopName(), the window manager cannot associate the running Electron process with automaker.desktop, so GNOME/KDE fall back to _NET_WM_ICON, which defaults to Electron's own bundled icon. Calling app.setDesktopName('automaker.desktop') before any window is created sets the _GTK_APPLICATION_ID hint and XDG app_id, so the WM picks up the desktop entry's Icon for the taskbar.
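The corrected event shape behind the copilot fix above can be sketched as follows. The field names (success, nested result.content, error.message with an optional error.code) and the TOOL_ERROR_PREFIX constant come from the commit message; the helper function toToolOutput and its name are illustrative, not the CopilotProvider's actual code.

```typescript
// Illustrative mapping from a tool.execution_complete event to the tool call
// output string. Event field names follow the commit message; toToolOutput
// itself is a hypothetical stand-in for the provider's real handler.
const TOOL_ERROR_PREFIX = '[ERROR]';

interface ToolExecutionCompleteEvent {
  tool_use_id: string;
  success: boolean;
  result?: { content?: string }; // nested result.content, not a flat result
  error?: { message: string; code?: string };
}

function toToolOutput(event: ToolExecutionCompleteEvent): string {
  if (!event.success) {
    const error = event.error; // null-safe direct assignment
    const code = error?.code ? ` (${error.code})` : '';
    return `${TOOL_ERROR_PREFIX} ${error?.message ?? 'Unknown error'}${code}`;
  }
  // An empty or missing result yields an empty output rather than a crash.
  return event.result?.content ?? '';
}
```

This matches the tested edge cases: a missing result produces an empty string, and an error with a code surfaces that code in the formatted message.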
  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: memory and context views mobile friendly (#818)

  * Changes from fix/memory-and-context-mobile-friendly

  * fix: Improve file extension detection and add path traversal protection

  * refactor: Extract file extension utilities and add path traversal guards

    Code review improvements:
    - Extract isMarkdownFilename and isImageFilename to shared image-utils.ts
    - Remove duplicated code from context-view.tsx and memory-view.tsx
    - Add a path traversal guard for context fixture utilities (matching memory)
    - Add 7 new tests for context fixture path traversal protection
    - All 61 tests pass

    Addresses code review feedback from PR #813

    Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

  * test: Add e2e tests for profiles crud and board background persistence

  * Update apps/ui/playwright.config.ts

    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

  * fix: Add robust test navigation handling and file filtering

  * fix: Format NODE_OPTIONS configuration on a single line

  * test: Update profiles and board background persistence tests

  * test: Replace iPhone 13 Pro with Pixel 5 for mobile test consistency

  * Update apps/ui/src/components/views/context-view.tsx

    Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

  * chore: Remove test project directory

  * feat: Filter context files by type and improve mobile menu visibility

  ---------

  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: Improve test reliability and localhost handling

* chore: Use explicit TEST_USE_EXTERNAL_BACKEND env var for server cleanup

* feat: Add E2E/CI mock mode for provider factory and auth verification

* feat: Add remoteBranch parameter to pull and rebase operations

* chore: Enhance E2E testing setup with worker isolation and auth state management

  - Updated .gitignore to include worker-specific test fixtures.
  - Modified e2e-tests.yml to implement test sharding for improved CI performance.
  - Refactored global setup to authenticate once and save session state for reuse across tests.
  - Introduced worker-isolated fixture paths to prevent conflicts during parallel test execution.
  - Improved test navigation and loading handling for better reliability.
  - Updated various test files to use the new auth state management and fixture paths.

* fix: Update Playwright configuration and improve test reliability

  - Increased the number of workers in the Playwright configuration for better parallelism in CI environments.
  - Enhanced the board background persistence test to ensure dropdown stability by waiting for the list to populate before interaction, improving test reliability.

* chore: Simplify E2E test configuration and enhance mock implementations

  - Updated e2e-tests.yml to run tests in a single shard for streamlined CI execution.
  - Enhanced unit tests for worktree list handling by introducing a mock for execGitCommand, improving test reliability and coverage.
  - Refactored setup functions to better manage command mocks for git operations in tests.
  - Improved error handling in mkdirSafe to account for undefined stats in certain environments.

* refactor: Improve test configurations and enhance error handling

  - Updated the Playwright configuration to clear VITE_SERVER_URL, ensuring the frontend uses the Vite proxy and preventing cookie domain mismatches.
  - Enhanced MergeRebaseDialog logic to normalize selectedBranch for better handling of various ref formats.
  - Improved global setup with a more robust backend health check, throwing an error if the backend is not healthy after retries.
  - Refactored project creation tests to handle file existence checks more reliably.
  - Added error handling for missing E2E source fixtures to guide the setup process.
  - Enhanced memory navigation to handle sandbox dialog visibility more effectively.
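The path traversal protection mentioned in the mobile-friendly PR notes above can be sketched like this. isMarkdownFilename is named in the commit message; resolveFixturePath, its reject-with-null behavior, and the exact extension list are assumptions for illustration (path.posix is used here so the sketch is platform-independent).

```typescript
import * as path from 'path';

// Sketch of a shared extension check and a fixture path traversal guard,
// in the spirit of the changes described above. The guard's shape
// (return null on escape) is an assumption, not the repo's actual API.
function isMarkdownFilename(name: string): boolean {
  return /\.(md|markdown)$/i.test(name);
}

function resolveFixturePath(baseDir: string, requested: string): string | null {
  const base = path.posix.resolve(baseDir);
  const resolved = path.posix.resolve(base, requested);
  // Reject any resolved path that escapes the fixture directory.
  if (resolved !== base && !resolved.startsWith(base + '/')) {
    return null;
  }
  return resolved;
}
```

A `../`-style request resolves outside the base directory and is rejected, while a normal relative filename resolves to a path inside it.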
* refactor: Enhance Git command execution and improve test configurations

  - Updated Git command execution to merge environment paths correctly, ensuring the proper command execution context.
  - Refactored the Git initialization process to handle errors more gracefully and to ensure user configuration is set before creating the initial commit.
  - Improved test configurations by updating Playwright test identifiers for better clarity and consistency across different project states.
  - Enhanced cleanup functions in tests to handle directory removal more robustly, preventing errors during test execution.

* fix: Resolve React hooks errors from duplicate instances in the dependency tree

* style: Format alias configuration for improved readability

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: DhanushSantosh <dhanushsantoshs05@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
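The race-condition rules that the new useBoardColumnFeatures tests below exercise can be sketched as a single column-resolution function. The column names and statuses come from the tests; this standalone resolveColumn function is a simplification of the real hook, which also handles search filtering and worktree scoping.

```typescript
// Simplified column resolution, assuming the precedence the tests verify:
// running task > explicit terminal status > recently-completed protection.
// resolveColumn is illustrative; the real logic lives in the hook.
type Column = 'backlog' | 'in_progress' | 'verified' | 'completed';

function resolveColumn(
  featureId: string,
  status: string,
  runningAutoTasksAllWorktrees: Set<string>,
  recentlyCompletedFeatures: Set<string>
): Column | null {
  // A running task always shows as in progress, even with a stale status.
  if (runningAutoTasksAllWorktrees.has(featureId)) return 'in_progress';
  if (status === 'in_progress') return 'in_progress';
  if (status === 'verified') return 'verified';
  if (status === 'completed') return 'completed';
  // Remaining statuses (backlog, ready, interrupted, merge_conflict) map to
  // backlog, unless the feature just completed and the cache is stale; then
  // it is hidden to avoid a brief flash back into the backlog column.
  if (recentlyCompletedFeatures.has(featureId)) return null;
  return 'backlog';
}
```

Returning null models the "appears in no column until the cache refreshes" behavior that the mixed-scenario test asserts.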
This commit is contained in:
438
apps/ui/tests/unit/hooks/use-board-column-features.test.ts
Normal file
@@ -0,0 +1,438 @@
/**
 * Unit tests for useBoardColumnFeatures hook
 * These tests verify the column filtering logic, including the race condition
 * protection for recently completed features appearing in backlog.
 */

import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
import { renderHook, act } from '@testing-library/react';
import { useBoardColumnFeatures } from '../../../src/components/views/board-view/hooks/use-board-column-features';
import { useAppStore } from '../../../src/store/app-store';
import type { Feature } from '@automaker/types';

// Helper to create mock features
function createMockFeature(id: string, status: string, options: Partial<Feature> = {}): Feature {
  return {
    id,
    title: `Feature ${id}`,
    category: 'test',
    description: `Description for ${id}`,
    status,
    ...options,
  };
}

describe('useBoardColumnFeatures', () => {
  const defaultProps = {
    features: [] as Feature[],
    runningAutoTasks: [] as string[],
    runningAutoTasksAllWorktrees: [] as string[],
    searchQuery: '',
    currentWorktreePath: null as string | null,
    currentWorktreeBranch: null as string | null,
    projectPath: '/test/project' as string | null,
  };

  beforeEach(() => {
    // Reset store state
    useAppStore.setState({
      recentlyCompletedFeatures: new Set<string>(),
    });
    // Suppress console.debug in tests
    vi.spyOn(console, 'debug').mockImplementation(() => {});
  });

  afterEach(() => {
    vi.restoreAllMocks();
  });

  describe('basic column mapping', () => {
    it('should map backlog features to backlog column', () => {
      const features = [createMockFeature('feat-1', 'backlog')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
        })
      );

      expect(result.current.columnFeaturesMap.backlog).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog[0].id).toBe('feat-1');
    });

    it('should map merge_conflict features to backlog column', () => {
      const features = [createMockFeature('feat-1', 'merge_conflict')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
        })
      );

      expect(result.current.columnFeaturesMap.backlog).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog[0].id).toBe('feat-1');
    });

    it('should map in_progress features to in_progress column', () => {
      const features = [createMockFeature('feat-1', 'in_progress')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
        })
      );

      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(1);
      expect(result.current.columnFeaturesMap.in_progress[0].id).toBe('feat-1');
    });

    it('should map verified features to verified column', () => {
      const features = [createMockFeature('feat-1', 'verified')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
        })
      );

      expect(result.current.columnFeaturesMap.verified).toHaveLength(1);
      expect(result.current.columnFeaturesMap.verified[0].id).toBe('feat-1');
    });

    it('should map completed features to completed column', () => {
      const features = [createMockFeature('feat-1', 'completed')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
        })
      );

      expect(result.current.columnFeaturesMap.completed).toHaveLength(1);
      expect(result.current.columnFeaturesMap.completed[0].id).toBe('feat-1');
    });
  });

  describe('race condition protection for running tasks', () => {
    it('should place running features in in_progress even if status is backlog', () => {
      const features = [createMockFeature('feat-1', 'backlog')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: ['feat-1'],
        })
      );

      // Should be in in_progress due to running task protection
      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });

    it('should place running ready features in in_progress', () => {
      const features = [createMockFeature('feat-1', 'ready')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: ['feat-1'],
        })
      );

      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });

    it('should place running interrupted features in in_progress', () => {
      const features = [createMockFeature('feat-1', 'interrupted')];

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: ['feat-1'],
        })
      );

      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });
  });

  describe('recently completed features race condition protection', () => {
    it('should NOT place recently completed features in backlog (stale cache race condition)', () => {
      const features = [createMockFeature('feat-1', 'backlog')];

      // Simulate the race condition: feature just completed but cache still has status=backlog
      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          // Feature is no longer in running tasks (was just removed)
          runningAutoTasksAllWorktrees: [],
        })
      );

      // Feature should NOT appear in backlog due to race condition protection
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
      // And not in in_progress since it's not running
      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(0);
    });

    it('should allow recently completed features with verified status to go to verified column', () => {
      const features = [createMockFeature('feat-1', 'verified')];

      // Feature is both recently completed AND has correct status
      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: [],
        })
      );

      // Feature should be in verified (status takes precedence)
      expect(result.current.columnFeaturesMap.verified).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });

    it('should protect multiple recently completed features from appearing in backlog', () => {
      const features = [
        createMockFeature('feat-1', 'backlog'),
        createMockFeature('feat-2', 'backlog'),
        createMockFeature('feat-3', 'backlog'),
      ];

      // Multiple features just completed but cache has stale status
      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1', 'feat-2', 'feat-3']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: [],
        })
      );

      // None should appear in backlog
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });

    it('should only protect recently completed features, not all backlog features', () => {
      const features = [
        createMockFeature('feat-completed', 'backlog'), // Recently completed
        createMockFeature('feat-normal', 'backlog'), // Normal backlog feature
      ];

      // Only one feature is recently completed
      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-completed']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: [],
        })
      );

      // Normal feature should still appear in backlog
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog[0].id).toBe('feat-normal');
    });

    it('should protect ready status features that are recently completed', () => {
      const features = [createMockFeature('feat-1', 'ready')];

      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: [],
        })
      );

      // Should not appear in backlog (ready normally goes to backlog)
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });

    it('should protect interrupted status features that are recently completed', () => {
      const features = [createMockFeature('feat-1', 'interrupted')];

      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: [],
        })
      );

      // Should not appear in backlog (interrupted normally goes to backlog)
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });
  });

  describe('recently completed features clearing on cache refresh', () => {
    it('should clear recently completed features when features list updates with terminal status', async () => {
      const {
        addRecentlyCompletedFeature,
        clearRecentlyCompletedFeatures: _clearRecentlyCompletedFeatures,
      } = useAppStore.getState();

      // Add feature to recently completed
      act(() => {
        addRecentlyCompletedFeature('feat-1');
      });

      expect(useAppStore.getState().recentlyCompletedFeatures.has('feat-1')).toBe(true);

      // Simulate cache refresh with updated feature status
      const features = [createMockFeature('feat-1', 'verified')];

      const { rerender } = renderHook((props) => useBoardColumnFeatures(props), {
        initialProps: {
          ...defaultProps,
          features: [],
        },
      });

      // Rerender with the new features (simulating cache refresh)
      rerender({
        ...defaultProps,
        features,
      });

      // The useEffect should detect that feat-1 now has verified status
      // and clear the recentlyCompletedFeatures
      // Note: This happens asynchronously in the useEffect
      await vi.waitFor(() => {
        expect(useAppStore.getState().recentlyCompletedFeatures.has('feat-1')).toBe(false);
      });
    });

    it('should clear recently completed when completed status is detected', async () => {
      const { addRecentlyCompletedFeature } = useAppStore.getState();

      act(() => {
        addRecentlyCompletedFeature('feat-1');
      });

      const features = [createMockFeature('feat-1', 'completed')];

      renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
        })
      );

      await vi.waitFor(() => {
        expect(useAppStore.getState().recentlyCompletedFeatures.has('feat-1')).toBe(false);
      });
    });
  });

  describe('combined running task and recently completed protection', () => {
    it('should prioritize running task protection over recently completed for same feature', () => {
      const features = [createMockFeature('feat-1', 'backlog')];

      // Feature is both in running tasks AND recently completed
      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: ['feat-1'],
        })
      );

      // Running task protection should win - feature goes to in_progress
      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(0);
    });

    it('should handle mixed scenario with running, recently completed, and normal features', () => {
      const features = [
        createMockFeature('feat-running', 'backlog'), // Running but status stale
        createMockFeature('feat-completed', 'backlog'), // Just completed but status stale
        createMockFeature('feat-normal', 'backlog'), // Normal backlog feature
      ];

      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-completed']),
      });

      const { result } = renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: ['feat-running'],
        })
      );

      // Running feature -> in_progress
      expect(result.current.columnFeaturesMap.in_progress).toHaveLength(1);
      expect(result.current.columnFeaturesMap.in_progress[0].id).toBe('feat-running');

      // Normal feature -> backlog
      expect(result.current.columnFeaturesMap.backlog).toHaveLength(1);
      expect(result.current.columnFeaturesMap.backlog[0].id).toBe('feat-normal');

      // Recently completed feature -> nowhere (protected from backlog flash)
      const allColumns = Object.values(result.current.columnFeaturesMap).flat();
      const completedFeature = allColumns.find((f) => f.id === 'feat-completed');
      expect(completedFeature).toBeUndefined();
    });
  });

  describe('debug logging', () => {
    it('should log debug message when recently completed feature is skipped from backlog', () => {
      const debugSpy = vi.spyOn(console, 'debug').mockImplementation(() => {});

      const features = [createMockFeature('feat-1', 'backlog')];

      useAppStore.setState({
        recentlyCompletedFeatures: new Set(['feat-1']),
      });

      renderHook(() =>
        useBoardColumnFeatures({
          ...defaultProps,
          features,
          runningAutoTasksAllWorktrees: [],
        })
      );

      expect(debugSpy).toHaveBeenCalledWith(expect.stringContaining('feat-1 recently completed'));
    });
  });
});
252
apps/ui/tests/unit/hooks/use-dev-servers.test.ts
Normal file
@@ -0,0 +1,252 @@
/**
 * Tests for useDevServers hook
 * Verifies dev server state management, server lifecycle callbacks,
 * and correct distinction between isStartingAnyDevServer and isDevServerStarting.
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';
import { renderHook, act } from '@testing-library/react';
import { useDevServers } from '../../../src/components/views/board-view/worktree-panel/hooks/use-dev-servers';
import { getElectronAPI } from '@/lib/electron';
import type { ElectronAPI } from '@/lib/electron';
import type { WorktreeInfo } from '../../../src/components/views/board-view/worktree-panel/types';

vi.mock('@/lib/electron');
vi.mock('@automaker/utils/logger', () => ({
  createLogger: () => ({
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
    debug: vi.fn(),
  }),
}));
vi.mock('sonner', () => ({
  toast: {
    success: vi.fn(),
    error: vi.fn(),
    warning: vi.fn(),
  },
}));

const mockGetElectronAPI = vi.mocked(getElectronAPI);

describe('useDevServers', () => {
  const projectPath = '/test/project';

  const createWorktree = (overrides: Partial<WorktreeInfo> = {}): WorktreeInfo => ({
    path: '/test/project/worktrees/feature-1',
    branch: 'feature/test-1',
    isMain: false,
    isCurrent: false,
    hasWorktree: true,
    ...overrides,
  });

  const mainWorktree = createWorktree({
    path: '/test/project',
    branch: 'main',
    isMain: true,
  });

  beforeEach(() => {
    vi.clearAllMocks();
    mockGetElectronAPI.mockReturnValue(null);
  });

  describe('initial state', () => {
    it('should return isStartingAnyDevServer as false initially', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));
      expect(result.current.isStartingAnyDevServer).toBe(false);
    });

    it('should return isDevServerRunning as false for any worktree initially', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));
      expect(result.current.isDevServerRunning(mainWorktree)).toBe(false);
    });

    it('should return isDevServerStarting as false for any worktree initially', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));
      expect(result.current.isDevServerStarting(mainWorktree)).toBe(false);
    });

    it('should return undefined for getDevServerInfo when no server running', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));
      expect(result.current.getDevServerInfo(mainWorktree)).toBeUndefined();
    });
  });

  describe('isDevServerStarting vs isStartingAnyDevServer', () => {
    it('isDevServerStarting should check per-worktree starting state', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));

      const worktreeA = createWorktree({
        path: '/test/worktree-a',
        branch: 'feature/a',
      });
      const worktreeB = createWorktree({
        path: '/test/worktree-b',
        branch: 'feature/b',
      });

      // Neither should be starting initially
      expect(result.current.isDevServerStarting(worktreeA)).toBe(false);
      expect(result.current.isDevServerStarting(worktreeB)).toBe(false);
    });

    it('isStartingAnyDevServer should be a single boolean for all servers', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));
      expect(typeof result.current.isStartingAnyDevServer).toBe('boolean');
    });
  });

  describe('getWorktreeKey', () => {
    it('should use projectPath for main worktree', () => {
      const { result } = renderHook(() => useDevServers({ projectPath }));

      // The main worktree should normalize to projectPath
      const mainWt = createWorktree({ isMain: true, path: '/test/project' });
      const otherWt = createWorktree({ isMain: false, path: '/test/other' });

      // Both should resolve to different keys
      expect(result.current.isDevServerRunning(mainWt)).toBe(false);
      expect(result.current.isDevServerRunning(otherWt)).toBe(false);
    });
  });

  describe('handleStartDevServer', () => {
    it('should call startDevServer API when available', async () => {
      const mockStartDevServer = vi.fn().mockResolvedValue({
        success: true,
        result: {
          worktreePath: '/test/project',
          port: 3000,
          url: 'http://localhost:3000',
        },
      });

      mockGetElectronAPI.mockReturnValue({
        worktree: {
          startDevServer: mockStartDevServer,
          listDevServers: vi.fn().mockResolvedValue({ success: true, result: { servers: [] } }),
          onDevServerLogEvent: vi.fn().mockReturnValue(vi.fn()),
        },
      } as unknown as ElectronAPI);

      const { result } = renderHook(() => useDevServers({ projectPath }));

      await act(async () => {
        await result.current.handleStartDevServer(mainWorktree);
      });

      expect(mockStartDevServer).toHaveBeenCalledWith(projectPath, projectPath);
    });

    it('should set isStartingAnyDevServer to true during start and false after completion', async () => {
      let resolveStart: (value: unknown) => void;
      const startPromise = new Promise((resolve) => {
        resolveStart = resolve;
      });
      const mockStartDevServer = vi.fn().mockReturnValue(startPromise);

      mockGetElectronAPI.mockReturnValue({
        worktree: {
          startDevServer: mockStartDevServer,
          listDevServers: vi.fn().mockResolvedValue({ success: true, result: { servers: [] } }),
          onDevServerLogEvent: vi.fn().mockReturnValue(vi.fn()),
        },
      } as unknown as ElectronAPI);

      const { result } = renderHook(() => useDevServers({ projectPath }));

      // Initially not starting
      expect(result.current.isStartingAnyDevServer).toBe(false);

      // Start server (don't await - it will hang until we resolve)
      let startDone = false;
      act(() => {
        result.current.handleStartDevServer(mainWorktree).then(() => {
          startDone = true;
        });
      });

      // Resolve the start promise
      await act(async () => {
        resolveStart!({
          success: true,
          result: { worktreePath: '/test/project', port: 3000, url: 'http://localhost:3000' },
        });
        await new Promise((r) => setTimeout(r, 10));
      });

      // After completion, should be false again
      expect(result.current.isStartingAnyDevServer).toBe(false);
      expect(startDone).toBe(true);
    });
  });

  describe('handleStopDevServer', () => {
    it('should call stopDevServer API when available', async () => {
      const mockStopDevServer = vi.fn().mockResolvedValue({
        success: true,
        result: { message: 'Dev server stopped' },
      });

      mockGetElectronAPI.mockReturnValue({
        worktree: {
          stopDevServer: mockStopDevServer,
          listDevServers: vi.fn().mockResolvedValue({ success: true, result: { servers: [] } }),
          onDevServerLogEvent: vi.fn().mockReturnValue(vi.fn()),
        },
      } as unknown as ElectronAPI);

      const { result } = renderHook(() => useDevServers({ projectPath }));

      await act(async () => {
        await result.current.handleStopDevServer(mainWorktree);
      });

      expect(mockStopDevServer).toHaveBeenCalledWith(projectPath);
    });
  });

  describe('fetchDevServers on mount', () => {
    it('should fetch running dev servers on initialization', async () => {
      const mockListDevServers = vi.fn().mockResolvedValue({
        success: true,
        result: {
          servers: [
            {
              worktreePath: '/test/project',
              port: 3000,
              url: 'http://localhost:3000',
              urlDetected: true,
            },
          ],
        },
      });

      mockGetElectronAPI.mockReturnValue({
        worktree: {
          listDevServers: mockListDevServers,
          onDevServerLogEvent: vi.fn().mockReturnValue(vi.fn()),
        },
      } as unknown as ElectronAPI);

      const { result } = renderHook(() => useDevServers({ projectPath }));

      // Wait for initial fetch
      await act(async () => {
        await new Promise((resolve) => setTimeout(resolve, 50));
      });

      expect(result.current.isDevServerRunning(mainWorktree)).toBe(true);
      expect(result.current.getDevServerInfo(mainWorktree)).toEqual(
        expect.objectContaining({
          port: 3000,
          url: 'http://localhost:3000',
          urlDetected: true,
        })
      );
    });
  });
});
apps/ui/tests/unit/hooks/use-features-cache.test.ts (Normal file, 58 lines)
@@ -0,0 +1,58 @@
import { describe, expect, it } from 'vitest';
import { sanitizePersistedFeatures } from '../../../src/hooks/queries/use-features';

describe('sanitizePersistedFeatures', () => {
  it('returns empty array for non-array values', () => {
    expect(sanitizePersistedFeatures(null)).toEqual([]);
    expect(sanitizePersistedFeatures({})).toEqual([]);
    expect(sanitizePersistedFeatures('bad')).toEqual([]);
  });

  it('drops entries without a valid id', () => {
    const sanitized = sanitizePersistedFeatures([
      null,
      {},
      { id: '' },
      { id: ' ' },
      { id: 'feature-a', description: 'valid', category: '' },
    ]);

    expect(sanitized).toHaveLength(1);
    expect(sanitized[0].id).toBe('feature-a');
  });

  it('normalizes malformed fields to safe defaults', () => {
    const sanitized = sanitizePersistedFeatures([
      {
        id: 'feature-1',
        description: 123,
        category: null,
        status: 'not-a-real-status',
        steps: ['first', 2, 'third'],
      },
    ]);

    expect(sanitized).toEqual([
      {
        id: 'feature-1',
        description: '',
        category: '',
        status: 'backlog',
        steps: ['first', 'third'],
        title: undefined,
        titleGenerating: undefined,
        branchName: undefined,
      },
    ]);
  });

  it('keeps valid static and pipeline statuses', () => {
    const sanitized = sanitizePersistedFeatures([
      { id: 'feature-static', description: '', category: '', status: 'in_progress' },
      { id: 'feature-pipeline', description: '', category: '', status: 'pipeline_tests' },
    ]);

    expect(sanitized[0].status).toBe('in_progress');
    expect(sanitized[1].status).toBe('pipeline_tests');
  });
});
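The tests above pin down the sanitizer's contract: reject non-arrays, drop entries with a blank id, coerce malformed fields to safe defaults, and accept both static and `pipeline_`-prefixed statuses. A minimal sketch of a function satisfying that contract follows; the field names match the tests, but the exact status whitelist and types are assumptions, not the real `use-features` implementation.

```typescript
interface PersistedFeature {
  id: string;
  description: string;
  category: string;
  status: string;
  steps: string[];
  title?: string;
  titleGenerating?: boolean;
  branchName?: string;
}

// Assumed whitelist; the real module may define more statuses.
const VALID_STATUSES = new Set(['backlog', 'in_progress', 'done']);

function sanitizePersistedFeatures(raw: unknown): PersistedFeature[] {
  if (!Array.isArray(raw)) return [];
  const out: PersistedFeature[] = [];
  for (const entry of raw) {
    if (typeof entry !== 'object' || entry === null) continue;
    const e = entry as Record<string, unknown>;
    // Drop entries whose id is missing, non-string, or blank after trimming.
    if (typeof e.id !== 'string' || e.id.trim() === '') continue;
    // Accept known statuses plus pipeline_* statuses; default everything else.
    const status =
      typeof e.status === 'string' &&
      (VALID_STATUSES.has(e.status) || e.status.startsWith('pipeline_'))
        ? e.status
        : 'backlog';
    out.push({
      id: e.id,
      description: typeof e.description === 'string' ? e.description : '',
      category: typeof e.category === 'string' ? e.category : '',
      status,
      steps: Array.isArray(e.steps)
        ? e.steps.filter((s): s is string => typeof s === 'string')
        : [],
      title: typeof e.title === 'string' ? e.title : undefined,
      titleGenerating: typeof e.titleGenerating === 'boolean' ? e.titleGenerating : undefined,
      branchName: typeof e.branchName === 'string' ? e.branchName : undefined,
    });
  }
  return out;
}
```

The key design point the tests enforce is that persisted cache data is untrusted input: every field is validated individually rather than the entry being accepted or rejected wholesale.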
apps/ui/tests/unit/hooks/use-guided-prompts.test.ts (Normal file, 209 lines)
@@ -0,0 +1,209 @@
/**
 * Unit tests for useGuidedPrompts hook
 * Tests memoization of prompts and categories arrays to ensure
 * they maintain referential stability when underlying data hasn't changed.
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';
import { renderHook } from '@testing-library/react';

// Mock the queries module
vi.mock('@/hooks/queries', () => ({
  useIdeationPrompts: vi.fn(),
}));

// Must import after mock setup
import { useGuidedPrompts } from '../../../src/hooks/use-guided-prompts';
import { useIdeationPrompts } from '@/hooks/queries';

const mockUseIdeationPrompts = vi.mocked(useIdeationPrompts);

describe('useGuidedPrompts', () => {
  const mockPrompts = [
    { id: 'p1', category: 'feature' as const, title: 'Prompt 1', prompt: 'Do thing 1' },
    { id: 'p2', category: 'bugfix' as const, title: 'Prompt 2', prompt: 'Do thing 2' },
  ];

  const mockCategories = [
    { id: 'feature' as const, label: 'Feature', description: 'Feature prompts' },
    { id: 'bugfix' as const, label: 'Bug Fix', description: 'Bug fix prompts' },
  ];

  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('should return empty arrays when data is undefined', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: undefined,
      isLoading: true,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    expect(result.current.prompts).toEqual([]);
    expect(result.current.categories).toEqual([]);
    expect(result.current.isLoading).toBe(true);
  });

  it('should return prompts and categories when data is available', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: { prompts: mockPrompts, categories: mockCategories },
      isLoading: false,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    expect(result.current.prompts).toEqual(mockPrompts);
    expect(result.current.categories).toEqual(mockCategories);
    expect(result.current.isLoading).toBe(false);
  });

  it('should memoize prompts array reference when data has not changed', () => {
    const stableData = { prompts: mockPrompts, categories: mockCategories };

    mockUseIdeationPrompts.mockReturnValue({
      data: stableData,
      isLoading: false,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result, rerender } = renderHook(() => useGuidedPrompts());

    const firstPrompts = result.current.prompts;
    const firstCategories = result.current.categories;

    // Re-render with same data
    rerender();

    // References should be stable (same object, not a new empty array on each render)
    expect(result.current.prompts).toBe(firstPrompts);
    expect(result.current.categories).toBe(firstCategories);
  });

  it('should update prompts reference when data.prompts changes', () => {
    const refetchFn = vi.fn();
    mockUseIdeationPrompts.mockReturnValue({
      data: { prompts: mockPrompts, categories: mockCategories },
      isLoading: false,
      error: null,
      refetch: refetchFn,
    } as ReturnType<typeof useIdeationPrompts>);

    const { result, rerender } = renderHook(() => useGuidedPrompts());

    const firstPrompts = result.current.prompts;

    // Update with new prompts array
    const newPrompts = [
      ...mockPrompts,
      { id: 'p3', category: 'feature' as const, title: 'Prompt 3', prompt: 'Do thing 3' },
    ];
    mockUseIdeationPrompts.mockReturnValue({
      data: { prompts: newPrompts, categories: mockCategories },
      isLoading: false,
      error: null,
      refetch: refetchFn,
    } as ReturnType<typeof useIdeationPrompts>);

    rerender();

    // Reference should be different since data.prompts changed
    expect(result.current.prompts).not.toBe(firstPrompts);
    expect(result.current.prompts).toEqual(newPrompts);
  });

  it('should filter prompts by category', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: { prompts: mockPrompts, categories: mockCategories },
      isLoading: false,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    const featurePrompts = result.current.getPromptsByCategory('feature' as const);
    expect(featurePrompts).toHaveLength(1);
    expect(featurePrompts[0].id).toBe('p1');
  });

  it('should find prompt by id', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: { prompts: mockPrompts, categories: mockCategories },
      isLoading: false,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    expect(result.current.getPromptById('p2')?.title).toBe('Prompt 2');
    expect(result.current.getPromptById('nonexistent')).toBeUndefined();
  });

  it('should find category by id', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: { prompts: mockPrompts, categories: mockCategories },
      isLoading: false,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    expect(result.current.getCategoryById('feature' as const)?.label).toBe('Feature');
    expect(result.current.getCategoryById('nonexistent' as never)).toBeUndefined();
  });

  it('should convert error to string', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: undefined,
      isLoading: false,
      error: new Error('Test error'),
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    expect(result.current.error).toBe('Test error');
  });

  it('should return null error when no error', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: undefined,
      isLoading: false,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result } = renderHook(() => useGuidedPrompts());

    expect(result.current.error).toBeNull();
  });

  it('should memoize empty arrays when data is undefined across renders', () => {
    mockUseIdeationPrompts.mockReturnValue({
      data: undefined,
      isLoading: true,
      error: null,
      refetch: vi.fn(),
    } as ReturnType<typeof useIdeationPrompts>);

    const { result, rerender } = renderHook(() => useGuidedPrompts());

    const firstPrompts = result.current.prompts;
    const firstCategories = result.current.categories;

    rerender();

    // Empty arrays should be referentially stable too (via useMemo)
    expect(result.current.prompts).toBe(firstPrompts);
    expect(result.current.categories).toBe(firstCategories);
  });
});
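Two behaviors these tests pin down can be sketched without React: the lookup helpers are plain filter/find over the prompts array, and the "empty" fallback reuses a single shared array so that undefined data always maps to the identical reference (the same effect the hook achieves with `useMemo`). Names and types here are illustrative assumptions, not the hook's actual implementation.

```typescript
interface GuidedPrompt {
  id: string;
  category: string;
  title: string;
  prompt: string;
}

// One shared frozen array gives referential stability across calls,
// mirroring what useMemo guarantees across re-renders.
const EMPTY_PROMPTS: readonly GuidedPrompt[] = Object.freeze([]);

function selectPrompts(data?: { prompts: GuidedPrompt[] }): readonly GuidedPrompt[] {
  return data?.prompts ?? EMPTY_PROMPTS;
}

function getPromptsByCategory(
  prompts: readonly GuidedPrompt[],
  category: string
): GuidedPrompt[] {
  return prompts.filter((p) => p.category === category);
}

function getPromptById(
  prompts: readonly GuidedPrompt[],
  id: string
): GuidedPrompt | undefined {
  return prompts.find((p) => p.id === id);
}
```

Returning a fresh `[]` on every call (or render) would break consumers that compare by reference, which is exactly what the memoization tests guard against.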
apps/ui/tests/unit/hooks/use-media-query.test.ts (Normal file, 240 lines)
@@ -0,0 +1,240 @@
/**
 * Unit tests for useMediaQuery, useIsMobile, useIsTablet, and useIsCompact hooks
 * These tests verify the responsive detection behavior for terminal shortcuts bar
 */

import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { renderHook, act } from '@testing-library/react';
import {
  useMediaQuery,
  useIsMobile,
  useIsTablet,
  useIsCompact,
} from '../../../src/hooks/use-media-query.ts';

/**
 * Creates a mock matchMedia implementation for testing
 * @param matchingQuery - The query that should match. If null, no queries match.
 */
function createMatchMediaMock(matchingQuery: string | null = null) {
  return vi.fn().mockImplementation((query: string) => ({
    matches: matchingQuery !== null && query === matchingQuery,
    media: query,
    onchange: null,
    addListener: vi.fn(),
    removeListener: vi.fn(),
    addEventListener: vi.fn(),
    removeEventListener: vi.fn(),
    dispatchEvent: vi.fn(),
  }));
}

/**
 * Creates a mock matchMedia that tracks event listeners for testing cleanup
 */
function createTrackingMatchMediaMock() {
  const listeners: Array<(e: MediaQueryListEvent) => void> = [];
  return {
    matchMedia: vi.fn().mockImplementation((query: string) => ({
      matches: false,
      media: query,
      onchange: null,
      addListener: vi.fn(),
      removeListener: vi.fn(),
      addEventListener: vi.fn((_event: string, listener: (e: MediaQueryListEvent) => void) => {
        listeners.push(listener);
      }),
      removeEventListener: vi.fn((_event: string, listener: (e: MediaQueryListEvent) => void) => {
        const index = listeners.indexOf(listener);
        if (index > -1) listeners.splice(index, 1);
      }),
      dispatchEvent: vi.fn(),
    })),
    listeners,
  };
}

/**
 * Creates a mock matchMedia that matches multiple queries (for testing viewport combinations)
 * @param queries - Array of queries that should match
 */
function createMultiQueryMatchMediaMock(queries: string[] = []) {
  return vi.fn().mockImplementation((query: string) => ({
    matches: queries.includes(query),
    media: query,
    onchange: null,
    addListener: vi.fn(),
    removeListener: vi.fn(),
    addEventListener: vi.fn(),
    removeEventListener: vi.fn(),
    dispatchEvent: vi.fn(),
  }));
}

describe('useMediaQuery', () => {
  let mockData: ReturnType<typeof createTrackingMatchMediaMock>;

  beforeEach(() => {
    mockData = createTrackingMatchMediaMock();
    window.matchMedia = mockData.matchMedia;
  });

  afterEach(() => {
    vi.clearAllMocks();
  });

  it('should return false by default', () => {
    const { result } = renderHook(() => useMediaQuery('(max-width: 768px)'));
    expect(result.current).toBe(false);
  });

  it('should return true when media query matches', () => {
    window.matchMedia = createMatchMediaMock('(max-width: 768px)');

    const { result } = renderHook(() => useMediaQuery('(max-width: 768px)'));
    expect(result.current).toBe(true);
  });

  it('should update when media query changes', () => {
    const { result } = renderHook(() => useMediaQuery('(max-width: 768px)'));

    // Initial state is false
    expect(result.current).toBe(false);

    // Simulate a media query change event
    act(() => {
      const listener = mockData.listeners[0];
      if (listener) {
        listener({ matches: true, media: '(max-width: 768px)' } as MediaQueryListEvent);
      }
    });

    expect(result.current).toBe(true);
  });

  it('should cleanup event listener on unmount', () => {
    const { unmount } = renderHook(() => useMediaQuery('(max-width: 768px)'));

    expect(mockData.listeners.length).toBe(1);

    unmount();

    expect(mockData.listeners.length).toBe(0);
  });
});

describe('useIsMobile', () => {
  afterEach(() => {
    vi.clearAllMocks();
  });

  it('should return true when viewport is <= 768px', () => {
    window.matchMedia = createMatchMediaMock('(max-width: 768px)');

    const { result } = renderHook(() => useIsMobile());
    expect(result.current).toBe(true);
  });

  it('should return false when viewport is > 768px', () => {
    window.matchMedia = createMatchMediaMock(null);

    const { result } = renderHook(() => useIsMobile());
    expect(result.current).toBe(false);
  });
});

describe('useIsTablet', () => {
  afterEach(() => {
    vi.clearAllMocks();
  });

  it('should return true when viewport is <= 1024px (tablet or smaller)', () => {
    window.matchMedia = createMatchMediaMock('(max-width: 1024px)');

    const { result } = renderHook(() => useIsTablet());
    expect(result.current).toBe(true);
  });

  it('should return false when viewport is > 1024px (desktop)', () => {
    window.matchMedia = createMatchMediaMock(null);

    const { result } = renderHook(() => useIsTablet());
    expect(result.current).toBe(false);
  });
});

describe('useIsCompact', () => {
  afterEach(() => {
    vi.clearAllMocks();
  });

  it('should return true when viewport is <= 1240px', () => {
    window.matchMedia = createMatchMediaMock('(max-width: 1240px)');

    const { result } = renderHook(() => useIsCompact());
    expect(result.current).toBe(true);
  });

  it('should return false when viewport is > 1240px', () => {
    window.matchMedia = createMatchMediaMock(null);

    const { result } = renderHook(() => useIsCompact());
    expect(result.current).toBe(false);
  });
});

describe('Responsive Viewport Combinations', () => {
  // Test the logic that TerminalPanel uses: showShortcutsBar = isMobile || isTablet

  afterEach(() => {
    vi.clearAllMocks();
  });

  it('should show shortcuts bar on mobile viewport (< 768px)', () => {
    // Mobile: matches both mobile and tablet queries (since 768px < 1024px)
    window.matchMedia = createMultiQueryMatchMediaMock([
      '(max-width: 768px)',
      '(max-width: 1024px)',
    ]);

    const { result: mobileResult } = renderHook(() => useIsMobile());
    const { result: tabletResult } = renderHook(() => useIsTablet());

    // Mobile is always tablet (since 768px < 1024px)
    expect(mobileResult.current).toBe(true);
    expect(tabletResult.current).toBe(true);

    // showShortcutsBar = isMobile || isTablet = true
    expect(mobileResult.current || tabletResult.current).toBe(true);
  });

  it('should show shortcuts bar on tablet viewport (768px - 1024px)', () => {
    // Tablet: matches tablet query but not mobile (viewport > 768px but <= 1024px)
    window.matchMedia = createMultiQueryMatchMediaMock(['(max-width: 1024px)']);

    const { result: mobileResult } = renderHook(() => useIsMobile());
    const { result: tabletResult } = renderHook(() => useIsTablet());

    // Tablet is not mobile (viewport > 768px but <= 1024px)
    expect(mobileResult.current).toBe(false);
    expect(tabletResult.current).toBe(true);

    // showShortcutsBar = isMobile || isTablet = true
    expect(mobileResult.current || tabletResult.current).toBe(true);
  });

  it('should hide shortcuts bar on desktop viewport (> 1024px)', () => {
    // Desktop: matches neither mobile nor tablet
    window.matchMedia = createMultiQueryMatchMediaMock([]);

    const { result: mobileResult } = renderHook(() => useIsMobile());
    const { result: tabletResult } = renderHook(() => useIsTablet());

    // Desktop is neither mobile nor tablet
    expect(mobileResult.current).toBe(false);
    expect(tabletResult.current).toBe(false);

    // showShortcutsBar = isMobile || isTablet = false
    expect(mobileResult.current || tabletResult.current).toBe(false);
  });
});
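The hooks above delegate to `window.matchMedia`; the breakpoint logic itself reduces to pure width predicates. The thresholds come straight from the queries used in these tests (768px, 1024px, 1240px); the function names below are illustrative, not part of the real module.

```typescript
// Breakpoint predicates over a viewport width in CSS pixels.
const isMobileWidth = (width: number): boolean => width <= 768;
const isTabletWidth = (width: number): boolean => width <= 1024;
const isCompactWidth = (width: number): boolean => width <= 1240;

// TerminalPanel's rule as stated in the tests: show the shortcuts bar
// on mobile or tablet, hide it on desktop (> 1024px).
const showShortcutsBar = (width: number): boolean =>
  isMobileWidth(width) || isTabletWidth(width);
```

Because the queries nest (every mobile width is also a tablet width), `showShortcutsBar` collapses to `isTabletWidth` alone; the tests keep both checks to mirror the component's actual expression.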
apps/ui/tests/unit/hooks/use-test-runners-deps.test.ts (Normal file, 204 lines)
@@ -0,0 +1,204 @@
/**
 * Unit tests for useTestRunners hook - dependency array changes
 *
 * The lint fix removed unnecessary deps (activeSessionByWorktree, sessions)
 * from useMemo for activeSession and isRunning. These tests verify that the
 * store-level getActiveSession and isWorktreeRunning functions work correctly
 * since they are the actual deps used in the hook's useMemo.
 */

import { describe, it, expect, vi, beforeEach } from 'vitest';
import { renderHook, act } from '@testing-library/react';

// Mock the electron API
vi.mock('@/lib/electron', () => ({
  getElectronAPI: vi.fn(() => ({
    worktree: {
      onTestRunnerEvent: vi.fn(() => vi.fn()),
      getTestLogs: vi.fn(() => Promise.resolve({ success: false })),
    },
  })),
}));

// Mock the logger
vi.mock('@automaker/utils/logger', () => ({
  createLogger: () => ({
    info: vi.fn(),
    warn: vi.fn(),
    error: vi.fn(),
    debug: vi.fn(),
  }),
}));

import { useTestRunners } from '../../../src/hooks/use-test-runners';
import { useTestRunnersStore } from '../../../src/store/test-runners-store';

describe('useTestRunners - dependency changes', () => {
  beforeEach(() => {
    vi.clearAllMocks();
    // Reset the store state by clearing all sessions
    const store = useTestRunnersStore.getState();
    // Clear any existing sessions
    Object.keys(store.sessions).forEach((id) => {
      store.removeSession(id);
    });
  });

  it('should return null activeSession when no worktreePath', () => {
    const { result } = renderHook(() => useTestRunners());

    expect(result.current.activeSession).toBeNull();
    expect(result.current.isRunning).toBe(false);
  });

  it('should return null activeSession when worktreePath has no active session', () => {
    const { result } = renderHook(() => useTestRunners('/test/worktree'));

    expect(result.current.activeSession).toBeNull();
    expect(result.current.isRunning).toBe(false);
  });

  it('should return empty sessions for worktree without sessions', () => {
    const { result } = renderHook(() => useTestRunners('/test/worktree'));

    expect(result.current.sessions).toEqual([]);
  });

  it('should verify store getActiveSession works correctly', () => {
    // This verifies the store-level function that the hook's useMemo depends on
    const store = useTestRunnersStore.getState();

    // No sessions initially
    expect(store.getActiveSession('/test/worktree')).toBeNull();

    // Add a session
    store.startSession({
      sessionId: 'test-session-1',
      worktreePath: '/test/worktree',
      command: 'npm test',
      status: 'running',
      startedAt: Date.now(),
    });

    // Should find it
    const active = useTestRunnersStore.getState().getActiveSession('/test/worktree');
    expect(active).not.toBeNull();
    expect(active?.sessionId).toBe('test-session-1');
    expect(active?.status).toBe('running');
  });

  it('should verify store isWorktreeRunning works correctly', () => {
    const store = useTestRunnersStore.getState();

    // Not running initially
    expect(store.isWorktreeRunning('/test/worktree')).toBe(false);

    // Start a session
    store.startSession({
      sessionId: 'test-session-2',
      worktreePath: '/test/worktree',
      command: 'npm test',
      status: 'running',
      startedAt: Date.now(),
    });

    expect(useTestRunnersStore.getState().isWorktreeRunning('/test/worktree')).toBe(true);

    // Complete the session
    useTestRunnersStore.getState().completeSession('test-session-2', 'passed', 0, 5000);

    expect(useTestRunnersStore.getState().isWorktreeRunning('/test/worktree')).toBe(false);
  });

  it('should not return sessions from different worktrees via store', () => {
    const store = useTestRunnersStore.getState();

    // Add session for worktree-b
    store.startSession({
      sessionId: 'test-session-b',
      worktreePath: '/test/worktree-b',
      command: 'npm test',
      status: 'running',
      startedAt: Date.now(),
    });

    // worktree-a should have no active session
    const active = useTestRunnersStore.getState().getActiveSession('/test/worktree-a');
    expect(active).toBeNull();
    expect(useTestRunnersStore.getState().isWorktreeRunning('/test/worktree-a')).toBe(false);

    // worktree-b should have the session
    const activeB = useTestRunnersStore.getState().getActiveSession('/test/worktree-b');
    expect(activeB).not.toBeNull();
    expect(activeB?.sessionId).toBe('test-session-b');
  });

  it('should return error when starting without worktreePath', async () => {
    const { result } = renderHook(() => useTestRunners());

    let startResult: { success: boolean; error?: string };
    await act(async () => {
      startResult = await result.current.start();
    });

    expect(startResult!.success).toBe(false);
    expect(startResult!.error).toBe('No worktree path provided');
  });

  it('should start a test run via the start action', async () => {
    const mockStartTests = vi.fn().mockResolvedValue({
      success: true,
      result: { sessionId: 'new-session' },
    });

    const { getElectronAPI } = await import('@/lib/electron');
    vi.mocked(getElectronAPI).mockReturnValue({
      worktree: {
        onTestRunnerEvent: vi.fn(() => vi.fn()),
        getTestLogs: vi.fn(() => Promise.resolve({ success: false })),
        startTests: mockStartTests,
      },
    } as ReturnType<typeof getElectronAPI>);

    const { result } = renderHook(() => useTestRunners('/test/worktree'));

    let startResult: { success: boolean; sessionId?: string };
    await act(async () => {
      startResult = await result.current.start();
    });

    expect(startResult!.success).toBe(true);
    expect(startResult!.sessionId).toBe('new-session');
  });

  it('should clear session history for a worktree', () => {
    const store = useTestRunnersStore.getState();

    // Add sessions for two worktrees
    store.startSession({
      sessionId: 'session-a',
      worktreePath: '/test/worktree-a',
      command: 'npm test',
      status: 'running',
      startedAt: Date.now(),
    });
    store.startSession({
      sessionId: 'session-b',
      worktreePath: '/test/worktree-b',
      command: 'npm test',
      status: 'running',
      startedAt: Date.now(),
    });

    const { result } = renderHook(() => useTestRunners('/test/worktree-a'));

    act(() => {
      result.current.clearHistory();
    });

    // worktree-a sessions should be cleared
    expect(useTestRunnersStore.getState().getActiveSession('/test/worktree-a')).toBeNull();
    // worktree-b sessions should still exist
    expect(useTestRunnersStore.getState().getActiveSession('/test/worktree-b')).not.toBeNull();
  });
});
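The store selectors these tests lean on (`getActiveSession`, `isWorktreeRunning`) can be sketched as pure functions over a sessions record. The session shape and the `'running'` status value are assumptions inferred from the tests, not the real store module.

```typescript
interface TestSession {
  sessionId: string;
  worktreePath: string;
  command: string;
  status: 'running' | 'passed' | 'failed';
  startedAt: number;
}

// Find the running session for a worktree, if any; completed sessions
// (passed/failed) are never "active".
function getActiveSession(
  sessions: Record<string, TestSession>,
  worktreePath: string
): TestSession | null {
  return (
    Object.values(sessions).find(
      (s) => s.worktreePath === worktreePath && s.status === 'running'
    ) ?? null
  );
}

function isWorktreeRunning(
  sessions: Record<string, TestSession>,
  worktreePath: string
): boolean {
  return getActiveSession(sessions, worktreePath) !== null;
}
```

Because both selectors derive everything from the sessions record, the hook's `useMemo` only needs the selector functions as dependencies, which is what the lint fix described in the file header relies on.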