Compare commits

..

67 Commits

Author SHA1 Message Date
Test User
c7ebdb1f80 feat: Implement API key authentication with rate limiting and secure comparison
- Added rate limiting to the authentication middleware to prevent brute-force attacks.
- Introduced a secure comparison function to mitigate timing attacks during API key validation.
- Created a new rate limiter class to track failed authentication attempts and block requests after exceeding the maximum allowed failures.
- Updated the authentication middleware to handle rate limiting and secure key comparison.
- Enhanced error handling for rate-limited requests, providing appropriate responses to clients.
2025-12-24 14:49:47 -05:00
Web Dev Cody
97af998066 Merge pull request #250 from AutoMaker-Org/feat/convert-issues-to-task
feat: ability to analyze github issues with ai with confidence / task creation
2025-12-23 22:34:18 -05:00
Web Dev Cody
44e341ab41 Merge pull request #256 from AutoMaker-Org/feat/improve-ai-suggestions-ui
feat: Improve ai suggestion output ui
2025-12-23 22:33:53 -05:00
Kacper
38addacf1e refactor: Enhance fetchIssues logic with mounted state checks
- Introduced a useRef hook to track component mount status, preventing state updates on unmounted components.
- Updated fetchIssues function to conditionally set state only if the component is still mounted, improving reliability during asynchronous operations.
- Ensured proper cleanup in useEffect to maintain accurate mounted state, enhancing overall component stability.
2025-12-24 02:31:56 +01:00
Kacper
a85e1aaa89 refactor: Simplify validation handling in GitHubIssuesView
- Removed the isValidating prop from GitHubIssuesView and ValidationDialog components to streamline validation logic.
- Updated handleValidateIssue function to eliminate unnecessary dialog options, focusing on background validation notifications.
- Enhanced user feedback by notifying users when validation starts, improving overall experience during issue analysis.
2025-12-24 02:30:36 +01:00
Kacper
dd86e987a4 feat: Introduce ErrorState and LoadingState components for improved UI feedback
- Added ErrorState component to display error messages with retry functionality, enhancing user experience during issue loading failures.
- Implemented LoadingState component to provide visual feedback while issues are being fetched, improving the overall responsiveness of the GitHubIssuesView.
- Refactored GitHubIssuesView to utilize the new components, streamlining error and loading handling logic.
2025-12-24 02:23:12 +01:00
Kacper
6cd2898923 feat: Improve GitHubIssuesView with stable event handler references
- Introduced refs for selected issue and validation dialog state to prevent unnecessary re-subscribing on state changes.
- Added cleanup logic to ensure proper handling of asynchronous operations during component unmounting.
- Enhanced error handling in validation loading functions to only log errors if the component is still mounted, improving reliability.
2025-12-24 02:19:03 +01:00
Kacper
7fec9e7c5c chore: remove old app folder ? 2025-12-24 02:18:52 +01:00
Kacper
2c9a3c5161 feat: Refactor validation logic in GitHubIssuesView for improved clarity
- Simplified the validation staleness check by introducing a dedicated variable for stale validation status.
- Enhanced the conditions for unviewed and viewed validation indicators, improving user feedback on validation status.
- Added a visual indicator for viewed validations, enhancing the user interface and experience.
2025-12-24 02:12:22 +01:00
Kacper
bb3b1960c5 feat: Enhance GitHubIssuesView with AI profile and worktree integration
- Added support for default AI profile retrieval and integration into task creation, improving user experience in task management.
- Implemented current branch detection based on selected worktree, ensuring accurate context for issue handling.
- Updated fetchIssues function dependencies to include new profile and branch data, enhancing task creation logic.
2025-12-24 02:07:33 +01:00
Kacper
7007a8aa66 feat: Add ConfirmDialog component and integrate into GitHubIssuesView
- Introduced a new ConfirmDialog component for user confirmation prompts.
- Integrated ConfirmDialog into GitHubIssuesView to confirm re-validation of issues, enhancing user interaction and decision-making.
- Updated handleValidateIssue function to support re-validation options, improving flexibility in issue validation handling.
2025-12-24 01:53:40 +01:00
Kacper
1ff617703c fix: Update fetchLinkedPRs to prevent shell injection vulnerabilities
- Modified the fetchLinkedPRs function to use JSON.stringify for the request body, ensuring safe input handling when spawning the GitHub CLI command.
- Changed the command to read the query from stdin using the --input flag, enhancing security against shell injection risks.
2025-12-24 01:41:05 +01:00
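The pattern this commit describes (passing the GraphQL request body to the GitHub CLI over stdin rather than interpolating it into a shell string) looks roughly like the sketch below. The helper name and error handling are illustrative rather than the project's exact code, and the `gh api graphql --input -` invocation is an assumption based on gh's documented stdin convention.

```typescript
import { spawn } from 'child_process';

// Sketch only: send the GraphQL body via stdin so untrusted values never
// touch a shell command line.
function runGhGraphql(query: string, variables: Record<string, unknown>): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn('gh', ['api', 'graphql', '--input', '-']);
    let stdout = '';
    let stderr = '';
    child.stdout.on('data', (chunk) => (stdout += chunk));
    child.stderr.on('data', (chunk) => (stderr += chunk));
    child.on('error', reject);
    child.on('close', (code) => {
      if (code === 0) resolve(stdout);
      else reject(new Error(`gh exited with code ${code}: ${stderr}`));
    });
    // JSON.stringify gives safe, well-formed input regardless of issue titles, etc.
    child.stdin.write(JSON.stringify({ query, variables }));
    child.stdin.end();
  });
}
```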
Kacper
0461045767 Changes from feat/improve-ai-suggestions-ui 2025-12-24 01:02:49 +01:00
Kacper
7b61a274e5 fix: Prevent race condition in unviewed validations count update
- Added a guard to ensure the unviewed count is only updated if the current project matches the reference, preventing potential race conditions during state updates.
2025-12-23 23:28:02 +01:00
Kacper
ef8eaa0463 feat: Add unit tests for validation storage functionality
- Introduced comprehensive unit tests for the validation storage module, covering functions such as writeValidation, readValidation, getAllValidations, deleteValidation, and others.
- Implemented tests to ensure correct behavior for validation creation, retrieval, deletion, and freshness checks.
- Enhanced test coverage for edge cases, including handling of non-existent validations and directory structure validation.
2025-12-23 23:17:41 +01:00
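For context, a unit test in the style this commit describes might look like the sketch below. Vitest is the server test runner per the repository docs; the import path and the exact StoredValidation shape are assumptions, while isValidationStale and its 24-hour TTL appear in the validation-storage diff later in this comparison.

```typescript
import { describe, expect, it } from 'vitest';
// Import path is assumed; adjust to wherever validation-storage lives in the server app.
import { isValidationStale } from '../src/lib/validation-storage.js';
import type { StoredValidation } from '@automaker/types';

describe('isValidationStale', () => {
  it('treats a 25-hour-old validation as stale (TTL is 24 hours)', () => {
    const stale = {
      issueNumber: 1,
      validatedAt: new Date(Date.now() - 25 * 60 * 60 * 1000).toISOString(),
    } as StoredValidation;
    expect(isValidationStale(stale)).toBe(true);
  });

  it('treats a freshly written validation as not stale', () => {
    const fresh = {
      issueNumber: 2,
      validatedAt: new Date().toISOString(),
    } as StoredValidation;
    expect(isValidationStale(fresh)).toBe(false);
  });
});
```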
Kacper
65319f93b4 feat: Improve GitHub issues view with validation indicators and Markdown support
- Added an `isValidating` prop to the `IssueRow` component to indicate ongoing validation for issues.
- Introduced a visual indicator for validation in progress, enhancing user feedback during analysis.
- Updated the `ValidationDialog` to render validation reasoning and suggested fixes using Markdown for better formatting and readability.
2025-12-23 22:49:37 +01:00
Kacper
dd27c5c4fb feat: Enhance validation viewing functionality with event emission
- Updated the `createMarkViewedHandler` to emit an event when a validation is marked as viewed, allowing the UI to update the unviewed count dynamically.
- Modified the `useUnviewedValidations` hook to handle the new event type for decrementing the unviewed validations count.
- Introduced a new event type `issue_validation_viewed` in the issue validation event type definition for better event handling.
2025-12-23 22:25:48 +01:00
Kacper
d1418aa054 feat: Implement stale validation cleanup and improve GitHub issue handling
- Added a scheduled task to clean up stale validation entries every hour, preventing memory leaks.
- Enhanced the `getAllValidations` function to read validation files in parallel for improved performance.
- Updated the `fetchLinkedPRs` function to use `spawn` for safer execution of GitHub CLI commands, mitigating shell injection risks.
- Modified event handling in the GitHub issues view to utilize the model for validation, ensuring consistency and reducing stale closure issues.
- Introduced a new property in the issue validation event to track the model used for validation.
2025-12-23 22:21:08 +01:00
Kacper
0c9f05ee38 feat: Add validation viewing functionality and UI updates
- Implemented a new function to mark validations as viewed by the user, updating the validation state accordingly.
- Added a new API endpoint for marking validations as viewed, integrated with the existing GitHub routes.
- Enhanced the sidebar to display the count of unviewed validations, providing real-time updates.
- Updated the GitHub issues view to mark validations as viewed when issues are accessed, improving user interaction.
- Introduced a visual indicator for unviewed validations in the issue list, enhancing user awareness of pending validations.
2025-12-23 22:11:26 +01:00
Web Dev Cody
d50b15e639 Merge pull request #245 from illia1f/feature/project-picker-scroll
feat(ProjectSelector): add auto-scroll and improved UX for project picker
2025-12-23 15:46:34 -05:00
Web Dev Cody
172f1a7a3f Merge pull request #251 from AutoMaker-Org/fix/list-branch-issue-on-fresh-repo
fix: branch list issue and improve ui feedback
2025-12-23 15:43:27 -05:00
Web Dev Cody
5edb38691c Merge pull request #249 from AutoMaker-Org/fix/new-project-dialog-path-overflow
fix: new project path overflow
2025-12-23 15:42:44 -05:00
Web Dev Cody
f1f149c6c0 Merge pull request #247 from AutoMaker-Org/fix/git-diff-loop
fix: git diff loop
2025-12-23 15:42:24 -05:00
Kacper
e0c5f55fe7 fix: address pr reviews 2025-12-23 21:07:36 +01:00
Kacper
4958ee1dda Changes from fix/list-branch-issue-on-fresh-repo 2025-12-23 20:46:10 +01:00
Kacper
3d00f40ea0 Changes from fix/new-project-dialog-path-overflow 2025-12-23 18:58:15 +01:00
Kacper
c9e0957dfe feat(diff): add helper function to create synthetic diffs for new files
This update introduces a new function, createNewFileDiff, to streamline the generation of synthetic diffs for untracked files. The function reduces code duplication by handling the diff formatting for new files, including directories and large files, improving overall maintainability.
2025-12-23 18:39:43 +01:00
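A synthetic diff for an untracked file is essentially a unified diff in which every line is an addition. The sketch below shows the general shape such a helper could take; it illustrates the idea and is not the repository's createNewFileDiff implementation (which the commit says also handles directories and large files).

```typescript
// Illustrative helper: build a unified diff that presents a brand-new file
// as a single hunk of added lines.
function createNewFileDiff(filePath: string, content: string): string {
  const lines = content.split('\n');
  const header = [
    `diff --git a/${filePath} b/${filePath}`,
    'new file mode 100644',
    '--- /dev/null',
    `+++ b/${filePath}`,
    `@@ -0,0 +1,${lines.length} @@`,
  ];
  // Every line of an untracked file is an addition in the synthetic diff.
  const body = lines.map((line) => `+${line}`);
  return [...header, ...body].join('\n');
}
```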
Kacper
9d4f912c93 Changes from main 2025-12-23 18:26:02 +01:00
Illia Filippov
4898a1307e refactor(ProjectSelector): enhance project picker scrollbar styling and improve selection logic 2025-12-23 18:17:12 +01:00
Kacper
6acb751eb3 feat: Implement GitHub issue validation management and UI enhancements
- Introduced CRUD operations for GitHub issue validation results, including storage and retrieval.
- Added new endpoints for checking validation status, stopping validations, and deleting stored validations.
- Enhanced the GitHub routes to support validation management features.
- Updated the UI to display validation results and manage validation states for GitHub issues.
- Integrated event handling for validation progress and completion notifications.
2025-12-23 18:15:30 +01:00
Web Dev Cody
629b7e7433 Merge pull request #244 from WikiRik/WikiRik/fix-urls
docs: update links to Claude
2025-12-23 11:54:17 -05:00
Illia Filippov
190f18ecae feat(ProjectSelector): add auto-scroll and improved UX for project picker 2025-12-23 17:45:04 +01:00
Rik Smale
e6eb5ad97e docs: update links to Claude
These links were referring to pages that do not exist anymore. I have updated them to what I think are the new URLs.
2025-12-23 16:12:52 +00:00
Kacper
5f0ecc8dd6 feat: Enhance GitHub issue handling with assignees and linked PRs
- Added support for assignees in GitHub issue data structure.
- Implemented fetching of linked pull requests for open issues using the GitHub GraphQL API.
- Updated UI to display assignees and linked PRs for selected issues.
- Adjusted issue listing commands to include assignees in the fetched data.
2025-12-23 16:57:29 +01:00
Web Dev Cody
e95912f931 Merge pull request #232 from leonvanzyl/main
fix: Open in Browser button not working on Windows
2025-12-23 10:27:27 -05:00
Web Dev Cody
eb1875f558 Merge pull request #239 from illia1f/refactor/project-selector-with-options
refactor(ProjectSelector): improve project selection logic and UI/UX
2025-12-23 10:24:59 -05:00
Web Dev Cody
c761ce8120 Merge pull request #240 from AutoMaker-Org/fix/onboarding-dialog-overflow
fix: onboarding dialog title overflowing
2025-12-23 10:14:24 -05:00
Illia Filippov
ee9cb4deec refactor(ProjectSelector): streamline project selection handling by removing unnecessary useCallback 2025-12-23 16:03:13 +01:00
Kacper
a881d175bc feat: Implement GitHub issue validation endpoint and UI integration
- Added a new endpoint for validating GitHub issues using the Claude SDK.
- Introduced validation schema and logic to handle issue validation requests.
- Updated GitHub routes to include the new validation route.
- Enhanced the UI with a validation dialog and button to trigger issue validation.
- Mapped issue complexity to feature priority for better task management.
- Integrated validation results display in the UI, allowing users to convert validated issues into tasks.
2025-12-23 15:50:10 +01:00
Kacper
17ed2be918 fix(OnboardingDialog): adjust layout for title and description to improve responsiveness 2025-12-23 14:54:45 +01:00
Illia Filippov
5a5165818e refactor(ProjectSelector): improve project selection logic and UI/UX 2025-12-23 13:44:09 +01:00
Auto
9a7d21438b fix: Open in Browser button not working on Windows
The handleOpenDevServerUrl function was looking up the dev server info using an un-normalized path, but the Map stores entries with normalized paths (forward slashes).

On Windows, paths come in as C:\Projects\foo but stored keys use C:/Projects/foo (normalized). The lookup used the raw path, so it never matched.

Fix: Use getWorktreeKey() helper which normalizes the path, consistent with how isDevServerRunning() and getDevServerInfo() already work.
2025-12-23 07:50:37 +02:00
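The fix boils down to looking the Map up with the same normalized key that was used when the entry was stored. A minimal sketch follows, with an assumed getWorktreeKey() implementation; the real helper lives in the server code and may differ in detail.

```typescript
const devServers = new Map<string, { url: string; port: number }>();

// Assumed normalization: forward slashes, so "C:\Projects\foo" and
// "C:/Projects/foo" produce the same Map key on every platform.
function getWorktreeKey(worktreePath: string): string {
  return worktreePath.replace(/\\/g, '/');
}

function getDevServerInfo(worktreePath: string) {
  // Before the fix the raw path was used here, so Windows lookups never matched.
  return devServers.get(getWorktreeKey(worktreePath));
}
```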
Test User
d4d4b8fb3d feat(TaskNode): conditionally render title and adjust description styling 2025-12-22 23:08:58 -05:00
Web Dev Cody
48955e9a71 Merge pull request #231 from stephan271c/add-pause-button
feat: Add a stop button to halt agent execution when processing.
2025-12-22 21:49:43 -05:00
Web Dev Cody
870df88cd1 Merge pull request #225 from illia1f/fix/project-picker-dropdown
fix: project picker dropdown highlights first item instead of current project
2025-12-22 21:22:35 -05:00
Web Dev Cody
7618a75d85 Merge pull request #226 from JBotwina/graph-filtering-and-node-controls
feat: Graph Filtering and Node Controls
2025-12-22 21:18:19 -05:00
Stephan Cho
51281095ea feat: Add a stop button to halt agent execution when processing. 2025-12-22 21:08:04 -05:00
Illia Filippov
50a595a8da fix(useProjectPicker): ensure project selection resets correctly when project picker is opened 2025-12-23 02:30:28 +01:00
Illia Filippov
a398367f00 refactor: simplify project index retrieval and selection logic in project picker 2025-12-23 02:06:49 +01:00
James
fe6faf9aae fix type errors 2025-12-22 19:44:48 -05:00
James
a1331ed514 fix format 2025-12-22 19:37:36 -05:00
Illia Filippov
38f2e0beea fix: ensure current project is highlighted in project picker dropdown without side effects 2025-12-23 01:36:20 +01:00
James
ef4035a462 fix lock file 2025-12-22 19:35:48 -05:00
James
cb07206dae add use ts hooks 2025-12-22 19:30:44 -05:00
James
cc0405cf27 refactor: update graph view actions to include onViewDetails and remove onViewBranch
- Added onViewDetails callback to handle feature detail viewing.
- Removed onViewBranch functionality and associated UI elements for a cleaner interface.
2025-12-22 19:30:44 -05:00
James
4dd00a98e4 add more filters about process status 2025-12-22 19:30:44 -05:00
James
b3c321ce02 add node actions 2025-12-22 19:30:44 -05:00
James
12a796bcbb branch filtering 2025-12-22 19:30:44 -05:00
James
ffcdbf7d75 fix styling of graph controls 2025-12-22 19:30:44 -05:00
Illia Filippov
e70c3b7722 fix: project picker dropdown highlights first item instead of current project 2025-12-23 00:50:21 +01:00
Web Dev Cody
524a9736b4 Merge pull request #222 from JBotwina/claude/task-dependency-graph-iPz1k
feat: task dependency graph view
2025-12-22 17:30:52 -05:00
Test User
036a7d9d26 refactor: update e2e tests to use 'load' state for page navigation
- Changed instances of `waitForLoadState('networkidle')` to `waitForLoadState('load')` across multiple test files and utility functions to improve test reliability in applications with persistent connections.
- Added documentation to the e2e testing guide explaining the rationale behind using 'load' state instead of 'networkidle' to prevent timeouts and flaky tests.
2025-12-22 17:16:55 -05:00
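In Playwright terms, the change swaps the load state the tests wait for. A minimal sketch, with a made-up route and link name purely for illustration:

```typescript
import { test, expect } from '@playwright/test';

test('navigating to the board waits for the load event', async ({ page }) => {
  await page.goto('/');
  await page.getByRole('link', { name: 'Board' }).click(); // hypothetical link
  // 'networkidle' never settles when the app keeps WebSocket/SSE connections open,
  // so the suite waits for the 'load' event instead.
  await page.waitForLoadState('load');
  await expect(page).toHaveURL(/board/);
});
```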
Test User
c4df2c141a Merge branch 'main' of github.com:AutoMaker-Org/automaker into claude/task-dependency-graph-iPz1k 2025-12-22 17:01:18 -05:00
James
7c75c24b5c fix: graph nodes now respect theme colors
Override React Flow's default node styling (white background) with
transparent to allow the TaskNode component's bg-card class to show
through with the correct theme colors.
2025-12-22 15:53:15 -05:00
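One way to apply that override is sketched below, assuming the node-level style prop is used to clear React Flow's defaults; the actual fix may use a CSS rule instead, and TaskNode plus the prop shapes are illustrative.

```tsx
import { ReactFlow, type Node, type Edge } from '@xyflow/react';
import { TaskNode } from './task-node'; // hypothetical custom node component

const nodeTypes = { task: TaskNode };

// Assumed approach: strip React Flow's default node chrome (white background,
// border) so TaskNode's own bg-card styling, driven by theme CSS variables,
// shows through.
const transparentNode = { background: 'transparent', border: 'none', padding: 0 };

export function TaskGraph({ nodes, edges }: { nodes: Node[]; edges: Edge[] }) {
  return (
    <ReactFlow
      nodes={nodes.map((n) => ({ ...n, type: 'task', style: transparentNode }))}
      edges={edges}
      nodeTypes={nodeTypes}
      fitView
    />
  );
}
```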
James
2588ecaafa Merge remote-tracking branch 'origin/main' into claude/task-dependency-graph-iPz1k 2025-12-22 15:37:24 -05:00
James Botwina
a071097c0d Merge branch 'AutoMaker-Org:main' into claude/task-dependency-graph-iPz1k 2025-12-22 14:23:18 -05:00
Claude
b930091c42 feat: add dependency graph view for task visualization
Add a new interactive graph view alongside the kanban board for visualizing
task dependencies. The graph view uses React Flow with dagre auto-layout to
display tasks as nodes connected by dependency edges.

Key features:
- Toggle between kanban and graph view via new control buttons
- Custom TaskNode component matching existing card styling/themes
- Animated edges that flow when tasks are in progress
- Status-aware node colors (backlog, in-progress, waiting, verified)
- Blocked tasks show lock icon with dependency count tooltip
- MiniMap for navigation in large graphs
- Zoom, pan, fit-view, and lock controls
- Horizontal/vertical layout options via dagre
- Click node to view details, double-click to edit
- Respects all 32 themes via CSS variables
- Reduced motion support for animations

New dependencies: @xyflow/react, dagre
2025-12-22 19:10:32 +00:00
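For readers unfamiliar with the dagre auto-layout mentioned above, the sketch below shows the usual pattern for positioning React Flow nodes with dagre. The node dimensions and spacing values are assumptions, not the values used in this PR.

```typescript
import dagre from 'dagre';
import type { Node, Edge } from '@xyflow/react';

// Illustrative layout pass: compute x/y for each task node from the dependency
// edges, then hand the positioned nodes back to React Flow.
export function layoutGraph(nodes: Node[], edges: Edge[], direction: 'LR' | 'TB' = 'LR'): Node[] {
  const g = new dagre.graphlib.Graph();
  g.setDefaultEdgeLabel(() => ({}));
  g.setGraph({ rankdir: direction, nodesep: 40, ranksep: 80 });

  for (const node of nodes) {
    // Assumed card dimensions; the real component may measure its rendered size.
    g.setNode(node.id, { width: 220, height: 96 });
  }
  for (const edge of edges) {
    g.setEdge(edge.source, edge.target);
  }

  dagre.layout(g);

  return nodes.map((node) => {
    const { x, y } = g.node(node.id);
    // dagre returns center coordinates; React Flow positions nodes by top-left corner.
    return { ...node, position: { x: x - 110, y: y - 48 } };
  });
}
```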
204 changed files with 11267 additions and 7323 deletions

View File

@@ -1,99 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Automaker is an autonomous AI development studio - a Kanban-based application where users describe features and AI agents (powered by Claude Agent SDK) automatically implement them. It runs as an Electron desktop app or in web browser mode.
## Commands
```bash
# Install dependencies
npm install
# Build shared packages (REQUIRED before running)
npm run build:packages
# Development
npm run dev # Interactive mode selector
npm run dev:electron # Electron desktop app
npm run dev:web # Web browser mode (localhost:3007)
npm run dev:server # Backend server only
# Testing
npm run test # UI E2E tests (Playwright)
npm run test:headed # UI tests with visible browser
npm run test:server # Server unit tests (Vitest)
npm run test:packages # Shared package tests
# Linting & Formatting
npm run lint # ESLint
npm run format # Prettier
npm run format:check # Check formatting
# Building
npm run build # Build Next.js app
npm run build:electron # Build Electron distribution
```
## Architecture
### Monorepo Structure (npm workspaces)
```
apps/
├── ui/ # Electron + Vite + React frontend (@automaker/ui)
└── server/ # Express + WebSocket backend (@automaker/server)
libs/ # Shared packages (@automaker/*)
├── types/ # Shared TypeScript interfaces
├── utils/ # Common utilities
├── prompts/ # AI prompt templates
├── platform/ # Platform-specific code (paths, security)
├── git-utils/ # Git operations
├── model-resolver/ # AI model configuration
└── dependency-resolver/ # Dependency management
```
### Key Patterns
**State Management (UI)**: Zustand stores in `apps/ui/src/store/`
- `app-store.ts` - Main application state (features, settings, themes)
- `setup-store.ts` - Project setup wizard state
**Routing (UI)**: TanStack Router with file-based routes in `apps/ui/src/routes/`
**Backend Services**: Express + WebSocket in `apps/server/src/`
- Services in `/services/` handle business logic
- Routes in `/routes/` define API endpoints
- Providers in `/providers/` abstract AI model integrations
**Provider Architecture**: Model-based routing via `ProviderFactory`
- `ClaudeProvider` wraps @anthropic-ai/claude-agent-sdk
- Designed for easy addition of other providers
**Feature Storage**: Features stored in `.automaker/features/{id}/feature.json`
**Communication**:
- Electron: IPC via preload script (`apps/ui/src/preload.ts`)
- Web: HTTP API client (`apps/ui/src/lib/http-api-client.ts`)
### Important Files
- `apps/ui/src/main.ts` - Electron main process
- `apps/server/src/index.ts` - Server entry point
- `apps/ui/src/lib/electron.ts` - IPC type definitions
- `apps/server/src/services/agent-service.ts` - AI agent session management
- `apps/server/src/providers/provider-factory.ts` - Model routing
## Development Notes
- Always run `npm run build:packages` after modifying any `libs/*` package
- Server runs on port 3008 by default
- UI runs on port 3007 in web mode
- Authentication: Set `ANTHROPIC_API_KEY` env var or configure via Settings

View File

@@ -61,7 +61,7 @@ Traditional development tools help you write code. Automaker helps you **orchest
### Powered by Claude Code
-Automaker leverages the [Claude Agent SDK](https://docs.anthropic.com/en/docs/claude-code) to give AI agents full access to your codebase. Agents can read files, write code, execute commands, run tests, and make git commits—all while working in isolated git worktrees to keep your main branch safe.
+Automaker leverages the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) to give AI agents full access to your codebase. Agents can read files, write code, execute commands, run tests, and make git commits—all while working in isolated git worktrees to keep your main branch safe.
### Why This Matters
@@ -106,7 +106,7 @@ https://discord.gg/jjem7aEDKU
- Node.js 18+
- npm
-- [Claude Code CLI](https://docs.anthropic.com/en/docs/claude-code) installed and authenticated
+- [Claude Code CLI](https://code.claude.com/docs/en/overview) installed and authenticated
### Quick Start

File diff suppressed because it is too large

View File

@@ -1,15 +0,0 @@
{
"name": "@automaker/server-bundle",
"version": "0.1.0",
"type": "module",
"main": "dist/index.js",
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "^0.1.61",
"cors": "^2.8.5",
"dotenv": "^17.2.3",
"express": "^5.1.0",
"morgan": "^1.10.1",
"node-pty": "1.1.0-beta41",
"ws": "^8.18.0"
}
}

View File

@@ -48,6 +48,7 @@ import { createClaudeRoutes } from './routes/claude/index.js';
import { ClaudeUsageService } from './services/claude-usage-service.js';
import { createGitHubRoutes } from './routes/github/index.js';
import { createContextRoutes } from './routes/context/index.js';
import { cleanupStaleValidations } from './routes/github/routes/validation-common.js';
// Load environment variables
dotenv.config();
@@ -123,6 +124,15 @@ const claudeUsageService = new ClaudeUsageService();
console.log('[Server] Agent service initialized');
})();
// Run stale validation cleanup every hour to prevent memory leaks from crashed validations
const VALIDATION_CLEANUP_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
setInterval(() => {
const cleaned = cleanupStaleValidations();
if (cleaned > 0) {
console.log(`[Server] Cleaned up ${cleaned} stale validation entries`);
}
}, VALIDATION_CLEANUP_INTERVAL_MS);
// Mount API routes - health is unauthenticated for monitoring
app.use('/api/health', createHealthRoutes());
@@ -147,7 +157,7 @@ app.use('/api/templates', createTemplatesRoutes());
app.use('/api/terminal', createTerminalRoutes());
app.use('/api/settings', createSettingsRoutes(settingsService));
app.use('/api/claude', createClaudeRoutes(claudeUsageService));
-app.use('/api/github', createGitHubRoutes());
+app.use('/api/github', createGitHubRoutes(events));
app.use('/api/context', createContextRoutes());
// Create HTTP server

View File

@@ -2,9 +2,30 @@
 * Authentication middleware for API security
 *
 * Supports API key authentication via header or environment variable.
 * Includes rate limiting to prevent brute-force attacks.
 */
import * as crypto from 'crypto';
import type { Request, Response, NextFunction } from 'express';
import { apiKeyRateLimiter } from './rate-limiter.js';
/**
* Performs a constant-time string comparison to prevent timing attacks.
* Uses crypto.timingSafeEqual with proper buffer handling.
*/
function secureCompare(a: string, b: string): boolean {
const bufferA = Buffer.from(a, 'utf8');
const bufferB = Buffer.from(b, 'utf8');
// If lengths differ, we still need to do a constant-time comparison
// to avoid leaking length information. We compare against bufferA twice.
if (bufferA.length !== bufferB.length) {
crypto.timingSafeEqual(bufferA, bufferA);
return false;
}
return crypto.timingSafeEqual(bufferA, bufferB);
}
// API key from environment (optional - if not set, auth is disabled)
const API_KEY = process.env.AUTOMAKER_API_KEY;
@@ -14,6 +35,7 @@ const API_KEY = process.env.AUTOMAKER_API_KEY;
 *
 * If AUTOMAKER_API_KEY is set, requires matching key in X-API-Key header.
 * If not set, allows all requests (development mode).
 * Includes rate limiting to prevent brute-force attacks.
 */
export function authMiddleware(req: Request, res: Response, next: NextFunction): void {
// If no API key is configured, allow all requests
@@ -22,6 +44,22 @@ export function authMiddleware(req: Request, res: Response, next: NextFunction):
return;
}
const clientIp = apiKeyRateLimiter.getClientIp(req);
// Check if client is rate limited
if (apiKeyRateLimiter.isBlocked(clientIp)) {
const retryAfterMs = apiKeyRateLimiter.getBlockTimeRemaining(clientIp);
const retryAfterSeconds = Math.ceil(retryAfterMs / 1000);
res.setHeader('Retry-After', retryAfterSeconds.toString());
res.status(429).json({
success: false,
error: 'Too many failed authentication attempts. Please try again later.',
retryAfter: retryAfterSeconds,
});
return;
}
// Check for API key in header
const providedKey = req.headers['x-api-key'] as string | undefined;
@@ -33,7 +71,10 @@ export function authMiddleware(req: Request, res: Response, next: NextFunction):
return;
}
-if (providedKey !== API_KEY) {
+if (!secureCompare(providedKey, API_KEY)) {
// Record failed attempt
apiKeyRateLimiter.recordFailure(clientIp);
res.status(403).json({
success: false,
error: 'Invalid API key.',
@@ -41,6 +82,9 @@ export function authMiddleware(req: Request, res: Response, next: NextFunction):
return;
}
// Successful authentication - reset rate limiter for this IP
apiKeyRateLimiter.reset(clientIp);
next();
}

View File

@@ -0,0 +1,25 @@
/**
* Enhancement Prompts - Re-exported from @automaker/prompts
*
* This file now re-exports enhancement prompts from the shared @automaker/prompts package
* to maintain backward compatibility with existing imports in the server codebase.
*/
export {
IMPROVE_SYSTEM_PROMPT,
TECHNICAL_SYSTEM_PROMPT,
SIMPLIFY_SYSTEM_PROMPT,
ACCEPTANCE_SYSTEM_PROMPT,
IMPROVE_EXAMPLES,
TECHNICAL_EXAMPLES,
SIMPLIFY_EXAMPLES,
ACCEPTANCE_EXAMPLES,
getEnhancementPrompt,
getSystemPrompt,
getExamples,
buildUserPrompt,
isValidEnhancementMode,
getAvailableEnhancementModes,
} from '@automaker/prompts';
export type { EnhancementMode, EnhancementExample } from '@automaker/prompts';

View File

@@ -1,69 +0,0 @@
/**
* Shell execution utilities
*
* Provides cross-platform shell execution with extended PATH
* to find tools like git and gh in Electron environments.
*/
import { exec } from 'child_process';
import { promisify } from 'util';
/**
* Promisified exec for async/await usage
*/
export const execAsync = promisify(exec);
/**
* Path separator for the current platform
*/
const pathSeparator = process.platform === 'win32' ? ';' : ':';
/**
* Additional paths to search for executables.
* Electron apps don't inherit the user's shell PATH, so we need to add
* common tool installation locations.
*/
const additionalPaths: string[] = [];
if (process.platform === 'win32') {
// Windows paths for Git and other tools
if (process.env.LOCALAPPDATA) {
additionalPaths.push(`${process.env.LOCALAPPDATA}\\Programs\\Git\\cmd`);
}
if (process.env.PROGRAMFILES) {
additionalPaths.push(`${process.env.PROGRAMFILES}\\Git\\cmd`);
}
if (process.env['ProgramFiles(x86)']) {
additionalPaths.push(`${process.env['ProgramFiles(x86)']}\\Git\\cmd`);
}
} else {
// Unix/Mac paths
additionalPaths.push(
'/opt/homebrew/bin', // Homebrew on Apple Silicon
'/usr/local/bin', // Homebrew on Intel Mac, common Linux location
'/home/linuxbrew/.linuxbrew/bin', // Linuxbrew
`${process.env.HOME}/.local/bin` // pipx, other user installs
);
}
/**
* Extended PATH that includes common tool installation locations.
*/
export const extendedPath = [process.env.PATH, ...additionalPaths.filter(Boolean)]
.filter(Boolean)
.join(pathSeparator);
/**
* Environment variables with extended PATH for executing shell commands.
*/
export const execEnv = {
...process.env,
PATH: extendedPath,
};
/**
* Check if an error is ENOENT (file/path not found or spawn failed)
*/
export function isENOENT(error: unknown): boolean {
return error !== null && typeof error === 'object' && 'code' in error && error.code === 'ENOENT';
}

View File

@@ -0,0 +1,208 @@
/**
* In-memory rate limiter for authentication endpoints
*
* Provides brute-force protection by tracking failed attempts per IP address.
* Blocks requests after exceeding the maximum number of failures within a time window.
*/
import type { Request, Response, NextFunction } from 'express';
interface AttemptRecord {
count: number;
firstAttempt: number;
blockedUntil: number | null;
}
interface RateLimiterConfig {
maxAttempts: number;
windowMs: number;
blockDurationMs: number;
}
const DEFAULT_CONFIG: RateLimiterConfig = {
maxAttempts: 5,
windowMs: 15 * 60 * 1000, // 15 minutes
blockDurationMs: 15 * 60 * 1000, // 15 minutes
};
/**
* Rate limiter instance that tracks attempts by a key (typically IP address)
*/
export class RateLimiter {
private attempts: Map<string, AttemptRecord> = new Map();
private config: RateLimiterConfig;
constructor(config: Partial<RateLimiterConfig> = {}) {
this.config = { ...DEFAULT_CONFIG, ...config };
}
/**
* Extract client IP address from request
* Handles proxied requests via X-Forwarded-For header
*/
getClientIp(req: Request): string {
const forwarded = req.headers['x-forwarded-for'];
if (forwarded) {
const forwardedIp = Array.isArray(forwarded) ? forwarded[0] : forwarded.split(',')[0];
return forwardedIp.trim();
}
return req.socket.remoteAddress || 'unknown';
}
/**
* Check if a key is currently rate limited
*/
isBlocked(key: string): boolean {
const record = this.attempts.get(key);
if (!record) return false;
const now = Date.now();
// Check if currently blocked
if (record.blockedUntil && now < record.blockedUntil) {
return true;
}
// Clear expired block
if (record.blockedUntil && now >= record.blockedUntil) {
this.attempts.delete(key);
return false;
}
return false;
}
/**
* Get remaining time until block expires (in milliseconds)
*/
getBlockTimeRemaining(key: string): number {
const record = this.attempts.get(key);
if (!record?.blockedUntil) return 0;
const remaining = record.blockedUntil - Date.now();
return remaining > 0 ? remaining : 0;
}
/**
* Record a failed authentication attempt
* Returns true if the key is now blocked
*/
recordFailure(key: string): boolean {
const now = Date.now();
const record = this.attempts.get(key);
if (!record) {
this.attempts.set(key, {
count: 1,
firstAttempt: now,
blockedUntil: null,
});
return false;
}
// If window has expired, reset the counter
if (now - record.firstAttempt > this.config.windowMs) {
this.attempts.set(key, {
count: 1,
firstAttempt: now,
blockedUntil: null,
});
return false;
}
// Increment counter
record.count += 1;
// Check if should be blocked
if (record.count >= this.config.maxAttempts) {
record.blockedUntil = now + this.config.blockDurationMs;
return true;
}
return false;
}
/**
* Clear a key's record (e.g., on successful authentication)
*/
reset(key: string): void {
this.attempts.delete(key);
}
/**
* Get the number of attempts remaining before block
*/
getAttemptsRemaining(key: string): number {
const record = this.attempts.get(key);
if (!record) return this.config.maxAttempts;
const now = Date.now();
// If window expired, full attempts available
if (now - record.firstAttempt > this.config.windowMs) {
return this.config.maxAttempts;
}
return Math.max(0, this.config.maxAttempts - record.count);
}
/**
* Clean up expired records to prevent memory leaks
*/
cleanup(): void {
const now = Date.now();
const keysToDelete: string[] = [];
this.attempts.forEach((record, key) => {
// Mark for deletion if block has expired
if (record.blockedUntil && now >= record.blockedUntil) {
keysToDelete.push(key);
return;
}
// Mark for deletion if window has expired and not blocked
if (!record.blockedUntil && now - record.firstAttempt > this.config.windowMs) {
keysToDelete.push(key);
}
});
keysToDelete.forEach((key) => this.attempts.delete(key));
}
}
// Shared rate limiter instances for authentication endpoints
export const apiKeyRateLimiter = new RateLimiter();
export const terminalAuthRateLimiter = new RateLimiter();
// Clean up expired records periodically (every 5 minutes)
setInterval(
() => {
apiKeyRateLimiter.cleanup();
terminalAuthRateLimiter.cleanup();
},
5 * 60 * 1000
);
/**
* Create rate limiting middleware for authentication endpoints
* This middleware checks if the request is rate limited before processing
*/
export function createRateLimitMiddleware(rateLimiter: RateLimiter) {
return (req: Request, res: Response, next: NextFunction): void => {
const clientIp = rateLimiter.getClientIp(req);
if (rateLimiter.isBlocked(clientIp)) {
const retryAfterMs = rateLimiter.getBlockTimeRemaining(clientIp);
const retryAfterSeconds = Math.ceil(retryAfterMs / 1000);
res.setHeader('Retry-After', retryAfterSeconds.toString());
res.status(429).json({
success: false,
error: 'Too many failed authentication attempts. Please try again later.',
retryAfter: retryAfterSeconds,
});
return;
}
next();
};
}
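
A possible wiring of the exported middleware, for illustration only; the route path and handler below are assumptions, and the real server mounts it wherever terminal authentication actually happens.

```typescript
import express from 'express';
import { terminalAuthRateLimiter, createRateLimitMiddleware } from './rate-limiter.js';

const app = express();

// Reject clients that are currently blocked before the auth handler runs;
// the handler itself is expected to call recordFailure()/reset() as appropriate.
app.post(
  '/api/terminal/auth',
  createRateLimitMiddleware(terminalAuthRateLimiter),
  (req, res) => {
    res.json({ success: true });
  }
);
```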

View File

@@ -0,0 +1,23 @@
/**
* Re-export secure file system utilities from @automaker/platform
* This file exists for backward compatibility with existing imports
*/
import { secureFs } from '@automaker/platform';
export const {
access,
readFile,
writeFile,
mkdir,
readdir,
stat,
rm,
unlink,
copyFile,
appendFile,
rename,
lstat,
joinPath,
resolvePath,
} = secureFs;

View File

@@ -0,0 +1,181 @@
/**
* Validation Storage - CRUD operations for GitHub issue validation results
*
* Stores validation results in .automaker/validations/{issueNumber}/validation.json
* Results include the validation verdict, metadata, and timestamp for cache invalidation.
*/
import * as secureFs from './secure-fs.js';
import { getValidationsDir, getValidationDir, getValidationPath } from '@automaker/platform';
import type { StoredValidation } from '@automaker/types';
// Re-export StoredValidation for convenience
export type { StoredValidation };
/** Number of hours before a validation is considered stale */
const VALIDATION_CACHE_TTL_HOURS = 24;
/**
* Write validation result to storage
*
* Creates the validation directory if needed and stores the result as JSON.
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @param data - Validation data to store
*/
export async function writeValidation(
projectPath: string,
issueNumber: number,
data: StoredValidation
): Promise<void> {
const validationDir = getValidationDir(projectPath, issueNumber);
const validationPath = getValidationPath(projectPath, issueNumber);
// Ensure directory exists
await secureFs.mkdir(validationDir, { recursive: true });
// Write validation result
await secureFs.writeFile(validationPath, JSON.stringify(data, null, 2), 'utf-8');
}
/**
* Read validation result from storage
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns Stored validation or null if not found
*/
export async function readValidation(
projectPath: string,
issueNumber: number
): Promise<StoredValidation | null> {
try {
const validationPath = getValidationPath(projectPath, issueNumber);
const content = (await secureFs.readFile(validationPath, 'utf-8')) as string;
return JSON.parse(content) as StoredValidation;
} catch {
// File doesn't exist or can't be read
return null;
}
}
/**
* Get all stored validations for a project
*
* @param projectPath - Absolute path to project directory
* @returns Array of stored validations
*/
export async function getAllValidations(projectPath: string): Promise<StoredValidation[]> {
const validationsDir = getValidationsDir(projectPath);
try {
const dirs = await secureFs.readdir(validationsDir, { withFileTypes: true });
// Read all validation files in parallel for better performance
const promises = dirs
.filter((dir) => dir.isDirectory())
.map((dir) => {
const issueNumber = parseInt(dir.name, 10);
if (!isNaN(issueNumber)) {
return readValidation(projectPath, issueNumber);
}
return Promise.resolve(null);
});
const results = await Promise.all(promises);
const validations = results.filter((v): v is StoredValidation => v !== null);
// Sort by issue number
validations.sort((a, b) => a.issueNumber - b.issueNumber);
return validations;
} catch {
// Directory doesn't exist
return [];
}
}
/**
* Delete a validation from storage
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns true if validation was deleted, false if not found
*/
export async function deleteValidation(projectPath: string, issueNumber: number): Promise<boolean> {
try {
const validationDir = getValidationDir(projectPath, issueNumber);
await secureFs.rm(validationDir, { recursive: true, force: true });
return true;
} catch {
return false;
}
}
/**
* Check if a validation is stale (older than TTL)
*
* @param validation - Stored validation to check
* @returns true if validation is older than 24 hours
*/
export function isValidationStale(validation: StoredValidation): boolean {
const validatedAt = new Date(validation.validatedAt);
const now = new Date();
const hoursDiff = (now.getTime() - validatedAt.getTime()) / (1000 * 60 * 60);
return hoursDiff > VALIDATION_CACHE_TTL_HOURS;
}
/**
* Get validation with freshness info
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns Object with validation and isStale flag, or null if not found
*/
export async function getValidationWithFreshness(
projectPath: string,
issueNumber: number
): Promise<{ validation: StoredValidation; isStale: boolean } | null> {
const validation = await readValidation(projectPath, issueNumber);
if (!validation) {
return null;
}
return {
validation,
isStale: isValidationStale(validation),
};
}
/**
* Mark a validation as viewed by the user
*
* @param projectPath - Absolute path to project directory
* @param issueNumber - GitHub issue number
* @returns true if validation was marked as viewed, false if not found
*/
export async function markValidationViewed(
projectPath: string,
issueNumber: number
): Promise<boolean> {
const validation = await readValidation(projectPath, issueNumber);
if (!validation) {
return false;
}
validation.viewedAt = new Date().toISOString();
await writeValidation(projectPath, issueNumber, validation);
return true;
}
/**
* Get count of unviewed, non-stale validations for a project
*
* @param projectPath - Absolute path to project directory
* @returns Number of unviewed validations
*/
export async function getUnviewedValidationsCount(projectPath: string): Promise<number> {
const validations = await getAllValidations(projectPath);
return validations.filter((v) => !v.viewedAt && !isValidationStale(v)).length;
}
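
Typical usage of these helpers might look like the sketch below. Only the StoredValidation fields the helpers themselves read are shown; the rest of the type (verdict, reasoning, model, and so on) is assumed from context and elided here.

```typescript
import {
  writeValidation,
  getValidationWithFreshness,
  type StoredValidation,
} from './validation-storage.js';

async function cacheAndRead(projectPath: string): Promise<void> {
  // Minimal entry for illustration; real entries carry the full validation result.
  const entry = {
    issueNumber: 123,
    validatedAt: new Date().toISOString(),
  } as StoredValidation;

  await writeValidation(projectPath, 123, entry);

  const cached = await getValidationWithFreshness(projectPath, 123);
  if (cached && !cached.isStale) {
    console.log(`Cached validation for issue #${cached.validation.issueNumber} is still fresh`);
  }
}
```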

View File

@@ -3,16 +3,26 @@
 * Stores worktree-specific data in .automaker/worktrees/:branch/worktree.json
 */
-import { secureFs } from '@automaker/platform';
+import * as secureFs from './secure-fs.js';
import * as path from 'path';
-import type { WorktreePRInfo, WorktreeMetadata } from '@automaker/types';
-// Re-export types for convenience
-export type { WorktreePRInfo, WorktreeMetadata } from '@automaker/types';
/** Maximum length for sanitized branch names in filesystem paths */
const MAX_SANITIZED_BRANCH_PATH_LENGTH = 200;
+export interface WorktreePRInfo {
+number: number;
+url: string;
+title: string;
+state: string;
+createdAt: string;
+}
+export interface WorktreeMetadata {
+branch: string;
+createdAt: string;
+pr?: WorktreePRInfo;
+}
/**
 * Sanitize branch name for cross-platform filesystem safety
 */

View File

@@ -7,6 +7,29 @@
import type { Request, Response, NextFunction } from 'express';
import { validatePath, PathNotAllowedError } from '@automaker/platform';
/**
* Custom error for invalid path type
*/
class InvalidPathTypeError extends Error {
constructor(paramName: string, expectedType: string, actualType: string) {
super(`Invalid type for '${paramName}': expected ${expectedType}, got ${actualType}`);
this.name = 'InvalidPathTypeError';
}
}
/**
* Validates that a value is a non-empty string suitable for path validation
*
* @param value - The value to check
* @param paramName - The parameter name for error messages
* @throws InvalidPathTypeError if value is not a valid string
*/
function assertValidPathString(value: unknown, paramName: string): asserts value is string {
if (typeof value !== 'string') {
throw new InvalidPathTypeError(paramName, 'string', typeof value);
}
}
/**
 * Creates a middleware that validates specified path parameters in req.body
 * @param paramNames - Names of parameters to validate (e.g., 'projectPath', 'worktreePath')
@@ -27,7 +50,8 @@ export function validatePathParams(...paramNames: string[]) {
if (paramName.endsWith('?')) {
const actualName = paramName.slice(0, -1);
const value = req.body[actualName];
-if (value) {
+if (value !== undefined && value !== null) {
assertValidPathString(value, actualName);
validatePath(value);
}
continue;
@@ -37,17 +61,30 @@ export function validatePathParams(...paramNames: string[]) {
if (paramName.endsWith('[]')) {
const actualName = paramName.slice(0, -2);
const values = req.body[actualName];
-if (Array.isArray(values) && values.length > 0) {
-for (const value of values) {
-validatePath(value);
+// Skip if not provided or empty
+if (values === undefined || values === null) {
continue;
} }
// Validate that it's actually an array
if (!Array.isArray(values)) {
throw new InvalidPathTypeError(actualName, 'array', typeof values);
}
// Validate each element in the array
for (let i = 0; i < values.length; i++) {
const value = values[i];
assertValidPathString(value, `${actualName}[${i}]`);
validatePath(value);
}
continue;
}
// Handle regular parameters
const value = req.body[paramName];
-if (value) {
+if (value !== undefined && value !== null) {
assertValidPathString(value, paramName);
validatePath(value);
}
}
@@ -62,6 +99,14 @@ export function validatePathParams(...paramNames: string[]) {
return;
}
if (error instanceof InvalidPathTypeError) {
res.status(400).json({
success: false,
error: error.message,
});
return;
}
// Re-throw unexpected errors
throw error;
}

View File

@@ -9,10 +9,6 @@ import type {
InstallationStatus,
ValidationResult,
ModelDefinition,
-SimpleQueryOptions,
-SimpleQueryResult,
-StreamingQueryOptions,
-StreamingQueryResult,
} from './types.js';
/**
@@ -39,22 +35,6 @@ export abstract class BaseProvider {
 */
abstract executeQuery(options: ExecuteOptions): AsyncGenerator<ProviderMessage>;
/**
* Execute a simple one-shot query and return text directly
* Use for quick completions without tools (title gen, descriptions, etc.)
* @param options Simple query options
* @returns Query result with text
*/
abstract executeSimpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult>;
/**
* Execute a streaming query with tools and/or structured output
* Use for queries that need tools, progress callbacks, or structured JSON output
* @param options Streaming query options
* @returns Query result with text and optional structured output
*/
abstract executeStreamingQuery(options: StreamingQueryOptions): Promise<StreamingQueryResult>;
/**
 * Detect if the provider is installed and configured
 * @returns Installation status

View File

@@ -3,26 +3,15 @@
 *
 * Wraps the @anthropic-ai/claude-agent-sdk for seamless integration
 * with the provider architecture.
- *
- * Provides two query methods:
- * - executeQuery(): Streaming async generator for complex multi-turn sessions
- * - executeSimpleQuery(): One-shot queries that return text directly (title gen, descriptions, etc.)
 */
import { query, type Options } from '@anthropic-ai/claude-agent-sdk';
import { BaseProvider } from './base-provider.js';
-import { resolveModelString } from '@automaker/model-resolver';
-import { CLAUDE_MODEL_MAP } from '@automaker/types';
import type {
ExecuteOptions,
ProviderMessage,
InstallationStatus,
ModelDefinition,
-SimpleQueryOptions,
-SimpleQueryResult,
-StreamingQueryOptions,
-StreamingQueryResult,
-PromptContentBlock,
} from './types.js';
export class ClaudeProvider extends BaseProvider {
@@ -186,225 +175,4 @@ export class ClaudeProvider extends BaseProvider {
const supportedFeatures = ['tools', 'text', 'vision', 'thinking'];
return supportedFeatures.includes(feature);
}
/**
* Execute a simple one-shot query and return text directly
*
* Use this for:
* - Title generation from description
* - Text enhancement
* - File/image description
* - Any quick, single-turn completion without tools
*
* @example
* ```typescript
* const provider = ProviderFactory.getProviderForModel('haiku');
* const result = await provider.executeSimpleQuery({
* prompt: 'Generate a title for: User authentication feature',
* systemPrompt: 'You are a title generator...',
* });
* if (result.success) console.log(result.text);
* ```
*/
async executeSimpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult> {
const { prompt, model, systemPrompt, abortController } = options;
const resolvedModel = resolveModelString(model, CLAUDE_MODEL_MAP.haiku);
try {
const sdkOptions: Options = {
model: resolvedModel,
systemPrompt,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
abortController,
};
// Handle both string prompts and multi-part content blocks
const stream = Array.isArray(prompt)
? query({ prompt: this.createPromptGenerator(prompt), options: sdkOptions })
: query({ prompt, options: sdkOptions });
const { text } = await this.extractTextFromStream(stream);
if (!text || text.trim().length === 0) {
return {
text: '',
success: false,
error: 'Empty response from Claude',
};
}
return {
text: text.trim(),
success: true,
};
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
console.error('[ClaudeProvider] executeSimpleQuery() error:', errorMessage);
return {
text: '',
success: false,
error: errorMessage,
};
}
}
/**
* Execute a streaming query with tools and/or structured output
*
* Use this for:
* - Spec generation (with JSON schema output)
* - Feature generation from specs
* - Suggestions generation
* - Any query that needs tools or progress callbacks
*
* @example
* ```typescript
* const provider = ProviderFactory.getProviderForModel('opus');
* const result = await provider.executeStreamingQuery({
* prompt: 'Analyze this project...',
* cwd: '/path/to/project',
* allowedTools: ['Read', 'Glob', 'Grep'],
* outputFormat: { type: 'json_schema', schema: mySchema },
* onText: (chunk) => console.log('Progress:', chunk),
* });
* console.log(result.structuredOutput);
* ```
*/
async executeStreamingQuery(options: StreamingQueryOptions): Promise<StreamingQueryResult> {
const {
prompt,
model,
systemPrompt,
cwd,
maxTurns = 100,
allowedTools = ['Read', 'Glob', 'Grep'],
abortController,
outputFormat,
onText,
onToolUse,
} = options;
const resolvedModel = resolveModelString(model, CLAUDE_MODEL_MAP.haiku);
try {
const sdkOptions: Options = {
model: resolvedModel,
systemPrompt,
maxTurns,
cwd,
allowedTools: [...allowedTools],
permissionMode: 'acceptEdits',
abortController,
...(outputFormat && { outputFormat }),
};
// Handle both string prompts and multi-part content blocks
const stream = Array.isArray(prompt)
? query({ prompt: this.createPromptGenerator(prompt), options: sdkOptions })
: query({ prompt, options: sdkOptions });
const { text, structuredOutput } = await this.extractTextFromStream(stream, {
onText,
onToolUse,
});
if (!text && !structuredOutput) {
return {
text: '',
success: false,
error: 'Empty response from Claude',
};
}
return {
text: text.trim(),
success: true,
structuredOutput,
};
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
console.error('[ClaudeProvider] executeStreamingQuery() error:', errorMessage);
return {
text: '',
success: false,
error: errorMessage,
};
}
}
/**
* Create a multi-part prompt generator for content blocks
*/
private createPromptGenerator(content: PromptContentBlock[]) {
// Return an async generator that yields SDK user messages
// The SDK expects this format for multi-part prompts
return (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content },
parent_tool_use_id: null,
};
})();
}
/**
* Extract text and structured output from SDK stream
*
* This consolidates the duplicated extractTextFromStream() function
* that was copied across 5+ route files.
*/
private async extractTextFromStream(
stream: AsyncIterable<unknown>,
handlers?: {
onText?: (text: string) => void;
onToolUse?: (name: string, input: unknown) => void;
}
): Promise<{ text: string; structuredOutput?: unknown }> {
let responseText = '';
let structuredOutput: unknown = undefined;
for await (const msg of stream) {
const message = msg as {
type: string;
subtype?: string;
result?: string;
structured_output?: unknown;
message?: {
content?: Array<{ type: string; text?: string; name?: string; input?: unknown }>;
};
};
if (message.type === 'assistant' && message.message?.content) {
for (const block of message.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
handlers?.onText?.(block.text);
} else if (block.type === 'tool_use' && block.name) {
handlers?.onToolUse?.(block.name, block.input);
}
}
} else if (message.type === 'result' && message.subtype === 'success') {
if (message.result) {
responseText = message.result;
}
if (message.structured_output) {
structuredOutput = message.structured_output;
}
} else if (message.type === 'result' && message.subtype === 'error_max_turns') {
console.warn('[ClaudeProvider] Hit max turns limit');
} else if (
message.type === 'result' &&
message.subtype === 'error_max_structured_output_retries'
) {
throw new Error('Failed to produce valid structured output after retries');
} else if (message.type === 'error') {
const errorMsg = (message as { error?: string }).error || 'Unknown error';
throw new Error(errorMsg);
}
}
return { text: responseText, structuredOutput };
}
}

View File

@@ -102,92 +102,3 @@ export interface ModelDefinition {
tier?: 'basic' | 'standard' | 'premium';
default?: boolean;
}
/**
* Content block for multi-part prompts (images, structured text)
*/
export interface TextContentBlock {
type: 'text';
text: string;
}
export interface ImageContentBlock {
type: 'image';
source: {
type: 'base64';
media_type: 'image/jpeg' | 'image/png' | 'image/gif' | 'image/webp';
data: string;
};
}
export type PromptContentBlock = TextContentBlock | ImageContentBlock;
/**
* Options for simple one-shot queries (title generation, descriptions, text enhancement)
*
* These queries:
* - Don't need tools
* - Return text directly (no streaming)
* - Are single-turn (maxTurns=1)
*/
export interface SimpleQueryOptions {
/** The prompt - either a string or array of content blocks */
prompt: string | PromptContentBlock[];
/** Model to use (defaults to haiku) */
model?: string;
/** Optional system prompt */
systemPrompt?: string;
/** Abort controller for cancellation */
abortController?: AbortController;
}
/**
* Result from a simple query
*/
export interface SimpleQueryResult {
/** Extracted text from the response */
text: string;
/** Whether the query completed successfully */
success: boolean;
/** Error message if failed */
error?: string;
}
/**
* Options for streaming queries with tools and/or structured output
*/
export interface StreamingQueryOptions extends SimpleQueryOptions {
/** Working directory for tool execution */
cwd: string;
/** Max turns (defaults to sdk-options presets) */
maxTurns?: number;
/** Tools to allow */
allowedTools?: readonly string[];
/** JSON schema for structured output */
outputFormat?: {
type: 'json_schema';
schema: Record<string, unknown>;
};
/** Callback for text chunks */
onText?: (text: string) => void;
/** Callback for tool usage */
onToolUse?: (name: string, input: unknown) => void;
}
/**
* Result from a streaming query with structured output
*/
export interface StreamingQueryResult extends SimpleQueryResult {
/** Parsed structured output if outputFormat was specified */
structuredOutput?: unknown;
}

View File

@@ -1,14 +1,15 @@
/**
 * Generate features from existing app_spec.txt
- *
- * Uses ClaudeProvider.executeStreamingQuery() for SDK interaction.
 */
+import { query } from '@anthropic-ai/claude-agent-sdk';
+import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
-import { ProviderFactory } from '../../providers/provider-factory.js';
+import { createFeatureGenerationOptions } from '../../lib/sdk-options.js';
+import { logAuthStatus } from './common.js';
import { parseAndCreateFeatures } from './parse-and-create-features.js';
-import { getAppSpecPath, secureFs } from '@automaker/platform';
+import { getAppSpecPath } from '@automaker/platform';
const logger = createLogger('SpecRegeneration');
@@ -90,37 +91,72 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
projectPath: projectPath, projectPath: projectPath,
}); });
logger.info('Calling provider.executeStreamingQuery() for features...'); const options = createFeatureGenerationOptions({
const provider = ProviderFactory.getProviderForModel('haiku');
const result = await provider.executeStreamingQuery({
prompt,
model: 'haiku',
cwd: projectPath, cwd: projectPath,
maxTurns: 50,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController, abortController,
onText: (text) => {
logger.debug(`Feature text block received (${text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: text,
projectPath: projectPath,
});
},
}); });
if (!result.success) { logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.error('❌ Feature generation failed:', result.error); logger.info('Calling Claude Agent SDK query() for features...');
throw new Error(result.error || 'Feature generation failed');
logAuthStatus('Right before SDK query() for features');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
} }
logger.info(`Feature response length: ${result.text.length} chars`); let responseText = '';
let messageCount = 0;
logger.debug('Starting to iterate over feature stream...');
try {
for await (const msg of stream) {
messageCount++;
logger.debug(
`Feature stream message #${messageCount}:`,
JSON.stringify({ type: msg.type, subtype: (msg as any).subtype }, null, 2)
);
if (msg.type === 'assistant' && msg.message.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.debug(`Feature text block received (${block.text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.debug('Received success result for features');
responseText = (msg as any).result || responseText;
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from feature stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
}
}
} catch (streamError) {
logger.error('❌ Error while iterating feature stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
logger.info(`Feature stream complete. Total messages: ${messageCount}`);
logger.info(`Feature response length: ${responseText.length} chars`);
logger.info('========== FULL RESPONSE TEXT =========='); logger.info('========== FULL RESPONSE TEXT ==========');
logger.info(result.text); logger.info(responseText);
logger.info('========== END RESPONSE TEXT =========='); logger.info('========== END RESPONSE TEXT ==========');
await parseAndCreateFeatures(projectPath, result.text, events); await parseAndCreateFeatures(projectPath, responseText, events);
logger.debug('========== generateFeaturesFromSpec() completed =========='); logger.debug('========== generateFeaturesFromSpec() completed ==========');
} }

View File

@@ -1,9 +1,10 @@
/** /**
* Generate app_spec.txt from project overview * Generate app_spec.txt from project overview
*
* Uses ClaudeProvider.executeStreamingQuery() for SDK interaction.
*/ */
import { query } from '@anthropic-ai/claude-agent-sdk';
import path from 'path';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js'; import type { EventEmitter } from '../../lib/events.js';
import { import {
specOutputSchema, specOutputSchema,
@@ -12,9 +13,10 @@ import {
type SpecOutput, type SpecOutput,
} from '../../lib/app-spec-format.js'; } from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { ProviderFactory } from '../../providers/provider-factory.js'; import { createSpecGenerationOptions } from '../../lib/sdk-options.js';
import { logAuthStatus } from './common.js';
import { generateFeaturesFromSpec } from './generate-features-from-spec.js'; import { generateFeaturesFromSpec } from './generate-features-from-spec.js';
import { ensureAutomakerDir, getAppSpecPath, secureFs } from '@automaker/platform'; import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform';
const logger = createLogger('SpecRegeneration'); const logger = createLogger('SpecRegeneration');
@@ -81,53 +83,105 @@ ${getStructuredSpecPromptInstruction()}`;
content: 'Starting spec generation...\n', content: 'Starting spec generation...\n',
}); });
logger.info('Calling provider.executeStreamingQuery()...'); const options = createSpecGenerationOptions({
const provider = ProviderFactory.getProviderForModel('haiku');
const result = await provider.executeStreamingQuery({
prompt,
model: 'haiku',
cwd: projectPath, cwd: projectPath,
maxTurns: 1000,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController, abortController,
outputFormat: { outputFormat: {
type: 'json_schema', type: 'json_schema',
schema: specOutputSchema, schema: specOutputSchema,
}, },
onText: (text) => {
logger.info(`Text block received (${text.length} chars)`);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: text,
projectPath: projectPath,
});
},
onToolUse: (name, input) => {
logger.info('Tool use:', name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: name,
input,
});
},
}); });
if (!result.success) { logger.debug('SDK Options:', JSON.stringify(options, null, 2));
logger.error('❌ Spec generation failed:', result.error); logger.info('Calling Claude Agent SDK query()...');
throw new Error(result.error || 'Spec generation failed');
// Log auth status right before the SDK call
logAuthStatus('Right before SDK query()');
let stream;
try {
stream = query({ prompt, options });
logger.debug('query() returned stream successfully');
} catch (queryError) {
logger.error('❌ query() threw an exception:');
logger.error('Error:', queryError);
throw queryError;
} }
const responseText = result.text; let responseText = '';
const structuredOutput = result.structuredOutput as SpecOutput | undefined; let messageCount = 0;
let structuredOutput: SpecOutput | null = null;
logger.info(`Response text length: ${responseText.length} chars`); logger.info('Starting to iterate over stream...');
if (structuredOutput) {
try {
for await (const msg of stream) {
messageCount++;
logger.info(
`Stream message #${messageCount}: type=${msg.type}, subtype=${(msg as any).subtype}`
);
if (msg.type === 'assistant') {
const msgAny = msg as any;
if (msgAny.message?.content) {
for (const block of msgAny.message.content) {
if (block.type === 'text') {
responseText += block.text;
logger.info(
`Text block received (${block.text.length} chars), total now: ${responseText.length} chars`
);
events.emit('spec-regeneration:event', {
type: 'spec_regeneration_progress',
content: block.text,
projectPath: projectPath,
});
} else if (block.type === 'tool_use') {
logger.info('Tool use:', block.name);
events.emit('spec-regeneration:event', {
type: 'spec_tool',
tool: block.name,
input: block.input,
});
}
}
}
} else if (msg.type === 'result' && (msg as any).subtype === 'success') {
logger.info('Received success result');
// Check for structured output - this is the reliable way to get spec data
const resultMsg = msg as any;
if (resultMsg.structured_output) {
structuredOutput = resultMsg.structured_output as SpecOutput;
logger.info('✅ Received structured output'); logger.info('✅ Received structured output');
logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2)); logger.debug('Structured output:', JSON.stringify(structuredOutput, null, 2));
} else { } else {
logger.warn('⚠️ No structured output in result, will fall back to text parsing'); logger.warn('⚠️ No structured output in result, will fall back to text parsing');
} }
} else if (msg.type === 'result') {
// Handle error result types
const subtype = (msg as any).subtype;
logger.info(`Result message: subtype=${subtype}`);
if (subtype === 'error_max_turns') {
logger.error('❌ Hit max turns limit!');
} else if (subtype === 'error_max_structured_output_retries') {
logger.error('❌ Failed to produce valid structured output after retries');
throw new Error('Could not produce valid spec output');
}
} else if ((msg as { type: string }).type === 'error') {
logger.error('❌ Received error message from stream:');
logger.error('Error message:', JSON.stringify(msg, null, 2));
} else if (msg.type === 'user') {
// Log user messages (tool results)
logger.info(`User message (tool result): ${JSON.stringify(msg).substring(0, 500)}`);
}
}
} catch (streamError) {
logger.error('❌ Error while iterating stream:');
logger.error('Stream error:', streamError);
throw streamError;
}
logger.info(`Stream iteration complete. Total messages: ${messageCount}`);
logger.info(`Response text length: ${responseText.length} chars`);
// Determine XML content to save // Determine XML content to save
let xmlContent: string; let xmlContent: string;

View File

@@ -3,9 +3,10 @@
*/ */
import path from 'path'; import path from 'path';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js'; import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { getFeaturesDir, secureFs } from '@automaker/platform'; import { getFeaturesDir } from '@automaker/platform';
const logger = createLogger('SpecRegeneration'); const logger = createLogger('SpecRegeneration');

View File

@@ -1,6 +1,35 @@
/** /**
* Claude Usage types for CLI-based usage tracking * Claude Usage types for CLI-based usage tracking
* Re-exported from @automaker/types for convenience
*/ */
export type { ClaudeUsage, ClaudeStatus } from '@automaker/types'; export type ClaudeUsage = {
sessionTokensUsed: number;
sessionLimit: number;
sessionPercentage: number;
sessionResetTime: string; // ISO date string
sessionResetText: string; // Raw text like "Resets 10:59am (Asia/Dubai)"
weeklyTokensUsed: number;
weeklyLimit: number;
weeklyPercentage: number;
weeklyResetTime: string; // ISO date string
weeklyResetText: string; // Raw text like "Resets Dec 22 at 7:59pm (Asia/Dubai)"
sonnetWeeklyTokensUsed: number;
sonnetWeeklyPercentage: number;
sonnetResetText: string; // Raw text like "Resets Dec 27 at 9:59am (Asia/Dubai)"
costUsed: number | null;
costLimit: number | null;
costCurrency: string | null;
lastUpdated: string; // ISO date string
userTimezone: string;
};
export type ClaudeStatus = {
indicator: {
color: 'green' | 'yellow' | 'orange' | 'red' | 'gray';
};
description: string;
};
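For orientation, a value conforming to the inlined ClaudeUsage shape might look like the following. Every number and timestamp is an invented placeholder; only the field names and the reset-text formats echo the type above.

// Hypothetical ClaudeUsage value; percentages are presumably 0-100, times are ISO strings.
const exampleUsage: ClaudeUsage = {
  sessionTokensUsed: 12500,
  sessionLimit: 50000,
  sessionPercentage: 25,
  sessionResetTime: '2025-12-24T10:59:00+04:00',
  sessionResetText: 'Resets 10:59am (Asia/Dubai)',
  weeklyTokensUsed: 180000,
  weeklyLimit: 500000,
  weeklyPercentage: 36,
  weeklyResetTime: '2025-12-22T19:59:00+04:00',
  weeklyResetText: 'Resets Dec 22 at 7:59pm (Asia/Dubai)',
  sonnetWeeklyTokensUsed: 90000,
  sonnetWeeklyPercentage: 18,
  sonnetResetText: 'Resets Dec 27 at 9:59am (Asia/Dubai)',
  costUsed: null,
  costLimit: null,
  costCurrency: null,
  lastUpdated: '2025-12-24T06:00:00Z',
  userTimezone: 'Asia/Dubai',
};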

View File

@@ -18,14 +18,15 @@ export {
getGitRepositoryDiffs, getGitRepositoryDiffs,
} from '@automaker/git-utils'; } from '@automaker/git-utils';
// Re-export error utilities from shared package
export { getErrorMessage } from '@automaker/utils';
// Re-export exec utilities
export { execAsync, execEnv, isENOENT } from '../lib/exec-utils.js';
type Logger = ReturnType<typeof createLogger>; type Logger = ReturnType<typeof createLogger>;
/**
* Get error message from error object
*/
export function getErrorMessage(error: unknown): string {
return error instanceof Error ? error.message : 'Unknown error';
}
/** /**
* Create a logError function for a specific logger * Create a logError function for a specific logger
* This ensures consistent error logging format across all routes * This ensures consistent error logging format across all routes

View File

@@ -1,8 +1,8 @@
/** /**
* POST /context/describe-file endpoint - Generate description for a text file * POST /context/describe-file endpoint - Generate description for a text file
* *
* Uses Claude Haiku via ClaudeProvider to analyze a text file and generate * Uses Claude Haiku to analyze a text file and generate a concise description
* a concise description suitable for context file metadata. * suitable for context file metadata.
* *
* SECURITY: This endpoint validates file paths against ALLOWED_ROOT_DIRECTORY * SECURITY: This endpoint validates file paths against ALLOWED_ROOT_DIRECTORY
* and reads file content directly (not via Claude's Read tool) to prevent * and reads file content directly (not via Claude's Read tool) to prevent
@@ -10,9 +10,12 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { PathNotAllowedError, secureFs } from '@automaker/platform'; import { CLAUDE_MODEL_MAP } from '@automaker/types';
import { ProviderFactory } from '../../../providers/provider-factory.js'; import { PathNotAllowedError } from '@automaker/platform';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import * as secureFs from '../../../lib/secure-fs.js';
import * as path from 'path'; import * as path from 'path';
const logger = createLogger('DescribeFile'); const logger = createLogger('DescribeFile');
@@ -41,6 +44,31 @@ interface DescribeFileErrorResponse {
error: string; error: string;
} }
/**
* Extract text content from Claude SDK response messages
*/
async function extractTextFromStream(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
stream: AsyncIterable<any>
): Promise<string> {
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
const blocks = msg.message.content as Array<{ type: string; text?: string }>;
for (const block of blocks) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
return responseText;
}
/** /**
* Create the describe-file request handler * Create the describe-file request handler
* *
@@ -122,39 +150,60 @@ export function createDescribeFileHandler(): (req: Request, res: Response) => Pr
const fileName = path.basename(resolvedPath); const fileName = path.basename(resolvedPath);
// Build prompt with file content passed as structured data // Build prompt with file content passed as structured data
const promptContent = [ // The file content is included directly, not via tool invocation
{ const instructionText = `Analyze the following file and provide a 1-2 sentence description suitable for use as context in an AI coding assistant. Focus on what the file contains, its purpose, and why an AI agent might want to use this context in the future (e.g., "API documentation for the authentication endpoints", "Configuration file for database connections", "Coding style guidelines for the project").
type: 'text' as const,
text: `Analyze the following file and provide a 1-2 sentence description suitable for use as context in an AI coding assistant. Focus on what the file contains, its purpose, and why an AI agent might want to use this context in the future (e.g., "API documentation for the authentication endpoints", "Configuration file for database connections", "Coding style guidelines for the project").
Respond with ONLY the description text, no additional formatting, preamble, or explanation. Respond with ONLY the description text, no additional formatting, preamble, or explanation.
File: ${fileName}${truncated ? ' (truncated)' : ''}`, File: ${fileName}${truncated ? ' (truncated)' : ''}`;
},
const promptContent = [
{ type: 'text' as const, text: instructionText },
{ type: 'text' as const, text: `\n\n--- FILE CONTENT ---\n${contentToAnalyze}` }, { type: 'text' as const, text: `\n\n--- FILE CONTENT ---\n${contentToAnalyze}` },
]; ];
const provider = ProviderFactory.getProviderForModel('haiku'); // Use the file's directory as the working directory
const result = await provider.executeSimpleQuery({ const cwd = path.dirname(resolvedPath);
prompt: promptContent,
model: 'haiku', // Use centralized SDK options with proper cwd validation
// No tools needed since we're passing file content directly
const sdkOptions = createCustomOptions({
cwd,
model: CLAUDE_MODEL_MAP.haiku,
maxTurns: 1,
allowedTools: [],
sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
}); });
if (!result.success) { const promptGenerator = (async function* () {
logger.warn('Failed to generate description:', result.error); yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
// Extract the description from the response
const description = await extractTextFromStream(stream);
if (!description || description.trim().length === 0) {
logger.warn('Received empty response from Claude');
const response: DescribeFileErrorResponse = { const response: DescribeFileErrorResponse = {
success: false, success: false,
error: result.error || 'Failed to generate description', error: 'Failed to generate description - empty response',
}; };
res.status(500).json(response); res.status(500).json(response);
return; return;
} }
logger.info(`Description generated, length: ${result.text.length} chars`); logger.info(`Description generated, length: ${description.length} chars`);
const response: DescribeFileSuccessResponse = { const response: DescribeFileSuccessResponse = {
success: true, success: true,
description: result.text, description: description.trim(),
}; };
res.json(response); res.json(response);
} catch (error) { } catch (error) {
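One note on the prompt handling above: rather than passing a plain string, the handler wraps its multi-part content blocks in a one-shot async generator, the SDK's streaming-input form for user messages. A compact, hypothetical helper distilling that pattern is sketched below; the queryWithBlocks name and the loose block typing are assumptions, while the message envelope fields are copied from the handler.

// Hypothetical helper; envelope fields mirror the handler above.
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createCustomOptions } from '../../../lib/sdk-options.js';

function queryWithBlocks(
  blocks: Array<Record<string, unknown>>,
  sdkOptions: ReturnType<typeof createCustomOptions>
) {
  const promptGenerator = (async function* () {
    yield {
      type: 'user' as const,
      session_id: '',
      message: { role: 'user' as const, content: blocks },
      parent_tool_use_id: null,
    };
  })();
  return query({ prompt: promptGenerator, options: sdkOptions });
}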

View File

@@ -1,8 +1,8 @@
/** /**
* POST /context/describe-image endpoint - Generate description for an image * POST /context/describe-image endpoint - Generate description for an image
* *
* Uses Claude Haiku via ClaudeProvider to analyze an image and generate * Uses Claude Haiku to analyze an image and generate a concise description
* a concise description suitable for context file metadata. * suitable for context file metadata.
* *
* IMPORTANT: * IMPORTANT:
* The agent runner (chat/auto-mode) sends images as multi-part content blocks (base64 image blocks), * The agent runner (chat/auto-mode) sends images as multi-part content blocks (base64 image blocks),
@@ -11,9 +11,10 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger, readImageAsBase64 } from '@automaker/utils'; import { createLogger, readImageAsBase64 } from '@automaker/utils';
import { ProviderFactory } from '../../../providers/provider-factory.js'; import { CLAUDE_MODEL_MAP } from '@automaker/types';
import type { PromptContentBlock } from '../../../providers/types.js'; import { createCustomOptions } from '../../../lib/sdk-options.js';
import * as fs from 'fs'; import * as fs from 'fs';
import * as path from 'path'; import * as path from 'path';
@@ -172,6 +173,53 @@ function mapDescribeImageError(rawMessage: string | undefined): {
return baseResponse; return baseResponse;
} }
/**
* Extract text content from Claude SDK response messages and log high-signal stream events.
*/
async function extractTextFromStream(
// eslint-disable-next-line @typescript-eslint/no-explicit-any
stream: AsyncIterable<any>,
requestId: string
): Promise<string> {
let responseText = '';
let messageCount = 0;
logger.info(`[${requestId}] [Stream] Begin reading SDK stream...`);
for await (const msg of stream) {
messageCount++;
const msgType = msg?.type;
const msgSubtype = msg?.subtype;
// Keep this concise but informative. Full error object is logged in catch blocks.
logger.info(
`[${requestId}] [Stream] #${messageCount} type=${String(msgType)} subtype=${String(msgSubtype ?? '')}`
);
if (msgType === 'assistant' && msg.message?.content) {
const blocks = msg.message.content as Array<{ type: string; text?: string }>;
logger.info(`[${requestId}] [Stream] assistant blocks=${blocks.length}`);
for (const block of blocks) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
}
if (msgType === 'result' && msgSubtype === 'success') {
if (typeof msg.result === 'string' && msg.result.length > 0) {
responseText = msg.result;
}
}
}
logger.info(
`[${requestId}] [Stream] End of stream. messages=${messageCount} textLength=${responseText.length}`
);
return responseText;
}
/** /**
* Create the describe-image request handler * Create the describe-image request handler
* *
@@ -260,17 +308,13 @@ export function createDescribeImageHandler(): (req: Request, res: Response) => P
`"Architecture diagram of microservices", "Screenshot of error message in terminal").\n\n` + `"Architecture diagram of microservices", "Screenshot of error message in terminal").\n\n` +
`Respond with ONLY the description text, no additional formatting, preamble, or explanation.`; `Respond with ONLY the description text, no additional formatting, preamble, or explanation.`;
const promptContent: PromptContentBlock[] = [ const promptContent = [
{ type: 'text', text: instructionText }, { type: 'text' as const, text: instructionText },
{ {
type: 'image', type: 'image' as const,
source: { source: {
type: 'base64', type: 'base64' as const,
media_type: imageData.mimeType as media_type: imageData.mimeType,
| 'image/jpeg'
| 'image/png'
| 'image/gif'
| 'image/webp',
data: imageData.base64, data: imageData.base64,
}, },
}, },
@@ -278,26 +322,48 @@ export function createDescribeImageHandler(): (req: Request, res: Response) => P
logger.info(`[${requestId}] Built multi-part prompt blocks=${promptContent.length}`); logger.info(`[${requestId}] Built multi-part prompt blocks=${promptContent.length}`);
logger.info(`[${requestId}] Calling provider.executeSimpleQuery()...`); const cwd = path.dirname(actualPath);
const queryStart = Date.now(); logger.info(`[${requestId}] Using cwd=${cwd}`);
const provider = ProviderFactory.getProviderForModel('haiku'); // Use the same centralized option builder used across the server (validates cwd)
const result = await provider.executeSimpleQuery({ const sdkOptions = createCustomOptions({
prompt: promptContent, cwd,
model: 'haiku', model: CLAUDE_MODEL_MAP.haiku,
maxTurns: 1,
allowedTools: [],
sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
}); });
logger.info(`[${requestId}] Query completed in ${Date.now() - queryStart}ms`); logger.info(
`[${requestId}] SDK options model=${sdkOptions.model} maxTurns=${sdkOptions.maxTurns} allowedTools=${JSON.stringify(
const description = result.success ? result.text : ''; sdkOptions.allowedTools
)} sandbox=${JSON.stringify(sdkOptions.sandbox)}`
if (!result.success || !description || description.trim().length === 0) {
logger.warn(
`[${requestId}] Failed to generate description: ${result.error || 'empty response'}`
); );
const promptGenerator = (async function* () {
yield {
type: 'user' as const,
session_id: '',
message: { role: 'user' as const, content: promptContent },
parent_tool_use_id: null,
};
})();
logger.info(`[${requestId}] Calling query()...`);
const queryStart = Date.now();
const stream = query({ prompt: promptGenerator, options: sdkOptions });
logger.info(`[${requestId}] query() returned stream in ${Date.now() - queryStart}ms`);
// Extract the description from the response
const extractStart = Date.now();
const description = await extractTextFromStream(stream, requestId);
logger.info(`[${requestId}] extractMs=${Date.now() - extractStart}`);
if (!description || description.trim().length === 0) {
logger.warn(`[${requestId}] Received empty response from Claude`);
const response: DescribeImageErrorResponse = { const response: DescribeImageErrorResponse = {
success: false, success: false,
error: result.error || 'Failed to generate description - empty response', error: 'Failed to generate description - empty response',
requestId, requestId,
}; };
res.status(500).json(response); res.status(500).json(response);

View File

@@ -1,19 +1,21 @@
/** /**
* POST /enhance-prompt endpoint - Enhance user input text * POST /enhance-prompt endpoint - Enhance user input text
* *
* Uses Claude AI via ClaudeProvider to enhance text based on the specified * Uses Claude AI to enhance text based on the specified enhancement mode.
* enhancement mode. Supports modes: improve, technical, simplify, acceptance * Supports modes: improve, technical, simplify, acceptance
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { ProviderFactory } from '../../../providers/provider-factory.js'; import { resolveModelString } from '@automaker/model-resolver';
import { CLAUDE_MODEL_MAP } from '@automaker/types';
import { import {
getSystemPrompt, getSystemPrompt,
buildUserPrompt, buildUserPrompt,
isValidEnhancementMode, isValidEnhancementMode,
type EnhancementMode, type EnhancementMode,
} from '@automaker/prompts'; } from '../../../lib/enhancement-prompts.js';
const logger = createLogger('EnhancePrompt'); const logger = createLogger('EnhancePrompt');
@@ -45,6 +47,39 @@ interface EnhanceErrorResponse {
error: string; error: string;
} }
/**
* Extract text content from Claude SDK response messages
*
* @param stream - The async iterable from the query function
* @returns The extracted text content
*/
async function extractTextFromStream(
stream: AsyncIterable<{
type: string;
subtype?: string;
result?: string;
message?: {
content?: Array<{ type: string; text?: string }>;
};
}>
): Promise<string> {
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
return responseText;
}
/** /**
* Create the enhance request handler * Create the enhance request handler
* *
@@ -97,30 +132,45 @@ export function createEnhanceHandler(): (req: Request, res: Response) => Promise
const systemPrompt = getSystemPrompt(validMode); const systemPrompt = getSystemPrompt(validMode);
// Build the user prompt with few-shot examples // Build the user prompt with few-shot examples
// This helps the model understand this is text transformation, not a coding task
const userPrompt = buildUserPrompt(validMode, trimmedText, true); const userPrompt = buildUserPrompt(validMode, trimmedText, true);
const provider = ProviderFactory.getProviderForModel(model || 'sonnet'); // Resolve the model - use the passed model, default to sonnet for quality
const result = await provider.executeSimpleQuery({ const resolvedModel = resolveModelString(model, CLAUDE_MODEL_MAP.sonnet);
logger.debug(`Using model: ${resolvedModel}`);
// Call Claude SDK with minimal configuration for text transformation
// Key: no tools, just text completion
const stream = query({
prompt: userPrompt, prompt: userPrompt,
model: model || 'sonnet', options: {
model: resolvedModel,
systemPrompt, systemPrompt,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
},
}); });
if (!result.success) { // Extract the enhanced text from the response
logger.warn('Failed to enhance text:', result.error); const enhancedText = await extractTextFromStream(stream);
if (!enhancedText || enhancedText.trim().length === 0) {
logger.warn('Received empty response from Claude');
const response: EnhanceErrorResponse = { const response: EnhanceErrorResponse = {
success: false, success: false,
error: result.error || 'Failed to generate enhanced text', error: 'Failed to generate enhanced text - empty response',
}; };
res.status(500).json(response); res.status(500).json(response);
return; return;
} }
logger.info(`Enhancement complete, output length: ${result.text.length} chars`); logger.info(`Enhancement complete, output length: ${enhancedText.length} chars`);
const response: EnhanceSuccessResponse = { const response: EnhanceSuccessResponse = {
success: true, success: true,
enhancedText: result.text, enhancedText: enhancedText.trim(),
}; };
res.json(response); res.json(response);
} catch (error) { } catch (error) {
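From the client side, a request to this endpoint would presumably look like the sketch below; the mount prefix and the body field names (text, mode, model) are inferred from the handler rather than confirmed by this diff.

// Hypothetical client call, inside an async function; paths and field names are assumptions.
const res = await fetch('/api/enhance-prompt', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    text: 'add login to the app',
    mode: 'technical', // improve | technical | simplify | acceptance
    model: 'sonnet', // optional; the handler falls back to sonnet
  }),
});
const data = await res.json(); // { success: true, enhancedText: '...' } on success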

View File

@@ -1,12 +1,13 @@
/** /**
* POST /features/generate-title endpoint - Generate a concise title from description * POST /features/generate-title endpoint - Generate a concise title from description
* *
* Uses Claude Haiku via ClaudeProvider to generate a short, descriptive title. * Uses Claude Haiku to generate a short, descriptive title from feature description.
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { ProviderFactory } from '../../../providers/provider-factory.js'; import { CLAUDE_MODEL_MAP } from '@automaker/model-resolver';
const logger = createLogger('GenerateTitle'); const logger = createLogger('GenerateTitle');
@@ -33,6 +34,33 @@ Rules:
- No quotes, periods, or extra formatting - No quotes, periods, or extra formatting
- Capture the essence of the feature in a scannable way`; - Capture the essence of the feature in a scannable way`;
async function extractTextFromStream(
stream: AsyncIterable<{
type: string;
subtype?: string;
result?: string;
message?: {
content?: Array<{ type: string; text?: string }>;
};
}>
): Promise<string> {
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
return responseText;
}
export function createGenerateTitleHandler(): (req: Request, res: Response) => Promise<void> { export function createGenerateTitleHandler(): (req: Request, res: Response) => Promise<void> {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
try { try {
@@ -61,28 +89,34 @@ export function createGenerateTitleHandler(): (req: Request, res: Response) => P
const userPrompt = `Generate a concise title for this feature:\n\n${trimmedDescription}`; const userPrompt = `Generate a concise title for this feature:\n\n${trimmedDescription}`;
const provider = ProviderFactory.getProviderForModel('haiku'); const stream = query({
const result = await provider.executeSimpleQuery({
prompt: userPrompt, prompt: userPrompt,
model: 'haiku', options: {
model: CLAUDE_MODEL_MAP.haiku,
systemPrompt: SYSTEM_PROMPT, systemPrompt: SYSTEM_PROMPT,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
},
}); });
if (!result.success) { const title = await extractTextFromStream(stream);
logger.warn('Failed to generate title:', result.error);
if (!title || title.trim().length === 0) {
logger.warn('Received empty response from Claude');
const response: GenerateTitleErrorResponse = { const response: GenerateTitleErrorResponse = {
success: false, success: false,
error: result.error || 'Failed to generate title', error: 'Failed to generate title - empty response',
}; };
res.status(500).json(response); res.status(500).json(response);
return; return;
} }
logger.info(`Generated title: ${result.text}`); logger.info(`Generated title: ${title.trim()}`);
const response: GenerateTitleSuccessResponse = { const response: GenerateTitleSuccessResponse = {
success: true, success: true,
title: result.text, title: title.trim(),
}; };
res.json(response); res.json(response);
} catch (error) { } catch (error) {
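The title endpoint follows the same shape; a hypothetical client call is sketched below, assuming the body field is named description and the route sits under a features prefix.

// Hypothetical client call, inside an async function; mount path and field name are assumptions.
const res = await fetch('/api/features/generate-title', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    description: 'Allow users to reset their password via an emailed link',
  }),
});
const data = await res.json(); // { success: true, title: '...' } on success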

View File

@@ -3,11 +3,10 @@
*/ */
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError, isENOENT } from '../common.js'; import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
const logger = createLogger('FS'); const logger = createLogger('FS');
// Re-export shared utilities // Re-export shared utilities
export { getErrorMessageShared as getErrorMessage }; export { getErrorMessageShared as getErrorMessage };
export { isENOENT };
export const logError = createLogError(logger); export const logError = createLogError(logger);

View File

@@ -3,9 +3,10 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, getAllowedRootDirectory, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import os from 'os'; import os from 'os';
import path from 'path'; import path from 'path';
import { getAllowedRootDirectory, PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createBrowseHandler() { export function createBrowseHandler() {

View File

@@ -3,9 +3,10 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, getBoardDir } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
import { getBoardDir } from '@automaker/platform';
export function createDeleteBoardBackgroundHandler() { export function createDeleteBoardBackgroundHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {

View File

@@ -3,7 +3,8 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createDeleteHandler() { export function createDeleteHandler() {

View File

@@ -3,7 +3,8 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createExistsHandler() { export function createExistsHandler() {

View File

@@ -3,8 +3,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createImageHandler() { export function createImageHandler() {

View File

@@ -4,8 +4,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createMkdirHandler() { export function createMkdirHandler() {

View File

@@ -3,8 +3,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import { getErrorMessage, logError, isENOENT } from '../common.js'; import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js';
// Optional files that are expected to not exist in new projects // Optional files that are expected to not exist in new projects
// Don't log ENOENT errors for these to reduce noise // Don't log ENOENT errors for these to reduce noise
@@ -14,6 +15,10 @@ function isOptionalFile(filePath: string): boolean {
return OPTIONAL_FILES.some((optionalFile) => filePath.endsWith(optionalFile)); return OPTIONAL_FILES.some((optionalFile) => filePath.endsWith(optionalFile));
} }
function isENOENT(error: unknown): boolean {
return error !== null && typeof error === 'object' && 'code' in error && error.code === 'ENOENT';
}
export function createReadHandler() { export function createReadHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
try { try {

View File

@@ -3,7 +3,8 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createReaddirHandler() { export function createReaddirHandler() {

View File

@@ -3,7 +3,7 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';

View File

@@ -3,9 +3,10 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, getBoardDir } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
import { getBoardDir } from '@automaker/platform';
export function createSaveBoardBackgroundHandler() { export function createSaveBoardBackgroundHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {

View File

@@ -3,9 +3,10 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, getImagesDir } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
import { getImagesDir } from '@automaker/platform';
export function createSaveImageHandler() { export function createSaveImageHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {

View File

@@ -3,7 +3,8 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import { PathNotAllowedError } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createStatHandler() { export function createStatHandler() {

View File

@@ -3,8 +3,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, isPathAllowed } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { isPathAllowed } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createValidatePathHandler() { export function createValidatePathHandler() {

View File

@@ -3,8 +3,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { PathNotAllowedError } from '@automaker/platform';
import { mkdirSafe } from '@automaker/utils'; import { mkdirSafe } from '@automaker/utils';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';

View File

@@ -3,16 +3,50 @@
*/ */
import { Router } from 'express'; import { Router } from 'express';
import type { EventEmitter } from '../../lib/events.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createCheckGitHubRemoteHandler } from './routes/check-github-remote.js'; import { createCheckGitHubRemoteHandler } from './routes/check-github-remote.js';
import { createListIssuesHandler } from './routes/list-issues.js'; import { createListIssuesHandler } from './routes/list-issues.js';
import { createListPRsHandler } from './routes/list-prs.js'; import { createListPRsHandler } from './routes/list-prs.js';
import { createValidateIssueHandler } from './routes/validate-issue.js';
import {
createValidationStatusHandler,
createValidationStopHandler,
createGetValidationsHandler,
createDeleteValidationHandler,
createMarkViewedHandler,
} from './routes/validation-endpoints.js';
export function createGitHubRoutes(): Router { export function createGitHubRoutes(events: EventEmitter): Router {
const router = Router(); const router = Router();
router.post('/check-remote', createCheckGitHubRemoteHandler()); router.post('/check-remote', validatePathParams('projectPath'), createCheckGitHubRemoteHandler());
router.post('/issues', createListIssuesHandler()); router.post('/issues', validatePathParams('projectPath'), createListIssuesHandler());
router.post('/prs', createListPRsHandler()); router.post('/prs', validatePathParams('projectPath'), createListPRsHandler());
router.post(
'/validate-issue',
validatePathParams('projectPath'),
createValidateIssueHandler(events)
);
// Validation management endpoints
router.post(
'/validation-status',
validatePathParams('projectPath'),
createValidationStatusHandler()
);
router.post('/validation-stop', validatePathParams('projectPath'), createValidationStopHandler());
router.post('/validations', validatePathParams('projectPath'), createGetValidationsHandler());
router.post(
'/validation-delete',
validatePathParams('projectPath'),
createDeleteValidationHandler()
);
router.post(
'/validation-mark-viewed',
validatePathParams('projectPath'),
createMarkViewedHandler(events)
);
return router; return router;
} }
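Because createGitHubRoutes now requires the shared event emitter, callers have to thread it through when mounting the router. A wiring sketch follows; the '/api/github' mount path and the relative import paths are assumptions, and only the new factory signature is taken from the diff.

// Sketch of mounting the new router signature.
import type { Express } from 'express';
import type { EventEmitter } from '../../lib/events.js';
import { createGitHubRoutes } from './github/index.js';

export function mountGitHubRoutes(app: Express, events: EventEmitter): void {
  app.use('/api/github', createGitHubRoutes(events));
}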

View File

@@ -3,11 +3,14 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { GitHubRemoteStatus } from '@automaker/types';
import { execAsync, execEnv, getErrorMessage, logError } from './common.js'; import { execAsync, execEnv, getErrorMessage, logError } from './common.js';
// Re-export type for convenience export interface GitHubRemoteStatus {
export type { GitHubRemoteStatus } from '@automaker/types'; hasGitHubRemote: boolean;
remoteUrl: string | null;
owner: string | null;
repo: string | null;
}
export async function checkGitHubRemote(projectPath: string): Promise<GitHubRemoteStatus> { export async function checkGitHubRemote(projectPath: string): Promise<GitHubRemoteStatus> {
const status: GitHubRemoteStatus = { const status: GitHubRemoteStatus = {

View File

@@ -2,16 +2,34 @@
* Common utilities for GitHub routes * Common utilities for GitHub routes
*/ */
import { createLogger } from '@automaker/utils'; import { exec } from 'child_process';
import { createLogError, getErrorMessage } from '../../common.js'; import { promisify } from 'util';
import { execAsync, execEnv } from '../../../lib/exec-utils.js';
const logger = createLogger('GitHub'); export const execAsync = promisify(exec);
// Re-export exec utilities for convenience // Extended PATH to include common tool installation locations
export { execAsync, execEnv } from '../../../lib/exec-utils.js'; export const extendedPath = [
process.env.PATH,
'/opt/homebrew/bin',
'/usr/local/bin',
'/home/linuxbrew/.linuxbrew/bin',
`${process.env.HOME}/.local/bin`,
]
.filter(Boolean)
.join(':');
// Re-export error utilities export const execEnv = {
export { getErrorMessage } from '../../common.js'; ...process.env,
PATH: extendedPath,
};
export const logError = createLogError(logger); export function getErrorMessage(error: unknown): string {
if (error instanceof Error) {
return error.message;
}
return String(error);
}
export function logError(error: unknown, context: string): void {
console.error(`[GitHub] ${context}:`, error);
}
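A quick usage example of these helpers: the widened PATH in execEnv helps gh resolve when the server process inherits a minimal environment. The command and project path below are placeholders.

// Inside an async handler; command and cwd are placeholders.
const { stdout } = await execAsync('gh auth status', {
  cwd: '/path/to/project',
  env: execEnv, // PATH extended with Homebrew and ~/.local/bin locations
});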

View File

@@ -2,13 +2,192 @@
* POST /list-issues endpoint - List GitHub issues for a project * POST /list-issues endpoint - List GitHub issues for a project
*/ */
import { spawn } from 'child_process';
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { GitHubIssue, ListIssuesResult } from '@automaker/types';
import { execAsync, execEnv, getErrorMessage, logError } from './common.js'; import { execAsync, execEnv, getErrorMessage, logError } from './common.js';
import { checkGitHubRemote } from './check-github-remote.js'; import { checkGitHubRemote } from './check-github-remote.js';
// Re-export types for convenience export interface GitHubLabel {
export type { GitHubLabel, GitHubAuthor, GitHubIssue, ListIssuesResult } from '@automaker/types'; name: string;
color: string;
}
export interface GitHubAuthor {
login: string;
avatarUrl?: string;
}
export interface GitHubAssignee {
login: string;
avatarUrl?: string;
}
export interface LinkedPullRequest {
number: number;
title: string;
state: string;
url: string;
}
export interface GitHubIssue {
number: number;
title: string;
state: string;
author: GitHubAuthor;
createdAt: string;
labels: GitHubLabel[];
url: string;
body: string;
assignees: GitHubAssignee[];
linkedPRs?: LinkedPullRequest[];
}
export interface ListIssuesResult {
success: boolean;
openIssues?: GitHubIssue[];
closedIssues?: GitHubIssue[];
error?: string;
}
/**
* Fetch linked PRs for a list of issues using GitHub GraphQL API
*/
async function fetchLinkedPRs(
projectPath: string,
owner: string,
repo: string,
issueNumbers: number[]
): Promise<Map<number, LinkedPullRequest[]>> {
const linkedPRsMap = new Map<number, LinkedPullRequest[]>();
if (issueNumbers.length === 0) {
return linkedPRsMap;
}
// Build GraphQL query for batch fetching linked PRs
// We fetch up to 20 issues at a time to avoid query limits
const batchSize = 20;
for (let i = 0; i < issueNumbers.length; i += batchSize) {
const batch = issueNumbers.slice(i, i + batchSize);
const issueQueries = batch
.map(
(num, idx) => `
issue${idx}: issue(number: ${num}) {
number
timelineItems(first: 10, itemTypes: [CROSS_REFERENCED_EVENT, CONNECTED_EVENT]) {
nodes {
... on CrossReferencedEvent {
source {
... on PullRequest {
number
title
state
url
}
}
}
... on ConnectedEvent {
subject {
... on PullRequest {
number
title
state
url
}
}
}
}
}
}`
)
.join('\n');
const query = `{
repository(owner: "${owner}", name: "${repo}") {
${issueQueries}
}
}`;
try {
// Use spawn with stdin to avoid shell injection vulnerabilities
// --input - reads the JSON request body from stdin
const requestBody = JSON.stringify({ query });
const response = await new Promise<Record<string, unknown>>((resolve, reject) => {
const gh = spawn('gh', ['api', 'graphql', '--input', '-'], {
cwd: projectPath,
env: execEnv,
});
let stdout = '';
let stderr = '';
gh.stdout.on('data', (data: Buffer) => (stdout += data.toString()));
gh.stderr.on('data', (data: Buffer) => (stderr += data.toString()));
gh.on('close', (code) => {
if (code !== 0) {
return reject(new Error(`gh process exited with code ${code}: ${stderr}`));
}
try {
resolve(JSON.parse(stdout));
} catch (e) {
reject(e);
}
});
gh.stdin.write(requestBody);
gh.stdin.end();
});
const repoData = (response?.data as Record<string, unknown>)?.repository as Record<
string,
unknown
> | null;
if (repoData) {
batch.forEach((issueNum, idx) => {
const issueData = repoData[`issue${idx}`] as {
timelineItems?: {
nodes?: Array<{
source?: { number?: number; title?: string; state?: string; url?: string };
subject?: { number?: number; title?: string; state?: string; url?: string };
}>;
};
} | null;
if (issueData?.timelineItems?.nodes) {
const linkedPRs: LinkedPullRequest[] = [];
const seenPRs = new Set<number>();
for (const node of issueData.timelineItems.nodes) {
const pr = node?.source || node?.subject;
if (pr?.number && !seenPRs.has(pr.number)) {
seenPRs.add(pr.number);
linkedPRs.push({
number: pr.number,
title: pr.title || '',
state: (pr.state || '').toLowerCase(),
url: pr.url || '',
});
}
}
if (linkedPRs.length > 0) {
linkedPRsMap.set(issueNum, linkedPRs);
}
}
});
}
} catch (error) {
// If GraphQL fails, continue without linked PRs
console.warn(
'Failed to fetch linked PRs via GraphQL:',
error instanceof Error ? error.message : error
);
}
}
return linkedPRsMap;
}
export function createListIssuesHandler() { export function createListIssuesHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
@@ -30,17 +209,17 @@ export function createListIssuesHandler() {
return; return;
} }
// Fetch open and closed issues in parallel // Fetch open and closed issues in parallel (now including assignees)
const [openResult, closedResult] = await Promise.all([ const [openResult, closedResult] = await Promise.all([
execAsync( execAsync(
'gh issue list --state open --json number,title,state,author,createdAt,labels,url,body --limit 100', 'gh issue list --state open --json number,title,state,author,createdAt,labels,url,body,assignees --limit 100',
{ {
cwd: projectPath, cwd: projectPath,
env: execEnv, env: execEnv,
} }
), ),
execAsync( execAsync(
'gh issue list --state closed --json number,title,state,author,createdAt,labels,url,body --limit 50', 'gh issue list --state closed --json number,title,state,author,createdAt,labels,url,body,assignees --limit 50',
{ {
cwd: projectPath, cwd: projectPath,
env: execEnv, env: execEnv,
@@ -54,6 +233,24 @@ export function createListIssuesHandler() {
const openIssues: GitHubIssue[] = JSON.parse(openStdout || '[]'); const openIssues: GitHubIssue[] = JSON.parse(openStdout || '[]');
const closedIssues: GitHubIssue[] = JSON.parse(closedStdout || '[]'); const closedIssues: GitHubIssue[] = JSON.parse(closedStdout || '[]');
// Fetch linked PRs for open issues (more relevant for active work)
if (remoteStatus.owner && remoteStatus.repo && openIssues.length > 0) {
const linkedPRsMap = await fetchLinkedPRs(
projectPath,
remoteStatus.owner,
remoteStatus.repo,
openIssues.map((i) => i.number)
);
// Attach linked PRs to issues
for (const issue of openIssues) {
const linkedPRs = linkedPRsMap.get(issue.number);
if (linkedPRs) {
issue.linkedPRs = linkedPRs;
}
}
}
res.json({ res.json({
success: true, success: true,
openIssues, openIssues,

View File

@@ -3,12 +3,39 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { GitHubPR, ListPRsResult } from '@automaker/types';
import { execAsync, execEnv, getErrorMessage, logError } from './common.js'; import { execAsync, execEnv, getErrorMessage, logError } from './common.js';
import { checkGitHubRemote } from './check-github-remote.js'; import { checkGitHubRemote } from './check-github-remote.js';
// Re-export types for convenience export interface GitHubLabel {
export type { GitHubLabel, GitHubAuthor, GitHubPR, ListPRsResult } from '@automaker/types'; name: string;
color: string;
}
export interface GitHubAuthor {
login: string;
}
export interface GitHubPR {
number: number;
title: string;
state: string;
author: GitHubAuthor;
createdAt: string;
labels: GitHubLabel[];
url: string;
isDraft: boolean;
headRefName: string;
reviewDecision: string | null;
mergeable: string;
body: string;
}
export interface ListPRsResult {
success: boolean;
openPRs?: GitHubPR[];
mergedPRs?: GitHubPR[];
error?: string;
}
export function createListPRsHandler() { export function createListPRsHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {

View File

@@ -0,0 +1,287 @@
/**
* POST /validate-issue endpoint - Validate a GitHub issue using Claude SDK (async)
*
* Scans the codebase to determine if an issue is valid, invalid, or needs clarification.
* Runs asynchronously and emits events for progress and completion.
*/
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../../lib/events.js';
import type { IssueValidationResult, IssueValidationEvent, AgentModel } from '@automaker/types';
import { createSuggestionsOptions } from '../../../lib/sdk-options.js';
import { writeValidation } from '../../../lib/validation-storage.js';
import {
issueValidationSchema,
ISSUE_VALIDATION_SYSTEM_PROMPT,
buildValidationPrompt,
} from './validation-schema.js';
import {
trySetValidationRunning,
clearValidationStatus,
getErrorMessage,
logError,
logger,
} from './validation-common.js';
/** Valid model values for validation */
const VALID_MODELS: readonly AgentModel[] = ['opus', 'sonnet', 'haiku'] as const;
/**
* Request body for issue validation
*/
interface ValidateIssueRequestBody {
projectPath: string;
issueNumber: number;
issueTitle: string;
issueBody: string;
issueLabels?: string[];
/** Model to use for validation (opus, sonnet, haiku) */
model?: AgentModel;
}
/**
* Run the validation asynchronously
*
* Emits events for start, progress, complete, and error.
* Stores result on completion.
*/
async function runValidation(
projectPath: string,
issueNumber: number,
issueTitle: string,
issueBody: string,
issueLabels: string[] | undefined,
model: AgentModel,
events: EventEmitter,
abortController: AbortController
): Promise<void> {
// Emit start event
const startEvent: IssueValidationEvent = {
type: 'issue_validation_start',
issueNumber,
issueTitle,
projectPath,
};
events.emit('issue-validation:event', startEvent);
// Set up timeout (6 minutes)
const VALIDATION_TIMEOUT_MS = 360000;
const timeoutId = setTimeout(() => {
logger.warn(`Validation timeout reached after ${VALIDATION_TIMEOUT_MS}ms`);
abortController.abort();
}, VALIDATION_TIMEOUT_MS);
try {
// Build the prompt
const prompt = buildValidationPrompt(issueNumber, issueTitle, issueBody, issueLabels);
// Create SDK options with structured output and abort controller
const options = createSuggestionsOptions({
cwd: projectPath,
model,
systemPrompt: ISSUE_VALIDATION_SYSTEM_PROMPT,
abortController,
outputFormat: {
type: 'json_schema',
schema: issueValidationSchema as Record<string, unknown>,
},
});
// Execute the query
const stream = query({ prompt, options });
let validationResult: IssueValidationResult | null = null;
let responseText = '';
for await (const msg of stream) {
// Collect assistant text for debugging and emit progress
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
// Emit progress event
const progressEvent: IssueValidationEvent = {
type: 'issue_validation_progress',
issueNumber,
content: block.text,
projectPath,
};
events.emit('issue-validation:event', progressEvent);
}
}
}
// Extract structured output on success
if (msg.type === 'result' && msg.subtype === 'success') {
const resultMsg = msg as { structured_output?: IssueValidationResult };
if (resultMsg.structured_output) {
validationResult = resultMsg.structured_output;
logger.debug('Received structured output:', validationResult);
}
}
// Handle errors
if (msg.type === 'result') {
const resultMsg = msg as { subtype?: string };
if (resultMsg.subtype === 'error_max_structured_output_retries') {
logger.error('Failed to produce valid structured output after retries');
throw new Error('Could not produce valid validation output');
}
}
}
// Clear timeout
clearTimeout(timeoutId);
// Require structured output
if (!validationResult) {
logger.error('No structured output received from Claude SDK');
logger.debug('Raw response text:', responseText);
throw new Error('Validation failed: no structured output received');
}
logger.info(`Issue #${issueNumber} validation complete: ${validationResult.verdict}`);
// Store the result
await writeValidation(projectPath, issueNumber, {
issueNumber,
issueTitle,
validatedAt: new Date().toISOString(),
model,
result: validationResult,
});
// Emit completion event
const completeEvent: IssueValidationEvent = {
type: 'issue_validation_complete',
issueNumber,
issueTitle,
result: validationResult,
projectPath,
model,
};
events.emit('issue-validation:event', completeEvent);
} catch (error) {
clearTimeout(timeoutId);
const errorMessage = getErrorMessage(error);
logError(error, `Issue #${issueNumber} validation failed`);
// Emit error event
const errorEvent: IssueValidationEvent = {
type: 'issue_validation_error',
issueNumber,
error: errorMessage,
projectPath,
};
events.emit('issue-validation:event', errorEvent);
throw error;
}
}
/**
* Creates the handler for validating GitHub issues against the codebase.
*
* Uses Claude SDK with:
* - Read-only tools (Read, Glob, Grep) for codebase analysis
* - JSON schema structured output for reliable parsing
* - System prompt guiding the validation process
* - Async execution with event emission
*/
export function createValidateIssueHandler(events: EventEmitter) {
return async (req: Request, res: Response): Promise<void> => {
try {
const {
projectPath,
issueNumber,
issueTitle,
issueBody,
issueLabels,
model = 'opus',
} = req.body as ValidateIssueRequestBody;
// Validate required fields
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!issueNumber || typeof issueNumber !== 'number') {
res
.status(400)
.json({ success: false, error: 'issueNumber is required and must be a number' });
return;
}
if (!issueTitle || typeof issueTitle !== 'string') {
res.status(400).json({ success: false, error: 'issueTitle is required' });
return;
}
if (typeof issueBody !== 'string') {
res.status(400).json({ success: false, error: 'issueBody must be a string' });
return;
}
// Validate model parameter at runtime
if (!VALID_MODELS.includes(model)) {
res.status(400).json({
success: false,
error: `Invalid model. Must be one of: ${VALID_MODELS.join(', ')}`,
});
return;
}
logger.info(`Starting async validation for issue #${issueNumber}: ${issueTitle}`);
// Create abort controller and atomically try to claim validation slot
// This prevents TOCTOU race conditions
const abortController = new AbortController();
if (!trySetValidationRunning(projectPath, issueNumber, abortController)) {
res.json({
success: false,
error: `Validation is already running for issue #${issueNumber}`,
});
return;
}
// Start validation in background (fire-and-forget)
runValidation(
projectPath,
issueNumber,
issueTitle,
issueBody,
issueLabels,
model,
events,
abortController
)
.catch((error) => {
// Error is already handled inside runValidation (event emitted)
logger.debug('Validation error caught in background handler:', error);
})
.finally(() => {
clearValidationStatus(projectPath, issueNumber);
});
// Return immediately
res.json({
success: true,
message: `Validation started for issue #${issueNumber}`,
issueNumber,
});
} catch (error) {
logError(error, `Issue validation failed`);
logger.error('Issue validation error:', error);
if (!res.headersSent) {
res.status(500).json({
success: false,
error: getErrorMessage(error),
});
}
}
};
}
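For orientation, a minimal client-side sketch of calling this fire-and-forget endpoint; the mount path and the way the issue_validation_* events reach the UI are assumptions, not taken from this diff.

// Hypothetical mount path; adjust to wherever createValidateIssueHandler is registered.
async function startIssueValidation(
  projectPath: string,
  issueNumber: number,
  issueTitle: string,
  issueBody: string
): Promise<void> {
  const response = await fetch('/api/github-issues/validate-issue', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, issueNumber, issueTitle, issueBody, model: 'sonnet' }),
  });
  const data = await response.json();
  if (!data.success) {
    // A second request for the same issue while one is running returns success: false.
    console.warn(data.error);
    return;
  }
  // Progress and completion arrive later via the emitted issue-validation:event stream.
  console.log(data.message);
}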

View File

@@ -0,0 +1,174 @@
/**
* Common utilities and state for issue validation routes
*
* Tracks running validation status per issue to support:
* - Checking if a validation is in progress
* - Cancelling a running validation
* - Preventing duplicate validations for the same issue
*/
import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../../common.js';
const logger = createLogger('IssueValidation');
/**
* Status of a validation in progress
*/
interface ValidationStatus {
isRunning: boolean;
abortController: AbortController;
startedAt: Date;
}
/**
* Map of issue number to validation status
* Key format: `${projectPath}||${issueNumber}` to support multiple projects
* Note: Using `||` as delimiter since `:` appears in Windows paths (e.g., C:\)
*/
const validationStatusMap = new Map<string, ValidationStatus>();
/** Maximum age for stale validation entries before cleanup (1 hour) */
const MAX_VALIDATION_AGE_MS = 60 * 60 * 1000;
/**
* Create a unique key for a validation
* Uses `||` as delimiter since `:` appears in Windows paths
*/
function getValidationKey(projectPath: string, issueNumber: number): string {
return `${projectPath}||${issueNumber}`;
}
/**
* Check if a validation is currently running for an issue
*/
export function isValidationRunning(projectPath: string, issueNumber: number): boolean {
const key = getValidationKey(projectPath, issueNumber);
const status = validationStatusMap.get(key);
return status?.isRunning ?? false;
}
/**
* Get validation status for an issue
*/
export function getValidationStatus(
projectPath: string,
issueNumber: number
): { isRunning: boolean; startedAt?: Date } | null {
const key = getValidationKey(projectPath, issueNumber);
const status = validationStatusMap.get(key);
if (!status) {
return null;
}
return {
isRunning: status.isRunning,
startedAt: status.startedAt,
};
}
/**
* Get all running validations for a project
*/
export function getRunningValidations(projectPath: string): number[] {
const runningIssues: number[] = [];
const prefix = `${projectPath}||`;
for (const [key, status] of validationStatusMap.entries()) {
if (status.isRunning && key.startsWith(prefix)) {
const issueNumber = parseInt(key.slice(prefix.length), 10);
if (!isNaN(issueNumber)) {
runningIssues.push(issueNumber);
}
}
}
return runningIssues;
}
/**
* Set a validation as running
*/
export function setValidationRunning(
projectPath: string,
issueNumber: number,
abortController: AbortController
): void {
const key = getValidationKey(projectPath, issueNumber);
validationStatusMap.set(key, {
isRunning: true,
abortController,
startedAt: new Date(),
});
}
/**
* Atomically try to set a validation as running (check-and-set)
* Prevents TOCTOU race conditions when starting validations
*
* @returns true if successfully claimed, false if already running
*/
export function trySetValidationRunning(
projectPath: string,
issueNumber: number,
abortController: AbortController
): boolean {
const key = getValidationKey(projectPath, issueNumber);
if (validationStatusMap.has(key)) {
return false; // Already running
}
validationStatusMap.set(key, {
isRunning: true,
abortController,
startedAt: new Date(),
});
return true; // Successfully claimed
}
/**
* Cleanup stale validation entries (e.g., from crashed validations)
* Should be called periodically to prevent memory leaks
*/
export function cleanupStaleValidations(): number {
const now = Date.now();
let cleanedCount = 0;
for (const [key, status] of validationStatusMap.entries()) {
if (now - status.startedAt.getTime() > MAX_VALIDATION_AGE_MS) {
status.abortController.abort();
validationStatusMap.delete(key);
cleanedCount++;
}
}
if (cleanedCount > 0) {
logger.info(`Cleaned up ${cleanedCount} stale validation entries`);
}
return cleanedCount;
}
/**
* Clear validation status (call when validation completes or errors)
*/
export function clearValidationStatus(projectPath: string, issueNumber: number): void {
const key = getValidationKey(projectPath, issueNumber);
validationStatusMap.delete(key);
}
/**
* Abort a running validation
*
* @returns true if validation was aborted, false if not running
*/
export function abortValidation(projectPath: string, issueNumber: number): boolean {
const key = getValidationKey(projectPath, issueNumber);
const status = validationStatusMap.get(key);
if (!status || !status.isRunning) {
return false;
}
status.abortController.abort();
validationStatusMap.delete(key);
return true;
}
// Re-export shared utilities
export { getErrorMessageShared as getErrorMessage };
export const logError = createLogError(logger);
export { logger };
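cleanupStaleValidations is documented as something to call periodically, but no scheduler appears in this diff; a sketch of one way a server startup path might wire it up (the interval length is an assumption).

import { cleanupStaleValidations, abortValidation } from './validation-common.js';

// Hypothetical sweep: abort and drop entries older than MAX_VALIDATION_AGE_MS every 10 minutes.
const STALE_SWEEP_INTERVAL_MS = 10 * 60 * 1000;
setInterval(() => {
  cleanupStaleValidations();
}, STALE_SWEEP_INTERVAL_MS);

// Cancelling one specific run, e.g. from a stop endpoint:
const stopped = abortValidation('/path/to/project', 123);
if (stopped) {
  // true only if that validation was in flight; the entry has been removed.
}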

View File

@@ -0,0 +1,236 @@
/**
* Additional validation endpoints for status, stop, and retrieving stored validations
*/
import type { Request, Response } from 'express';
import type { EventEmitter } from '../../../lib/events.js';
import type { IssueValidationEvent } from '@automaker/types';
import {
isValidationRunning,
getValidationStatus,
getRunningValidations,
abortValidation,
getErrorMessage,
logError,
logger,
} from './validation-common.js';
import {
readValidation,
getAllValidations,
getValidationWithFreshness,
deleteValidation,
markValidationViewed,
} from '../../../lib/validation-storage.js';
/**
* POST /validation-status - Check if validation is running for an issue
*/
export function createValidationStatusHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, issueNumber } = req.body as {
projectPath: string;
issueNumber?: number;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// If issueNumber provided, check specific issue
if (issueNumber !== undefined) {
const status = getValidationStatus(projectPath, issueNumber);
res.json({
success: true,
isRunning: status?.isRunning ?? false,
startedAt: status?.startedAt?.toISOString(),
});
return;
}
// Otherwise, return all running validations for the project
const runningIssues = getRunningValidations(projectPath);
res.json({
success: true,
runningIssues,
});
} catch (error) {
logError(error, 'Validation status check failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
/**
* POST /validation-stop - Cancel a running validation
*/
export function createValidationStopHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, issueNumber } = req.body as {
projectPath: string;
issueNumber: number;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!issueNumber || typeof issueNumber !== 'number') {
res
.status(400)
.json({ success: false, error: 'issueNumber is required and must be a number' });
return;
}
const wasAborted = abortValidation(projectPath, issueNumber);
if (wasAborted) {
logger.info(`Validation for issue #${issueNumber} was stopped`);
res.json({
success: true,
message: `Validation for issue #${issueNumber} has been stopped`,
});
} else {
res.json({
success: false,
error: `No validation is running for issue #${issueNumber}`,
});
}
} catch (error) {
logError(error, 'Validation stop failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
/**
* POST /validations - Get stored validations for a project
*/
export function createGetValidationsHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, issueNumber } = req.body as {
projectPath: string;
issueNumber?: number;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
// If issueNumber provided, get specific validation with freshness info
if (issueNumber !== undefined) {
const result = await getValidationWithFreshness(projectPath, issueNumber);
if (!result) {
res.json({
success: true,
validation: null,
});
return;
}
res.json({
success: true,
validation: result.validation,
isStale: result.isStale,
});
return;
}
// Otherwise, get all validations for the project
const validations = await getAllValidations(projectPath);
res.json({
success: true,
validations,
});
} catch (error) {
logError(error, 'Get validations failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
/**
* POST /validation-delete - Delete a stored validation
*/
export function createDeleteValidationHandler() {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, issueNumber } = req.body as {
projectPath: string;
issueNumber: number;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!issueNumber || typeof issueNumber !== 'number') {
res
.status(400)
.json({ success: false, error: 'issueNumber is required and must be a number' });
return;
}
const deleted = await deleteValidation(projectPath, issueNumber);
res.json({
success: true,
deleted,
});
} catch (error) {
logError(error, 'Delete validation failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
/**
* POST /validation-mark-viewed - Mark a validation as viewed by the user
*/
export function createMarkViewedHandler(events: EventEmitter) {
return async (req: Request, res: Response): Promise<void> => {
try {
const { projectPath, issueNumber } = req.body as {
projectPath: string;
issueNumber: number;
};
if (!projectPath) {
res.status(400).json({ success: false, error: 'projectPath is required' });
return;
}
if (!issueNumber || typeof issueNumber !== 'number') {
res
.status(400)
.json({ success: false, error: 'issueNumber is required and must be a number' });
return;
}
const success = await markValidationViewed(projectPath, issueNumber);
if (success) {
// Emit event so UI can update the unviewed count
const viewedEvent: IssueValidationEvent = {
type: 'issue_validation_viewed',
issueNumber,
projectPath,
};
events.emit('issue-validation:event', viewedEvent);
}
res.json({ success });
} catch (error) {
logError(error, 'Mark validation viewed failed');
res.status(500).json({ success: false, error: getErrorMessage(error) });
}
};
}
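For reference, a sketch of how a client might poll the status endpoint; the mount prefix is an assumption.

// Hypothetical mount path for createValidationStatusHandler.
async function checkValidationStatus(projectPath: string, issueNumber?: number) {
  const response = await fetch('/api/github-issues/validation-status', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, issueNumber }),
  });
  // With issueNumber:    { success, isRunning, startedAt? }
  // Without issueNumber: { success, runningIssues: number[] }
  return response.json();
}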

View File

@@ -0,0 +1,138 @@
/**
* Issue Validation Schema and System Prompt
*
* Defines the JSON schema for Claude's structured output and
* the system prompt that guides the validation process.
*/
/**
* JSON Schema for issue validation structured output.
* Used with Claude SDK's outputFormat option to ensure reliable parsing.
*/
export const issueValidationSchema = {
type: 'object',
properties: {
verdict: {
type: 'string',
enum: ['valid', 'invalid', 'needs_clarification'],
description: 'The validation verdict for the issue',
},
confidence: {
type: 'string',
enum: ['high', 'medium', 'low'],
description: 'How confident the AI is in its assessment',
},
reasoning: {
type: 'string',
description: 'Detailed explanation of the verdict',
},
bugConfirmed: {
type: 'boolean',
description: 'For bug reports: whether the bug was confirmed in the codebase',
},
relatedFiles: {
type: 'array',
items: { type: 'string' },
description: 'Files related to the issue found during analysis',
},
suggestedFix: {
type: 'string',
description: 'Suggested approach to fix or implement the issue',
},
missingInfo: {
type: 'array',
items: { type: 'string' },
description: 'Information needed when verdict is needs_clarification',
},
estimatedComplexity: {
type: 'string',
enum: ['trivial', 'simple', 'moderate', 'complex', 'very_complex'],
description: 'Estimated effort to address the issue',
},
},
required: ['verdict', 'confidence', 'reasoning'],
additionalProperties: false,
} as const;
/**
* System prompt that guides Claude in validating GitHub issues.
* Instructs the model to use read-only tools to analyze the codebase.
*/
export const ISSUE_VALIDATION_SYSTEM_PROMPT = `You are an expert code analyst validating GitHub issues against a codebase.
Your task is to analyze a GitHub issue and determine if it's valid by scanning the codebase.
## Validation Process
1. **Read the issue carefully** - Understand what is being reported or requested
2. **Search the codebase** - Use Glob to find relevant files by pattern, Grep to search for keywords
3. **Examine the code** - Use Read to look at the actual implementation in relevant files
4. **Form your verdict** - Based on your analysis, determine if the issue is valid
## Verdicts
- **valid**: The issue describes a real problem that exists in the codebase, or a clear feature request that can be implemented. The referenced files/components exist and the issue is actionable.
- **invalid**: The issue describes behavior that doesn't exist, references non-existent files or components, is based on a misunderstanding of the code, or the described "bug" is actually expected behavior.
- **needs_clarification**: The issue lacks sufficient detail to verify. Specify what additional information is needed in the missingInfo field.
## For Bug Reports, Check:
- Do the referenced files/components exist?
- Does the code match what the issue describes?
- Is the described behavior actually a bug or expected?
- Can you locate the code that would cause the reported issue?
## For Feature Requests, Check:
- Does the feature already exist?
- Is the implementation location clear?
- Is the request technically feasible given the codebase structure?
## Response Guidelines
- **Always include relatedFiles** when you find relevant code
- **Set bugConfirmed to true** only if you can definitively confirm a bug exists in the code
- **Provide a suggestedFix** when you have a clear idea of how to address the issue
- **Use missingInfo** when the verdict is needs_clarification to list what's needed
- **Set estimatedComplexity** to help prioritize:
- trivial: Simple text changes, one-line fixes
- simple: Small changes to one file
- moderate: Changes to multiple files or moderate logic changes
- complex: Significant refactoring or new feature implementation
- very_complex: Major architectural changes or cross-cutting concerns
Be thorough in your analysis but focus on files that are directly relevant to the issue.`;
/**
* Build the user prompt for issue validation.
*
* Creates a structured prompt that includes the issue details for Claude
* to analyze against the codebase.
*
* @param issueNumber - The GitHub issue number
* @param issueTitle - The issue title
* @param issueBody - The issue body/description
* @param issueLabels - Optional array of label names
* @returns Formatted prompt string for the validation request
*/
export function buildValidationPrompt(
issueNumber: number,
issueTitle: string,
issueBody: string,
issueLabels?: string[]
): string {
const labelsSection = issueLabels?.length ? `\n\n**Labels:** ${issueLabels.join(', ')}` : '';
return `Please validate the following GitHub issue by analyzing the codebase:
## Issue #${issueNumber}: ${issueTitle}
${labelsSection}
### Description
${issueBody || '(No description provided)'}
---
Scan the codebase to verify this issue. Look for the files, components, or functionality mentioned. Determine if this issue is valid, invalid, or needs clarification.`;
}
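To make the schema concrete, a hand-written example of a structured result the model could return for a confirmed bug; every value here is illustrative, including the file path.

const exampleValidationResult = {
  verdict: 'valid',
  confidence: 'high',
  reasoning: 'The handler destructures req.body without a null check, matching the crash described in the issue.',
  bugConfirmed: true,
  relatedFiles: ['src/routes/example-handler.ts'],
  suggestedFix: 'Guard against a missing request body before destructuring.',
  estimatedComplexity: 'simple',
} as const; // covers the required verdict/confidence/reasoning plus a few optional fields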

View File

@@ -10,7 +10,7 @@
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { SettingsService } from '../../../services/settings-service.js'; import type { SettingsService } from '../../../services/settings-service.js';
import { getErrorMessage, logError } from '../common.js'; import { logError } from '../common.js';
/** /**
* Create handler factory for GET /api/settings/credentials * Create handler factory for GET /api/settings/credentials
@@ -29,7 +29,7 @@ export function createGetCredentialsHandler(settingsService: SettingsService) {
}); });
} catch (error) { } catch (error) {
logError(error, 'Get credentials failed'); logError(error, 'Get credentials failed');
res.status(500).json({ success: false, error: getErrorMessage(error) }); res.status(500).json({ success: false, error: 'Failed to retrieve credentials' });
} }
}; };
} }

View File

@@ -11,7 +11,71 @@
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import type { SettingsService } from '../../../services/settings-service.js'; import type { SettingsService } from '../../../services/settings-service.js';
import type { Credentials } from '../../../types/settings.js'; import type { Credentials } from '../../../types/settings.js';
import { getErrorMessage, logError } from '../common.js'; import { logError } from '../common.js';
/** Maximum allowed length for API keys to prevent abuse */
const MAX_API_KEY_LENGTH = 512;
/** Known API key provider names that are valid */
const VALID_API_KEY_PROVIDERS = ['anthropic', 'google', 'openai'] as const;
/**
* Validates that the provided updates object has the correct structure
* and all apiKeys values are strings within acceptable length limits.
*
* @param updates - The partial credentials update object to validate
* @returns An error message if validation fails, or null if valid
*/
function validateCredentialsUpdate(updates: unknown): string | null {
if (!updates || typeof updates !== 'object' || Array.isArray(updates)) {
return 'Invalid request body - expected credentials object';
}
const obj = updates as Record<string, unknown>;
// If apiKeys is provided, validate its structure
if ('apiKeys' in obj) {
const apiKeys = obj.apiKeys;
if (apiKeys === null || apiKeys === undefined) {
// Allow null/undefined to clear
return null;
}
if (typeof apiKeys !== 'object' || Array.isArray(apiKeys)) {
return 'Invalid apiKeys - expected object';
}
const keysObj = apiKeys as Record<string, unknown>;
// Validate each provided API key
for (const [provider, value] of Object.entries(keysObj)) {
// Check provider name is valid
if (!VALID_API_KEY_PROVIDERS.includes(provider as (typeof VALID_API_KEY_PROVIDERS)[number])) {
return `Invalid API key provider: ${provider}. Valid providers: ${VALID_API_KEY_PROVIDERS.join(', ')}`;
}
// Check value is a string
if (typeof value !== 'string') {
return `Invalid API key for ${provider} - expected string`;
}
// Check length limit
if (value.length > MAX_API_KEY_LENGTH) {
return `API key for ${provider} exceeds maximum length of ${MAX_API_KEY_LENGTH} characters`;
}
}
}
// Validate version if provided
if ('version' in obj && obj.version !== undefined) {
if (typeof obj.version !== 'number' || !Number.isInteger(obj.version) || obj.version < 0) {
return 'Invalid version - expected non-negative integer';
}
}
return null;
}
/** /**
* Create handler factory for PUT /api/settings/credentials * Create handler factory for PUT /api/settings/credentials
@@ -22,16 +86,19 @@ import { getErrorMessage, logError } from '../common.js';
export function createUpdateCredentialsHandler(settingsService: SettingsService) { export function createUpdateCredentialsHandler(settingsService: SettingsService) {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
try { try {
const updates = req.body as Partial<Credentials>; // Validate the request body before type assertion
const validationError = validateCredentialsUpdate(req.body);
if (!updates || typeof updates !== 'object') { if (validationError) {
res.status(400).json({ res.status(400).json({
success: false, success: false,
error: 'Invalid request body - expected credentials object', error: validationError,
}); });
return; return;
} }
// Safe to cast after validation
const updates = req.body as Partial<Credentials>;
await settingsService.updateCredentials(updates); await settingsService.updateCredentials(updates);
// Return masked credentials for confirmation // Return masked credentials for confirmation
@@ -43,7 +110,7 @@ export function createUpdateCredentialsHandler(settingsService: SettingsService)
}); });
} catch (error) { } catch (error) {
logError(error, 'Update credentials failed'); logError(error, 'Update credentials failed');
res.status(500).json({ success: false, error: getErrorMessage(error) }); res.status(500).json({ success: false, error: 'Failed to update credentials' });
} }
}; };
} }
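A few illustrative calls to validateCredentialsUpdate (from within the module, since the function is not exported); the key strings are placeholders.

validateCredentialsUpdate({ apiKeys: { anthropic: 'sk-ant-placeholder' } }); // null (valid)
validateCredentialsUpdate({ apiKeys: { acme: 'x' } });                       // "Invalid API key provider: acme. ..."
validateCredentialsUpdate({ apiKeys: { openai: 'x'.repeat(600) } });         // exceeds MAX_API_KEY_LENGTH (512) error
validateCredentialsUpdate({ apiKeys: ['not', 'an', 'object'] });             // "Invalid apiKeys - expected object"
validateCredentialsUpdate({ version: -1 });                                  // "Invalid version - expected non-negative integer"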

View File

@@ -9,6 +9,17 @@ import { getErrorMessage as getErrorMessageShared, createLogError } from '../com
const logger = createLogger('Setup'); const logger = createLogger('Setup');
/**
* Escapes special regex characters in a string to prevent regex injection.
* This ensures user input can be safely used in RegExp constructors.
*
* @param str - The string to escape
* @returns The escaped string safe for use in RegExp
*/
export function escapeRegExp(str: string): string {
return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
// Storage for API keys (in-memory cache) - private // Storage for API keys (in-memory cache) - private
const apiKeys: Record<string, string> = {}; const apiKeys: Record<string, string> = {};
@@ -33,6 +44,32 @@ export function getAllApiKeys(): Record<string, string> {
return { ...apiKeys }; return { ...apiKeys };
} }
/**
* Escape a value for safe inclusion in a .env file.
* Handles special characters like quotes, newlines, dollar signs, and backslashes.
* Returns a properly quoted string if needed.
*/
function escapeEnvValue(value: string): string {
// Check if the value contains any characters that require quoting
const requiresQuoting = /[\s"'$`\\#\n\r]/.test(value) || value.includes('=');
if (!requiresQuoting) {
return value;
}
// Use double quotes and escape special characters within
// Escape backslashes first to avoid double-escaping
let escaped = value
.replace(/\\/g, '\\\\') // Escape backslashes
.replace(/"/g, '\\"') // Escape double quotes
.replace(/\$/g, '\\$') // Escape dollar signs (prevents variable expansion)
.replace(/`/g, '\\`') // Escape backticks
.replace(/\n/g, '\\n') // Escape newlines
.replace(/\r/g, '\\r'); // Escape carriage returns
return `"${escaped}"`;
}
/** /**
* Helper to persist API keys to .env file * Helper to persist API keys to .env file
*/ */
@@ -47,21 +84,24 @@ export async function persistApiKeyToEnv(key: string, value: string): Promise<vo
// .env file doesn't exist, we'll create it // .env file doesn't exist, we'll create it
} }
// Parse existing env content // Escape the value for safe .env file storage
const escapedValue = escapeEnvValue(value);
// Parse existing env content - match key with optional quoted values
const lines = envContent.split('\n'); const lines = envContent.split('\n');
const keyRegex = new RegExp(`^${key}=`); const keyRegex = new RegExp(`^${escapeRegExp(key)}=`);
let found = false; let found = false;
const newLines = lines.map((line) => { const newLines = lines.map((line) => {
if (keyRegex.test(line)) { if (keyRegex.test(line)) {
found = true; found = true;
return `${key}=${value}`; return `${key}=${escapedValue}`;
} }
return line; return line;
}); });
if (!found) { if (!found) {
// Add the key at the end // Add the key at the end
newLines.push(`${key}=${value}`); newLines.push(`${key}=${escapedValue}`);
} }
await fs.writeFile(envPath, newLines.join('\n')); await fs.writeFile(envPath, newLines.join('\n'));
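How the .env escaping behaves for a few representative inputs (the values are made up; the outputs simply follow escapeEnvValue's own rules):

escapeEnvValue('plainvalue123');   // plainvalue123        (nothing to quote)
escapeEnvValue('has spaces');      // "has spaces"
escapeEnvValue('pa$$word');        // "pa\$\$word"          (dollar signs escaped to prevent expansion)
escapeEnvValue('say "hi"');        // "say \"hi\""
escapeEnvValue('line1\nline2');    // "line1\nline2"        (newline kept as an escape on a single line)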

View File

@@ -3,7 +3,7 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { getApiKey, getErrorMessage, logError } from '../common.js'; import { getApiKey, logError } from '../common.js';
export function createApiKeysHandler() { export function createApiKeysHandler() {
return async (_req: Request, res: Response): Promise<void> => { return async (_req: Request, res: Response): Promise<void> => {
@@ -14,7 +14,7 @@ export function createApiKeysHandler() {
}); });
} catch (error) { } catch (error) {
logError(error, 'Get API keys failed'); logError(error, 'Get API keys failed');
res.status(500).json({ success: false, error: getErrorMessage(error) }); res.status(500).json({ success: false, error: 'Failed to retrieve API keys status' });
} }
}; };
} }

View File

@@ -11,7 +11,7 @@ const logger = createLogger('Setup');
// In-memory storage reference (imported from common.ts pattern) // In-memory storage reference (imported from common.ts pattern)
// We need to modify common.ts to export a deleteApiKey function // We need to modify common.ts to export a deleteApiKey function
import { setApiKey } from '../common.js'; import { setApiKey, escapeRegExp } from '../common.js';
/** /**
* Remove an API key from the .env file * Remove an API key from the .env file
@@ -30,7 +30,7 @@ async function removeApiKeyFromEnv(key: string): Promise<void> {
// Parse existing env content and remove the key // Parse existing env content and remove the key
const lines = envContent.split('\n'); const lines = envContent.split('\n');
const keyRegex = new RegExp(`^${key}=`); const keyRegex = new RegExp(`^${escapeRegExp(key)}=`);
const newLines = lines.filter((line) => !keyRegex.test(line)); const newLines = lines.filter((line) => !keyRegex.test(line));
// Remove empty lines at the end // Remove empty lines at the end
@@ -68,9 +68,10 @@ export function createDeleteApiKeyHandler() {
const envKey = envKeyMap[provider]; const envKey = envKeyMap[provider];
if (!envKey) { if (!envKey) {
logger.warn(`[Setup] Unknown provider requested for deletion: ${provider}`);
res.status(400).json({ res.status(400).json({
success: false, success: false,
error: `Unknown provider: ${provider}. Only anthropic is supported.`, error: 'Unknown provider. Only anthropic is supported.',
}); });
return; return;
} }
@@ -94,7 +95,7 @@ export function createDeleteApiKeyHandler() {
logger.error('[Setup] Delete API key error:', error); logger.error('[Setup] Delete API key error:', error);
res.status(500).json({ res.status(500).json({
success: false, success: false,
error: error instanceof Error ? error.message : 'Failed to delete API key', error: 'Failed to delete API key',
}); });
} }
}; };

View File

@@ -3,7 +3,7 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { setApiKey, persistApiKeyToEnv, getErrorMessage, logError } from '../common.js'; import { setApiKey, persistApiKeyToEnv, logError } from '../common.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
const logger = createLogger('Setup'); const logger = createLogger('Setup');
@@ -30,9 +30,10 @@ export function createStoreApiKeyHandler() {
await persistApiKeyToEnv('ANTHROPIC_API_KEY', apiKey); await persistApiKeyToEnv('ANTHROPIC_API_KEY', apiKey);
logger.info('[Setup] Stored API key as ANTHROPIC_API_KEY'); logger.info('[Setup] Stored API key as ANTHROPIC_API_KEY');
} else { } else {
logger.warn(`[Setup] Unsupported provider requested: ${provider}`);
res.status(400).json({ res.status(400).json({
success: false, success: false,
error: `Unsupported provider: ${provider}. Only anthropic is supported.`, error: 'Unsupported provider. Only anthropic is supported.',
}); });
return; return;
} }
@@ -40,7 +41,7 @@ export function createStoreApiKeyHandler() {
res.json({ success: true }); res.json({ success: true });
} catch (error) { } catch (error) {
logError(error, 'Store API key failed'); logError(error, 'Store API key failed');
res.status(500).json({ success: false, error: getErrorMessage(error) }); res.status(500).json({ success: false, error: 'Failed to store API key' });
} }
}; };
} }

View File

@@ -10,6 +10,41 @@ import { getApiKey } from '../common.js';
const logger = createLogger('Setup'); const logger = createLogger('Setup');
/**
* Simple mutex implementation to prevent race conditions when
* modifying process.env during concurrent verification requests.
*
* The Claude Agent SDK reads ANTHROPIC_API_KEY from process.env,
* so we must temporarily modify it for verification. This mutex
* ensures only one verification runs at a time.
*/
class VerificationMutex {
private locked = false;
private queue: Array<() => void> = [];
async acquire(): Promise<void> {
return new Promise((resolve) => {
if (!this.locked) {
this.locked = true;
resolve();
} else {
this.queue.push(resolve);
}
});
}
release(): void {
if (this.queue.length > 0) {
const next = this.queue.shift();
if (next) next();
} else {
this.locked = false;
}
}
}
const verificationMutex = new VerificationMutex();
// Known error patterns that indicate auth failure // Known error patterns that indicate auth failure
const AUTH_ERROR_PATTERNS = [ const AUTH_ERROR_PATTERNS = [
'OAuth token revoked', 'OAuth token revoked',
@@ -68,14 +103,79 @@ function containsAuthError(text: string): boolean {
return AUTH_ERROR_PATTERNS.some((pattern) => lowerText.includes(pattern.toLowerCase())); return AUTH_ERROR_PATTERNS.some((pattern) => lowerText.includes(pattern.toLowerCase()));
} }
/** Valid authentication method values */
const VALID_AUTH_METHODS = ['cli', 'api_key'] as const;
type AuthMethod = (typeof VALID_AUTH_METHODS)[number];
/**
* Validates and extracts the authMethod from the request body.
*
* @param body - The request body to validate
* @returns The validated authMethod or undefined if not provided
* @throws Error if authMethod is provided but invalid
*/
function validateAuthMethod(body: unknown): AuthMethod | undefined {
if (!body || typeof body !== 'object') {
return undefined;
}
const obj = body as Record<string, unknown>;
if (!('authMethod' in obj) || obj.authMethod === undefined || obj.authMethod === null) {
return undefined;
}
const authMethod = obj.authMethod;
if (typeof authMethod !== 'string') {
throw new Error(`Invalid authMethod type: expected string, got ${typeof authMethod}`);
}
if (!VALID_AUTH_METHODS.includes(authMethod as AuthMethod)) {
throw new Error(
`Invalid authMethod value: '${authMethod}'. Valid values: ${VALID_AUTH_METHODS.join(', ')}`
);
}
return authMethod as AuthMethod;
}
export function createVerifyClaudeAuthHandler() { export function createVerifyClaudeAuthHandler() {
return async (req: Request, res: Response): Promise<void> => { return async (req: Request, res: Response): Promise<void> => {
try { try {
// Get the auth method from the request body // Validate and extract the auth method from the request body
const { authMethod } = req.body as { authMethod?: 'cli' | 'api_key' }; let authMethod: AuthMethod | undefined;
try {
authMethod = validateAuthMethod(req.body);
} catch (validationError) {
res.status(400).json({
success: false,
authenticated: false,
error: validationError instanceof Error ? validationError.message : 'Invalid request',
});
return;
}
logger.info(`[Setup] Verifying Claude authentication using method: ${authMethod || 'auto'}`); logger.info(`[Setup] Verifying Claude authentication using method: ${authMethod || 'auto'}`);
// Early validation before acquiring mutex - check if API key is needed but missing
if (authMethod === 'api_key') {
const storedApiKey = getApiKey('anthropic');
if (!storedApiKey && !process.env.ANTHROPIC_API_KEY) {
res.json({
success: true,
authenticated: false,
error: 'No API key configured. Please enter an API key first.',
});
return;
}
}
// Acquire mutex to prevent race conditions when modifying process.env
// The SDK reads ANTHROPIC_API_KEY from environment, so concurrent requests
// could interfere with each other without this lock
await verificationMutex.acquire();
// Create an AbortController with a 30-second timeout // Create an AbortController with a 30-second timeout
const abortController = new AbortController(); const abortController = new AbortController();
const timeoutId = setTimeout(() => abortController.abort(), 30000); const timeoutId = setTimeout(() => abortController.abort(), 30000);
@@ -84,7 +184,7 @@ export function createVerifyClaudeAuthHandler() {
let errorMessage = ''; let errorMessage = '';
let receivedAnyContent = false; let receivedAnyContent = false;
// Save original env values // Save original env values (inside mutex to ensure consistency)
const originalAnthropicKey = process.env.ANTHROPIC_API_KEY; const originalAnthropicKey = process.env.ANTHROPIC_API_KEY;
try { try {
@@ -99,17 +199,8 @@ export function createVerifyClaudeAuthHandler() {
if (storedApiKey) { if (storedApiKey) {
process.env.ANTHROPIC_API_KEY = storedApiKey; process.env.ANTHROPIC_API_KEY = storedApiKey;
logger.info('[Setup] Using stored API key for verification'); logger.info('[Setup] Using stored API key for verification');
} else {
// Check env var
if (!process.env.ANTHROPIC_API_KEY) {
res.json({
success: true,
authenticated: false,
error: 'No API key configured. Please enter an API key first.',
});
return;
}
} }
// Note: if no stored key, we use the existing env var (already validated above)
} }
// Run a minimal query to verify authentication // Run a minimal query to verify authentication
@@ -129,7 +220,8 @@ export function createVerifyClaudeAuthHandler() {
for await (const msg of stream) { for await (const msg of stream) {
const msgStr = JSON.stringify(msg); const msgStr = JSON.stringify(msg);
allMessages.push(msgStr); allMessages.push(msgStr);
logger.info('[Setup] Stream message:', msgStr.substring(0, 500)); // Debug log only message type to avoid leaking sensitive data
logger.debug('[Setup] Stream message type:', msg.type);
// Check for billing errors FIRST - these should fail verification // Check for billing errors FIRST - these should fail verification
if (isBillingError(msgStr)) { if (isBillingError(msgStr)) {
@@ -221,7 +313,8 @@ export function createVerifyClaudeAuthHandler() {
} else { } else {
// No content received - might be an issue // No content received - might be an issue
logger.warn('[Setup] No content received from stream'); logger.warn('[Setup] No content received from stream');
logger.warn('[Setup] All messages:', allMessages.join('\n')); // Log only message count to avoid leaking sensitive data
logger.warn('[Setup] Total messages received:', allMessages.length);
errorMessage = 'No response received from Claude. Please check your authentication.'; errorMessage = 'No response received from Claude. Please check your authentication.';
} }
} catch (error: unknown) { } catch (error: unknown) {
@@ -277,6 +370,8 @@ export function createVerifyClaudeAuthHandler() {
// If we cleared it and there was no original, keep it cleared // If we cleared it and there was no original, keep it cleared
delete process.env.ANTHROPIC_API_KEY; delete process.env.ANTHROPIC_API_KEY;
} }
// Release the mutex so other verification requests can proceed
verificationMutex.release();
} }
logger.info('[Setup] Verification result:', { logger.info('[Setup] Verification result:', {
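The mutex is only ever used in the acquire/try/finally shape shown above; the pattern in isolation looks like this (withVerificationLock is a hypothetical helper, not part of this file):

async function withVerificationLock<T>(work: () => Promise<T>): Promise<T> {
  await verificationMutex.acquire();
  try {
    // Only one caller at a time may read or mutate process.env.ANTHROPIC_API_KEY here.
    return await work();
  } finally {
    verificationMutex.release(); // always release, even when verification throws
  }
}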

View File

@@ -1,12 +1,11 @@
/** /**
* Business logic for generating suggestions * Business logic for generating suggestions
*
* Uses ClaudeProvider.executeStreamingQuery() for SDK interaction.
*/ */
import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../lib/events.js'; import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { ProviderFactory } from '../../providers/provider-factory.js'; import { createSuggestionsOptions } from '../../lib/sdk-options.js';
const logger = createLogger('Suggestions'); const logger = createLogger('Suggestions');
@@ -64,49 +63,65 @@ For each suggestion, provide:
The response will be automatically formatted as structured JSON.`; The response will be automatically formatted as structured JSON.`;
events.emit('suggestions:event', { // Don't send initial message - let the agent output speak for itself
type: 'suggestions_progress', // The first agent message will be captured as an info entry
content: `Starting ${suggestionType} analysis...\n`,
});
const provider = ProviderFactory.getProviderForModel('haiku'); const options = createSuggestionsOptions({
const result = await provider.executeStreamingQuery({
prompt,
model: 'haiku',
cwd: projectPath, cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController, abortController,
outputFormat: { outputFormat: {
type: 'json_schema', type: 'json_schema',
schema: suggestionsSchema, schema: suggestionsSchema,
}, },
onText: (text) => { });
const stream = query({ prompt, options });
let responseText = '';
let structuredOutput: { suggestions: Array<Record<string, unknown>> } | null = null;
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message.content) {
for (const block of msg.message.content) {
if (block.type === 'text') {
responseText += block.text;
events.emit('suggestions:event', { events.emit('suggestions:event', {
type: 'suggestions_progress', type: 'suggestions_progress',
content: text, content: block.text,
}); });
}, } else if (block.type === 'tool_use') {
onToolUse: (name, input) => {
events.emit('suggestions:event', { events.emit('suggestions:event', {
type: 'suggestions_tool', type: 'suggestions_tool',
tool: name, tool: block.name,
input, input: block.input,
});
},
}); });
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
// Check for structured output
const resultMsg = msg as any;
if (resultMsg.structured_output) {
structuredOutput = resultMsg.structured_output as {
suggestions: Array<Record<string, unknown>>;
};
logger.debug('Received structured output:', structuredOutput);
}
} else if (msg.type === 'result') {
const resultMsg = msg as any;
if (resultMsg.subtype === 'error_max_structured_output_retries') {
logger.error('Failed to produce valid structured output after retries');
throw new Error('Could not produce valid suggestions output');
} else if (resultMsg.subtype === 'error_max_turns') {
logger.error('Hit max turns limit before completing suggestions generation');
logger.warn(`Response text length: ${responseText.length} chars`);
// Still try to parse what we have
}
}
}
// Use structured output if available, otherwise fall back to parsing text // Use structured output if available, otherwise fall back to parsing text
try { try {
const structuredOutput = result.structuredOutput as
| {
suggestions: Array<Record<string, unknown>>;
}
| undefined;
if (structuredOutput && structuredOutput.suggestions) { if (structuredOutput && structuredOutput.suggestions) {
// Use structured output directly // Use structured output directly
logger.debug('Received structured output:', structuredOutput);
events.emit('suggestions:event', { events.emit('suggestions:event', {
type: 'suggestions_complete', type: 'suggestions_complete',
suggestions: structuredOutput.suggestions.map((s: Record<string, unknown>, i: number) => ({ suggestions: structuredOutput.suggestions.map((s: Record<string, unknown>, i: number) => ({
@@ -117,7 +132,7 @@ The response will be automatically formatted as structured JSON.`;
} else { } else {
// Fallback: try to parse from text (for backwards compatibility) // Fallback: try to parse from text (for backwards compatibility)
logger.warn('No structured output received, attempting to parse from text'); logger.warn('No structured output received, attempting to parse from text');
const jsonMatch = result.text.match(/\{[\s\S]*"suggestions"[\s\S]*\}/); const jsonMatch = responseText.match(/\{[\s\S]*"suggestions"[\s\S]*\}/);
if (jsonMatch) { if (jsonMatch) {
const parsed = JSON.parse(jsonMatch[0]); const parsed = JSON.parse(jsonMatch[0]);
events.emit('suggestions:event', { events.emit('suggestions:event', {

View File

@@ -5,7 +5,8 @@
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { spawn } from 'child_process'; import { spawn } from 'child_process';
import path from 'path'; import path from 'path';
import { secureFs, PathNotAllowedError } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import { PathNotAllowedError } from '@automaker/platform';
import { logger, getErrorMessage, logError } from '../common.js'; import { logger, getErrorMessage, logError } from '../common.js';
export function createCloneHandler() { export function createCloneHandler() {

View File

@@ -65,6 +65,18 @@ export function cleanupExpiredTokens(): void {
// Clean up expired tokens every 5 minutes // Clean up expired tokens every 5 minutes
setInterval(cleanupExpiredTokens, 5 * 60 * 1000); setInterval(cleanupExpiredTokens, 5 * 60 * 1000);
/**
* Extract Bearer token from Authorization header
* Returns undefined if header is missing or malformed
*/
export function extractBearerToken(req: Request): string | undefined {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return undefined;
}
return authHeader.slice(7); // Remove 'Bearer ' prefix
}
/** /**
* Validate a terminal session token * Validate a terminal session token
*/ */
@@ -116,8 +128,9 @@ export function terminalAuthMiddleware(req: Request, res: Response, next: NextFu
return; return;
} }
// Check for session token // Extract token from Authorization header only (Bearer token format)
const token = (req.headers['x-terminal-token'] as string) || (req.query.token as string); // Query string tokens are not supported due to security risks (URL logging, referrer leakage)
const token = extractBearerToken(req);
if (!validateTerminalToken(token)) { if (!validateTerminalToken(token)) {
res.status(401).json({ res.status(401).json({
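A quick sketch of what extractBearerToken accepts and rejects; the request stubs below are test doubles, not real Express requests.

import type { Request } from 'express';

const withBearer = { headers: { authorization: 'Bearer abc123' } } as unknown as Request;
extractBearerToken(withBearer); // 'abc123'

const wrongScheme = { headers: { authorization: 'Token abc123' } } as unknown as Request;
extractBearerToken(wrongScheme); // undefined (scheme must be "Bearer ")

const noHeader = { headers: {} } as unknown as Request;
extractBearerToken(noHeader); // undefined (missing Authorization header)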

View File

@@ -1,7 +1,9 @@
/** /**
* POST /auth endpoint - Authenticate with password to get a session token * POST /auth endpoint - Authenticate with password to get a session token
* Includes rate limiting to prevent brute-force attacks.
*/ */
import * as crypto from 'crypto';
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { import {
getTerminalEnabledConfigValue, getTerminalEnabledConfigValue,
@@ -11,6 +13,25 @@ import {
getTokenExpiryMs, getTokenExpiryMs,
getErrorMessage, getErrorMessage,
} from '../common.js'; } from '../common.js';
import { terminalAuthRateLimiter } from '../../../lib/rate-limiter.js';
/**
* Performs a constant-time string comparison to prevent timing attacks.
* Uses crypto.timingSafeEqual with proper buffer handling.
*/
function secureCompare(a: string, b: string): boolean {
const bufferA = Buffer.from(a, 'utf8');
const bufferB = Buffer.from(b, 'utf8');
// timingSafeEqual throws when buffer lengths differ, so that case is handled separately:
// a dummy self-comparison runs first so the mismatched-length path takes roughly the same
// time and the early return does not become a timing side channel.
if (bufferA.length !== bufferB.length) {
crypto.timingSafeEqual(bufferA, bufferA);
return false;
}
return crypto.timingSafeEqual(bufferA, bufferB);
}
export function createAuthHandler() { export function createAuthHandler() {
return (req: Request, res: Response): void => { return (req: Request, res: Response): void => {
@@ -36,9 +57,28 @@ export function createAuthHandler() {
return; return;
} }
const clientIp = terminalAuthRateLimiter.getClientIp(req);
// Check if client is rate limited
if (terminalAuthRateLimiter.isBlocked(clientIp)) {
const retryAfterMs = terminalAuthRateLimiter.getBlockTimeRemaining(clientIp);
const retryAfterSeconds = Math.ceil(retryAfterMs / 1000);
res.setHeader('Retry-After', retryAfterSeconds.toString());
res.status(429).json({
success: false,
error: 'Too many failed authentication attempts. Please try again later.',
retryAfter: retryAfterSeconds,
});
return;
}
const { password } = req.body; const { password } = req.body;
if (!password || password !== terminalPassword) { if (!password || !secureCompare(password, terminalPassword)) {
// Record failed attempt
terminalAuthRateLimiter.recordFailure(clientIp);
res.status(401).json({ res.status(401).json({
success: false, success: false,
error: 'Invalid password', error: 'Invalid password',
@@ -46,6 +86,9 @@ export function createAuthHandler() {
return; return;
} }
// Successful authentication - reset rate limiter for this IP
terminalAuthRateLimiter.reset(clientIp);
// Generate session token // Generate session token
const token = generateToken(); const token = generateToken();
const now = new Date(); const now = new Date();
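Behaviour of secureCompare for a few placeholder inputs; the dummy comparison on the length-mismatch path is what keeps its timing roughly flat.

secureCompare('hunter2', 'hunter2'); // true
secureCompare('hunter2', 'hunter3'); // false (same length, timingSafeEqual does the work)
secureCompare('hunter2', 'hunter');  // false (lengths differ, but a dummy self-comparison still runs)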

View File

@@ -1,18 +1,36 @@
/** /**
* POST /logout endpoint - Invalidate a session token * POST /logout endpoint - Invalidate a session token
*
* Security: Only allows invalidating the token used for authentication.
* This ensures users can only log out their own sessions.
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { deleteToken } from '../common.js'; import { deleteToken, extractBearerToken, validateTerminalToken } from '../common.js';
export function createLogoutHandler() { export function createLogoutHandler() {
return (req: Request, res: Response): void => { return (req: Request, res: Response): void => {
const token = (req.headers['x-terminal-token'] as string) || req.body.token; const token = extractBearerToken(req);
if (token) { if (!token) {
deleteToken(token); res.status(401).json({
success: false,
error: 'Authorization header with Bearer token is required',
});
return;
} }
if (!validateTerminalToken(token)) {
res.status(401).json({
success: false,
error: 'Invalid or expired token',
});
return;
}
// Token is valid and belongs to the requester - safe to invalidate
deleteToken(token);
res.json({ res.json({
success: true, success: true,
}); });
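Client-side, logging out now requires presenting the same Bearer token that the /auth endpoint issued; the URL below is an assumption about where the route is mounted.

async function logoutTerminalSession(sessionToken: string): Promise<void> {
  await fetch('/api/terminal/logout', {
    method: 'POST',
    headers: { Authorization: `Bearer ${sessionToken}` }, // token returned by the earlier /auth call
  });
}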

View File

@@ -3,8 +3,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, getAllowedRootDirectory, getDataDirectory } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { getAllowedRootDirectory, getDataDirectory } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createConfigHandler() { export function createConfigHandler() {

View File

@@ -3,8 +3,9 @@
*/ */
import type { Request, Response } from 'express'; import type { Request, Response } from 'express';
import { secureFs, getAllowedRootDirectory } from '@automaker/platform'; import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path'; import path from 'path';
import { getAllowedRootDirectory } from '@automaker/platform';
import { getErrorMessage, logError } from '../common.js'; import { getErrorMessage, logError } from '../common.js';
export function createDirectoriesHandler() { export function createDirectoriesHandler() {

View File

@@ -3,16 +3,16 @@
*/ */
import { createLogger } from '@automaker/utils'; import { createLogger } from '@automaker/utils';
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js'; import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';
import { execAsync, execEnv, isENOENT } from '../../lib/exec-utils.js';
import { FeatureLoader } from '../../services/feature-loader.js'; import { FeatureLoader } from '../../services/feature-loader.js';
const logger = createLogger('Worktree'); const logger = createLogger('Worktree');
export const execAsync = promisify(exec);
const featureLoader = new FeatureLoader(); const featureLoader = new FeatureLoader();
// Re-export exec utilities for convenience
export { execAsync, execEnv, isENOENT } from '../../lib/exec-utils.js';
// ============================================================================ // ============================================================================
// Constants // Constants
// ============================================================================ // ============================================================================
@@ -20,6 +20,48 @@ export { execAsync, execEnv, isENOENT } from '../../lib/exec-utils.js';
/** Maximum allowed length for git branch names */ /** Maximum allowed length for git branch names */
export const MAX_BRANCH_NAME_LENGTH = 250; export const MAX_BRANCH_NAME_LENGTH = 250;
// ============================================================================
// Extended PATH configuration for Electron apps
// ============================================================================
const pathSeparator = process.platform === 'win32' ? ';' : ':';
const additionalPaths: string[] = [];
if (process.platform === 'win32') {
// Windows paths
if (process.env.LOCALAPPDATA) {
additionalPaths.push(`${process.env.LOCALAPPDATA}\\Programs\\Git\\cmd`);
}
if (process.env.PROGRAMFILES) {
additionalPaths.push(`${process.env.PROGRAMFILES}\\Git\\cmd`);
}
if (process.env['ProgramFiles(x86)']) {
additionalPaths.push(`${process.env['ProgramFiles(x86)']}\\Git\\cmd`);
}
} else {
// Unix/Mac paths
additionalPaths.push(
'/opt/homebrew/bin', // Homebrew on Apple Silicon
'/usr/local/bin', // Homebrew on Intel Mac, common Linux location
'/home/linuxbrew/.linuxbrew/bin', // Linuxbrew
`${process.env.HOME}/.local/bin` // pipx, other user installs
);
}
const extendedPath = [process.env.PATH, ...additionalPaths.filter(Boolean)]
.filter(Boolean)
.join(pathSeparator);
/**
* Environment variables with extended PATH for executing shell commands.
* Electron apps don't inherit the user's shell PATH, so we need to add
* common tool installation locations.
*/
export const execEnv = {
...process.env,
PATH: extendedPath,
};
// ============================================================================ // ============================================================================
// Validation utilities // Validation utilities
// ============================================================================ // ============================================================================
@@ -69,6 +111,27 @@ export async function isGitRepo(repoPath: string): Promise<boolean> {
} }
} }
/**
* Check if a git repository has at least one commit (i.e., HEAD exists)
* Returns false for freshly initialized repos with no commits
*/
export async function hasCommits(repoPath: string): Promise<boolean> {
try {
await execAsync('git rev-parse --verify HEAD', { cwd: repoPath });
return true;
} catch {
return false;
}
}
/**
* Check if an error is ENOENT (file/path not found or spawn failed)
* These are expected in test environments with mock paths
*/
export function isENOENT(error: unknown): boolean {
return error !== null && typeof error === 'object' && 'code' in error && error.code === 'ENOENT';
}
/** /**
* Check if a path is a mock/test path that doesn't exist * Check if a path is a mock/test path that doesn't exist
*/ */
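hasCommits, isENOENT, and the extended-PATH execEnv are meant to be combined by the route handlers; a sketch of the intended call pattern using a hypothetical helper (the repo path and command are placeholders):

import { execAsync, execEnv, hasCommits, isENOENT } from './common.js';

async function listLocalBranches(repoPath: string): Promise<string[]> {
  if (!(await hasCommits(repoPath))) {
    return []; // freshly initialised repo: no HEAD yet, so nothing to list
  }
  try {
    const { stdout } = await execAsync('git branch --format="%(refname:short)"', {
      cwd: repoPath,
      env: execEnv, // extended PATH so git resolves even when Electron strips the shell PATH
    });
    return stdout.split('\n').filter(Boolean);
  } catch (error) {
    if (isENOENT(error)) return []; // expected for mock/test paths that do not exist
    throw error;
  }
}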

View File

@@ -4,6 +4,7 @@
import { Router } from 'express';
import { validatePathParams } from '../../middleware/validate-paths.js';
+import { requireValidWorktree, requireValidProject, requireGitRepoOnly } from './middleware.js';
import { createInfoHandler } from './routes/info.js';
import { createStatusHandler } from './routes/status.js';
import { createListHandler } from './routes/list.js';
@@ -38,17 +39,42 @@ export function createWorktreeRoutes(): Router {
router.post('/list', createListHandler());
router.post('/diffs', validatePathParams('projectPath'), createDiffsHandler());
router.post('/file-diff', validatePathParams('projectPath', 'filePath'), createFileDiffHandler());
-router.post('/merge', validatePathParams('projectPath'), createMergeHandler());
+router.post(
+  '/merge',
+  validatePathParams('projectPath'),
+  requireValidProject,
+  createMergeHandler()
+);
router.post('/create', validatePathParams('projectPath'), createCreateHandler());
router.post('/delete', validatePathParams('projectPath', 'worktreePath'), createDeleteHandler());
router.post('/create-pr', createCreatePRHandler());
router.post('/pr-info', createPRInfoHandler());
-router.post('/commit', validatePathParams('worktreePath'), createCommitHandler());
-router.post('/push', validatePathParams('worktreePath'), createPushHandler());
-router.post('/pull', validatePathParams('worktreePath'), createPullHandler());
-router.post('/checkout-branch', createCheckoutBranchHandler());
-router.post('/list-branches', validatePathParams('worktreePath'), createListBranchesHandler());
-router.post('/switch-branch', createSwitchBranchHandler());
+router.post(
+  '/commit',
+  validatePathParams('worktreePath'),
+  requireGitRepoOnly,
+  createCommitHandler()
+);
+router.post(
+  '/push',
+  validatePathParams('worktreePath'),
+  requireValidWorktree,
+  createPushHandler()
+);
+router.post(
+  '/pull',
+  validatePathParams('worktreePath'),
+  requireValidWorktree,
+  createPullHandler()
+);
+router.post('/checkout-branch', requireValidWorktree, createCheckoutBranchHandler());
+router.post(
+  '/list-branches',
+  validatePathParams('worktreePath'),
+  requireValidWorktree,
+  createListBranchesHandler()
+);
+router.post('/switch-branch', requireValidWorktree, createSwitchBranchHandler());
router.post('/open-in-editor', validatePathParams('worktreePath'), createOpenInEditorHandler());
router.get('/default-editor', createGetDefaultEditorHandler());
router.post('/init-git', validatePathParams('projectPath'), createInitGitHandler());

View File

@@ -0,0 +1,74 @@
/**
* Middleware for worktree route validation
*/
import type { Request, Response, NextFunction } from 'express';
import { isGitRepo, hasCommits } from './common.js';
interface ValidationOptions {
/** Check if the path is a git repository (default: true) */
requireGitRepo?: boolean;
/** Check if the repository has at least one commit (default: true) */
requireCommits?: boolean;
/** The name of the request body field containing the path (default: 'worktreePath') */
pathField?: 'worktreePath' | 'projectPath';
}
/**
* Middleware factory to validate that a path is a valid git repository with commits.
* This reduces code duplication across route handlers.
*
* @param options - Validation options
* @returns Express middleware function
*/
export function requireValidGitRepo(options: ValidationOptions = {}) {
const { requireGitRepo = true, requireCommits = true, pathField = 'worktreePath' } = options;
return async (req: Request, res: Response, next: NextFunction): Promise<void> => {
const repoPath = req.body[pathField] as string | undefined;
if (!repoPath) {
// Let the route handler deal with missing path validation
next();
return;
}
if (requireGitRepo && !(await isGitRepo(repoPath))) {
res.status(400).json({
success: false,
error: 'Not a git repository',
code: 'NOT_GIT_REPO',
});
return;
}
if (requireCommits && !(await hasCommits(repoPath))) {
res.status(400).json({
success: false,
error: 'Repository has no commits yet',
code: 'NO_COMMITS',
});
return;
}
next();
};
}
/**
* Middleware to validate git repo for worktreePath field
*/
export const requireValidWorktree = requireValidGitRepo({ pathField: 'worktreePath' });
/**
* Middleware to validate git repo for projectPath field
*/
export const requireValidProject = requireValidGitRepo({ pathField: 'projectPath' });
/**
* Middleware to validate git repo without requiring commits (for commit route)
*/
export const requireGitRepoOnly = requireValidGitRepo({
pathField: 'worktreePath',
requireCommits: false,
});
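As a rough illustration of the behaviour these guards produce, a request to a protected route with a non-repository path should come back as a 400 with the NOT_GIT_REPO code (sketch only; the host, port, and /api/worktrees prefix are assumptions, not taken from the diff):

// Sketch only - URL prefix assumed; response shape taken from the middleware above.
const res = await fetch('http://localhost:3000/api/worktrees/push', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ worktreePath: '/tmp/not-a-repo' }),
});
const body = await res.json();
// Expected: 400 and { success: false, error: 'Not a git repository', code: 'NOT_GIT_REPO' }
console.log(res.status, body.code);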

View File

@@ -5,12 +5,15 @@
 * can switch between branches even after worktrees are removed.
 */
-import { secureFs, getBranchTrackingPath, ensureAutomakerDir } from '@automaker/platform';
-import type { TrackedBranch } from '@automaker/types';
+import * as secureFs from '../../../lib/secure-fs.js';
import path from 'path';
+import { getBranchTrackingPath, ensureAutomakerDir } from '@automaker/platform';

-// Re-export type for convenience
-export type { TrackedBranch } from '@automaker/types';
+export interface TrackedBranch {
+  name: string;
+  createdAt: string;
+  lastActivatedAt?: string;
+}

interface BranchTrackingData {
  branches: TrackedBranch[];

View File

@@ -1,5 +1,8 @@
/**
 * POST /checkout-branch endpoint - Create and checkout a new branch
+ *
+ * Note: Git repository validation (isGitRepo, hasCommits) is handled by
+ * the requireValidWorktree middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -1,5 +1,8 @@
/**
 * POST /commit endpoint - Commit changes in a worktree
+ *
+ * Note: Git repository validation (isGitRepo) is handled by
+ * the requireGitRepoOnly middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -11,7 +11,7 @@ import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import {
  isGitRepo,
  getErrorMessage,

View File

@@ -4,7 +4,7 @@
import type { Request, Response } from 'express';
import path from 'path';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import { getErrorMessage, logError } from '../common.js';
import { getGitRepositoryDiffs } from '../../common.js';

View File

@@ -6,7 +6,7 @@ import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import { getErrorMessage, logError } from '../common.js';
import { generateSyntheticDiffForNewFile } from '../../common.js';

View File

@@ -6,7 +6,7 @@ import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import { getErrorMessage, logError, normalizePath } from '../common.js';

const execAsync = promisify(exec);

View File

@@ -5,7 +5,7 @@
import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import { join } from 'path';
import { getErrorMessage, logError } from '../common.js';

View File

@@ -1,5 +1,8 @@
/**
 * POST /list-branches endpoint - List all local branches
+ *
+ * Note: Git repository validation (isGitRepo, hasCommits) is handled by
+ * the requireValidWorktree middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -8,7 +8,7 @@
import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import { isGitRepo } from '@automaker/git-utils';
import { getErrorMessage, logError, normalizePath } from '../common.js';
import { readAllWorktreeMetadata, type WorktreePRInfo } from '../../../lib/worktree-metadata.js';

View File

@@ -1,5 +1,8 @@
/**
 * POST /merge endpoint - Merge feature (merge worktree branch into main)
+ *
+ * Note: Git repository validation (isGitRepo, hasCommits) is handled by
+ * the requireValidProject middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -3,7 +3,6 @@
 */
import type { Request, Response } from 'express';
-import type { PRComment, PRInfo } from '@automaker/types';
import {
  getErrorMessage,
  logError,
@@ -13,8 +12,26 @@ import {
  isGhCliAvailable,
} from '../common.js';

-// Re-export types for convenience
-export type { PRComment, PRInfo } from '@automaker/types';
+export interface PRComment {
+  id: number;
+  author: string;
+  body: string;
+  path?: string;
+  line?: number;
+  createdAt: string;
+  isReviewComment: boolean;
+}
+
+export interface PRInfo {
+  number: number;
+  title: string;
+  url: string;
+  state: string;
+  author: string;
+  body: string;
+  comments: PRComment[];
+  reviewComments: PRComment[];
+}

export function createPRInfoHandler() {
  return async (req: Request, res: Response): Promise<void> => {

View File

@@ -1,5 +1,8 @@
/**
 * POST /pull endpoint - Pull latest changes for a worktree/branch
+ *
+ * Note: Git repository validation (isGitRepo, hasCommits) is handled by
+ * the requireValidWorktree middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -1,5 +1,8 @@
/**
 * POST /push endpoint - Push a worktree branch to remote
+ *
+ * Note: Git repository validation (isGitRepo, hasCommits) is handled by
+ * the requireValidWorktree middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -6,7 +6,7 @@ import type { Request, Response } from 'express';
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
-import { secureFs } from '@automaker/platform';
+import * as secureFs from '../../../lib/secure-fs.js';
import { getErrorMessage, logError } from '../common.js';

const execAsync = promisify(exec);

View File

@@ -4,6 +4,9 @@
 * Simple branch switching.
 * If there are uncommitted changes, the switch will fail and
 * the user should commit first.
+ *
+ * Note: Git repository validation (isGitRepo, hasCommits) is handled by
+ * the requireValidWorktree middleware in index.ts
 */
import type { Request, Response } from 'express';

View File

@@ -4,6 +4,7 @@
 */
import path from 'path';
+import * as secureFs from '../lib/secure-fs.js';
import type { EventEmitter } from '../lib/events.js';
import type { ExecuteOptions } from '@automaker/types';
import {
@@ -11,13 +12,10 @@ import {
  buildPromptWithImages,
  isAbortError,
  loadContextFiles,
-  createLogger,
} from '@automaker/utils';
import { ProviderFactory } from '../providers/provider-factory.js';
import { createChatOptions, validateWorkingDirectory } from '../lib/sdk-options.js';
-import { PathNotAllowedError, secureFs } from '@automaker/platform';
+import { PathNotAllowedError } from '@automaker/platform';

-const logger = createLogger('AgentService');

interface Message {
  id: string;
@@ -152,7 +150,7 @@ export class AgentService {
    filename: imageData.filename,
  });
} catch (error) {
-  logger.error(`Failed to load image ${imagePath}:`, error);
+  console.error(`[AgentService] Failed to load image ${imagePath}:`, error);
}
}
}
@@ -217,7 +215,9 @@ export class AgentService {
// Get provider for this model
const provider = ProviderFactory.getProviderForModel(effectiveModel);
-logger.info(`Using provider "${provider.getName()}" for model "${effectiveModel}"`);
+console.log(
+  `[AgentService] Using provider "${provider.getName()}" for model "${effectiveModel}"`
+);

// Build options for provider
const options: ExecuteOptions = {
@@ -254,7 +254,7 @@ export class AgentService {
// Capture SDK session ID from any message and persist it
if (msg.session_id && !session.sdkSessionId) {
  session.sdkSessionId = msg.session_id;
-  logger.info(`Captured SDK session ID: ${msg.session_id}`);
+  console.log(`[AgentService] Captured SDK session ID: ${msg.session_id}`);
  // Persist the SDK session ID to ensure conversation continuity across server restarts
  await this.updateSession(sessionId, { sdkSessionId: msg.session_id });
}
@@ -330,7 +330,7 @@ export class AgentService {
  return { success: false, aborted: true };
}
-logger.error('Error:', error);
+console.error('[AgentService] Error:', error);
session.isRunning = false;
session.abortController = null;
@@ -424,7 +424,7 @@ export class AgentService {
  await secureFs.writeFile(sessionFile, JSON.stringify(messages, null, 2), 'utf-8');
  await this.updateSessionTimestamp(sessionId);
} catch (error) {
-  logger.error('Failed to save session:', error);
+  console.error('[AgentService] Failed to save session:', error);
}
}

File diff suppressed because it is too large

View File

@@ -1,162 +0,0 @@
/**
* Feature Verification Service - Handles verification and commit operations
*
* Provides functionality to verify feature implementations (lint, typecheck, test, build)
* and commit changes to git.
*/
import { createLogger } from '@automaker/utils';
import {
runVerificationChecks,
hasUncommittedChanges,
commitAll,
shortHash,
} from '@automaker/git-utils';
import { extractTitleFromDescription } from '@automaker/prompts';
import { getFeatureDir, secureFs } from '@automaker/platform';
import path from 'path';
import type { EventEmitter } from '../../lib/events.js';
import type { Feature } from '@automaker/types';
const logger = createLogger('FeatureVerification');
export interface VerificationResult {
success: boolean;
failedCheck?: string;
}
export interface CommitResult {
hash: string | null;
shortHash?: string;
}
export class FeatureVerificationService {
private events: EventEmitter;
constructor(events: EventEmitter) {
this.events = events;
}
/**
* Resolve the working directory for a feature (checks for worktree)
*/
async resolveWorkDir(projectPath: string, featureId: string): Promise<string> {
const worktreePath = path.join(projectPath, '.worktrees', featureId);
try {
await secureFs.access(worktreePath);
return worktreePath;
} catch {
return projectPath;
}
}
/**
* Verify a feature's implementation by running checks
*/
async verify(projectPath: string, featureId: string): Promise<VerificationResult> {
const workDir = await this.resolveWorkDir(projectPath, featureId);
const result = await runVerificationChecks(workDir);
if (result.success) {
this.emitEvent('auto_mode_feature_complete', {
featureId,
passes: true,
message: 'All verification checks passed',
});
} else {
this.emitEvent('auto_mode_feature_complete', {
featureId,
passes: false,
message: `Verification failed: ${result.failedCheck}`,
});
}
return result;
}
/**
* Commit feature changes
*/
async commit(
projectPath: string,
featureId: string,
feature: Feature | null,
providedWorktreePath?: string
): Promise<CommitResult> {
let workDir = projectPath;
if (providedWorktreePath) {
try {
await secureFs.access(providedWorktreePath);
workDir = providedWorktreePath;
} catch {
// Use project path
}
} else {
workDir = await this.resolveWorkDir(projectPath, featureId);
}
// Check for changes
const hasChanges = await hasUncommittedChanges(workDir);
if (!hasChanges) {
return { hash: null };
}
// Build commit message
const title = feature
? extractTitleFromDescription(feature.description)
: `Feature ${featureId}`;
const commitMessage = `feat: ${title}\n\nImplemented by Automaker auto-mode`;
// Commit changes
const hash = await commitAll(workDir, commitMessage);
if (hash) {
const short = shortHash(hash);
this.emitEvent('auto_mode_feature_complete', {
featureId,
passes: true,
message: `Changes committed: ${short}`,
});
return { hash, shortHash: short };
}
logger.error(`Commit failed for ${featureId}`);
return { hash: null };
}
/**
* Check if context (agent-output.md) exists for a feature
*/
async contextExists(projectPath: string, featureId: string): Promise<boolean> {
const featureDir = getFeatureDir(projectPath, featureId);
const contextPath = path.join(featureDir, 'agent-output.md');
try {
await secureFs.access(contextPath);
return true;
} catch {
return false;
}
}
/**
* Load existing context for a feature
*/
async loadContext(projectPath: string, featureId: string): Promise<string | null> {
const featureDir = getFeatureDir(projectPath, featureId);
const contextPath = path.join(featureDir, 'agent-output.md');
try {
return (await secureFs.readFile(contextPath, 'utf-8')) as string;
} catch {
return null;
}
}
private emitEvent(eventType: string, data: Record<string, unknown>): void {
this.events.emit('auto-mode:event', { type: eventType, ...data });
}
}
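For context, a minimal sketch of how this service is typically driven: verify first, then commit only if the checks pass. The events emitter, project path, feature id, and the feature object are assumptions for illustration.

// Sketch only - events, paths, and the feature object are placeholders.
const verification = new FeatureVerificationService(events);

const result = await verification.verify('/path/to/project', 'feature-123');
if (result.success) {
  const commit = await verification.commit('/path/to/project', 'feature-123', feature);
  console.log(commit.hash ? `committed ${commit.shortHash}` : 'nothing to commit');
} else {
  console.log(`verification failed on: ${result.failedCheck}`);
}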

View File

@@ -1,28 +0,0 @@
/**
* Auto Mode Services
*
* Re-exports all auto-mode related services and types.
*/
// Services
export { PlanApprovalService } from './plan-approval-service.js';
export { TaskExecutor } from './task-executor.js';
export { WorktreeManager, worktreeManager } from './worktree-manager.js';
export { OutputWriter, createFeatureOutputWriter } from './output-writer.js';
export { ProjectAnalyzer } from './project-analyzer.js';
export { FeatureVerificationService } from './feature-verification.js';
export type { VerificationResult, CommitResult } from './feature-verification.js';
// Types
export type {
RunningFeature,
AutoLoopState,
AutoModeConfig,
PendingApproval,
ApprovalResult,
FeatureExecutionOptions,
RunAgentOptions,
FeatureWithPlanning,
TaskExecutionContext,
TaskProgress,
} from './types.js';

View File

@@ -1,154 +0,0 @@
/**
* Output Writer - Incremental file writing for agent output
*
* Handles debounced file writes to avoid excessive I/O during streaming.
* Used to persist agent output to agent-output.md in the feature directory.
*/
import { secureFs } from '@automaker/platform';
import path from 'path';
import { createLogger } from '@automaker/utils';
const logger = createLogger('OutputWriter');
/**
* Handles incremental, debounced file writing for agent output
*/
export class OutputWriter {
private content = '';
private writeTimeout: ReturnType<typeof setTimeout> | null = null;
private readonly debounceMs: number;
private readonly outputPath: string;
/**
* Create a new output writer
*
* @param outputPath - Full path to the output file
* @param debounceMs - Debounce interval for writes (default: 500ms)
* @param initialContent - Optional initial content to start with
*/
constructor(outputPath: string, debounceMs = 500, initialContent = '') {
this.outputPath = outputPath;
this.debounceMs = debounceMs;
this.content = initialContent;
}
/**
* Append text to the output
*
* Schedules a debounced write to the file.
*/
append(text: string): void {
this.content += text;
this.scheduleWrite();
}
/**
* Append text with automatic separator handling
*
* Ensures proper spacing between sections.
*/
appendWithSeparator(text: string): void {
if (this.content.length > 0 && !this.content.endsWith('\n\n')) {
if (this.content.endsWith('\n')) {
this.content += '\n';
} else {
this.content += '\n\n';
}
}
this.append(text);
}
/**
* Append a tool use entry
*/
appendToolUse(toolName: string, input?: unknown): void {
if (this.content.length > 0 && !this.content.endsWith('\n')) {
this.content += '\n';
}
this.content += `\n🔧 Tool: ${toolName}\n`;
if (input) {
this.content += `Input: ${JSON.stringify(input, null, 2)}\n`;
}
this.scheduleWrite();
}
/**
* Get the current accumulated content
*/
getContent(): string {
return this.content;
}
/**
* Set content directly (for follow-up sessions with previous content)
*/
setContent(content: string): void {
this.content = content;
}
/**
* Schedule a debounced write
*/
private scheduleWrite(): void {
if (this.writeTimeout) {
clearTimeout(this.writeTimeout);
}
this.writeTimeout = setTimeout(() => {
this.flush().catch((error) => {
logger.error('Failed to flush output', error);
});
}, this.debounceMs);
}
/**
* Flush content to disk immediately
*
* Call this to ensure all content is written, e.g., at the end of execution.
*/
async flush(): Promise<void> {
if (this.writeTimeout) {
clearTimeout(this.writeTimeout);
this.writeTimeout = null;
}
try {
await secureFs.mkdir(path.dirname(this.outputPath), { recursive: true });
await secureFs.writeFile(this.outputPath, this.content);
} catch (error) {
logger.error(`Failed to write to ${this.outputPath}`, error);
// Don't throw - file write errors shouldn't crash execution
}
}
/**
* Cancel any pending writes
*/
cancel(): void {
if (this.writeTimeout) {
clearTimeout(this.writeTimeout);
this.writeTimeout = null;
}
}
}
/**
* Create an output writer for a feature
*
* @param featureDir - The feature directory path
* @param previousContent - Optional content from previous session
* @returns Configured output writer
*/
export function createFeatureOutputWriter(
featureDir: string,
previousContent?: string
): OutputWriter {
const outputPath = path.join(featureDir, 'agent-output.md');
// If there's previous content, add a follow-up separator
const initialContent = previousContent
? `${previousContent}\n\n---\n\n## Follow-up Session\n\n`
: '';
return new OutputWriter(outputPath, 500, initialContent);
}
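A short usage sketch of the writer above; the feature directory path is illustrative.

// Sketch only - featureDir is a placeholder path.
const writer = createFeatureOutputWriter('/path/to/project/.automaker/features/feature-123');
writer.appendWithSeparator('## Implementation notes');
writer.appendToolUse('Read', { file_path: 'src/index.ts' });
writer.append('Streamed agent text...');
await writer.flush(); // make sure the debounced content reaches agent-output.md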

View File

@@ -1,236 +0,0 @@
/**
* Plan Approval Service - Handles plan/spec approval workflow
*
* Manages the async approval flow where:
* 1. Agent generates a spec with [SPEC_GENERATED] marker
* 2. Service emits plan_approval_required event
* 3. User reviews and approves/rejects via API
* 4. Service resolves the waiting promise to continue execution
*/
import type { EventEmitter } from '../../lib/events.js';
import type { PlanSpec, PlanningMode } from '@automaker/types';
import { createLogger } from '@automaker/utils';
import type { PendingApproval, ApprovalResult } from './types.js';
const logger = createLogger('PlanApprovalService');
/**
* Manages plan approval workflow for spec-driven development
*/
export class PlanApprovalService {
private pendingApprovals = new Map<string, PendingApproval>();
private events: EventEmitter;
constructor(events: EventEmitter) {
this.events = events;
}
/**
* Wait for plan approval from the user
*
* Returns a promise that resolves when the user approves or rejects
* the plan via the API.
*
* @param featureId - The feature awaiting approval
* @param projectPath - The project path
* @returns Promise resolving to approval result
*/
waitForApproval(featureId: string, projectPath: string): Promise<ApprovalResult> {
logger.debug(`Registering pending approval for feature ${featureId}`);
logger.debug(
`Current pending approvals: ${Array.from(this.pendingApprovals.keys()).join(', ') || 'none'}`
);
return new Promise((resolve, reject) => {
this.pendingApprovals.set(featureId, {
resolve,
reject,
featureId,
projectPath,
});
logger.debug(`Pending approval registered for feature ${featureId}`);
});
}
/**
* Resolve a pending plan approval
*
* Called when the user approves or rejects the plan via API.
*
* @param featureId - The feature ID
* @param approved - Whether the plan was approved
* @param editedPlan - Optional edited plan content
* @param feedback - Optional user feedback
* @returns Result indicating success or error
*/
resolve(
featureId: string,
approved: boolean,
editedPlan?: string,
feedback?: string
): { success: boolean; error?: string; projectPath?: string } {
logger.debug(`resolvePlanApproval called for feature ${featureId}, approved=${approved}`);
logger.debug(
`Current pending approvals: ${Array.from(this.pendingApprovals.keys()).join(', ') || 'none'}`
);
const pending = this.pendingApprovals.get(featureId);
if (!pending) {
logger.warn(`No pending approval found for feature ${featureId}`);
return {
success: false,
error: `No pending approval for feature ${featureId}`,
};
}
logger.debug(`Found pending approval for feature ${featureId}, resolving...`);
// Resolve the promise with all data including feedback
pending.resolve({ approved, editedPlan, feedback });
this.pendingApprovals.delete(featureId);
return { success: true, projectPath: pending.projectPath };
}
/**
* Cancel a pending plan approval
*
* Called when a feature is stopped while waiting for approval.
*
* @param featureId - The feature ID to cancel
*/
cancel(featureId: string): void {
logger.debug(`cancelPlanApproval called for feature ${featureId}`);
const pending = this.pendingApprovals.get(featureId);
if (pending) {
logger.debug(`Found and cancelling pending approval for feature ${featureId}`);
pending.reject(new Error('Plan approval cancelled - feature was stopped'));
this.pendingApprovals.delete(featureId);
} else {
logger.debug(`No pending approval to cancel for feature ${featureId}`);
}
}
/**
* Check if a feature has a pending plan approval
*
* @param featureId - The feature ID to check
* @returns True if there's a pending approval
*/
hasPending(featureId: string): boolean {
return this.pendingApprovals.has(featureId);
}
/**
* Get the project path for a pending approval
*
* Useful for recovery scenarios where we need to know which
* project a pending approval belongs to.
*
* @param featureId - The feature ID
* @returns The project path or undefined
*/
getProjectPath(featureId: string): string | undefined {
return this.pendingApprovals.get(featureId)?.projectPath;
}
/**
* Get all pending approval feature IDs
*
* @returns Array of feature IDs with pending approvals
*/
getAllPending(): string[] {
return Array.from(this.pendingApprovals.keys());
}
/**
* Emit a plan-related event
*/
emitPlanEvent(
eventType: string,
featureId: string,
projectPath: string,
data: Record<string, unknown> = {}
): void {
this.events.emit('auto-mode:event', {
type: eventType,
featureId,
projectPath,
...data,
});
}
/**
* Emit plan approval required event
*/
emitApprovalRequired(
featureId: string,
projectPath: string,
planContent: string,
planningMode: PlanningMode,
planVersion: number
): void {
this.emitPlanEvent('plan_approval_required', featureId, projectPath, {
planContent,
planningMode,
planVersion,
});
}
/**
* Emit plan approved event
*/
emitApproved(
featureId: string,
projectPath: string,
hasEdits: boolean,
planVersion: number
): void {
this.emitPlanEvent('plan_approved', featureId, projectPath, {
hasEdits,
planVersion,
});
}
/**
* Emit plan rejected event
*/
emitRejected(featureId: string, projectPath: string, feedback?: string): void {
this.emitPlanEvent('plan_rejected', featureId, projectPath, { feedback });
}
/**
* Emit plan auto-approved event
*/
emitAutoApproved(
featureId: string,
projectPath: string,
planContent: string,
planningMode: PlanningMode
): void {
this.emitPlanEvent('plan_auto_approved', featureId, projectPath, {
planContent,
planningMode,
});
}
/**
* Emit plan revision requested event
*/
emitRevisionRequested(
featureId: string,
projectPath: string,
feedback: string | undefined,
hasEdits: boolean,
planVersion: number
): void {
this.emitPlanEvent('plan_revision_requested', featureId, projectPath, {
feedback,
hasEdits,
planVersion,
});
}
}
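A sketch of the approval round trip this service coordinates: register the pending promise, emit the event for the UI, and await the result; the API handler later calls resolve(). Variable names and the planning mode value are assumptions for illustration.

// Sketch only - featureId, projectPath, planContent, and planningMode are placeholders.
const approvalService = new PlanApprovalService(events);

const approvalPromise = approvalService.waitForApproval(featureId, projectPath);
approvalService.emitApprovalRequired(featureId, projectPath, planContent, planningMode, 1);
const { approved, editedPlan, feedback } = await approvalPromise;

// Elsewhere, in the API handler that receives the user's decision:
// approvalService.resolve(featureId, true /* approved */, editedPlanFromUser);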

View File

@@ -1,109 +0,0 @@
/**
* Project Analyzer - Analyzes project structure and context
*
* Provides project analysis functionality using Claude to understand
* codebase architecture, patterns, and conventions.
*/
import type { ExecuteOptions } from '@automaker/types';
import { createLogger, classifyError, processStream } from '@automaker/utils';
import { resolveModelString, DEFAULT_MODELS } from '@automaker/model-resolver';
import { getAutomakerDir, secureFs } from '@automaker/platform';
import { ProviderFactory } from '../../providers/provider-factory.js';
import { validateWorkingDirectory } from '../../lib/sdk-options.js';
import path from 'path';
import type { EventEmitter } from '../../lib/events.js';
const logger = createLogger('ProjectAnalyzer');
const ANALYSIS_PROMPT = `Analyze this project and provide a summary of:
1. Project structure and architecture
2. Main technologies and frameworks used
3. Key components and their responsibilities
4. Build and test commands
5. Any existing conventions or patterns
Format your response as a structured markdown document.`;
export class ProjectAnalyzer {
private events: EventEmitter;
constructor(events: EventEmitter) {
this.events = events;
}
/**
* Analyze project to gather context
*/
async analyze(projectPath: string): Promise<void> {
validateWorkingDirectory(projectPath);
const abortController = new AbortController();
const analysisFeatureId = `analysis-${Date.now()}`;
this.emitEvent('auto_mode_feature_start', {
featureId: analysisFeatureId,
projectPath,
feature: {
id: analysisFeatureId,
title: 'Project Analysis',
description: 'Analyzing project structure',
},
});
try {
const analysisModel = resolveModelString(undefined, DEFAULT_MODELS.claude);
const provider = ProviderFactory.getProviderForModel(analysisModel);
const options: ExecuteOptions = {
prompt: ANALYSIS_PROMPT,
model: analysisModel,
maxTurns: 5,
cwd: projectPath,
allowedTools: ['Read', 'Glob', 'Grep'],
abortController,
};
const stream = provider.executeQuery(options);
let analysisResult = '';
const result = await processStream(stream, {
onText: (text) => {
analysisResult += text;
this.emitEvent('auto_mode_progress', {
featureId: analysisFeatureId,
content: text,
projectPath,
});
},
});
analysisResult = result.text || analysisResult;
// Save analysis
const automakerDir = getAutomakerDir(projectPath);
const analysisPath = path.join(automakerDir, 'project-analysis.md');
await secureFs.mkdir(automakerDir, { recursive: true });
await secureFs.writeFile(analysisPath, analysisResult);
this.emitEvent('auto_mode_feature_complete', {
featureId: analysisFeatureId,
passes: true,
message: 'Project analysis completed',
projectPath,
});
} catch (error) {
const errorInfo = classifyError(error);
this.emitEvent('auto_mode_error', {
featureId: analysisFeatureId,
error: errorInfo.message,
errorType: errorInfo.type,
projectPath,
});
}
}
private emitEvent(eventType: string, data: Record<string, unknown>): void {
this.events.emit('auto-mode:event', { type: eventType, ...data });
}
}
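A one-line usage sketch (project path is illustrative):

// Sketch only.
await new ProjectAnalyzer(events).analyze('/path/to/project');
// On success the summary is saved as project-analysis.md inside the directory
// returned by getAutomakerDir(projectPath).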

View File

@@ -1,267 +0,0 @@
/**
* Task Executor - Multi-agent task execution for spec-driven development
*
* Handles the sequential execution of parsed tasks from a spec,
* where each task gets its own focused agent call.
*/
import type { ExecuteOptions, ParsedTask } from '@automaker/types';
import type { EventEmitter } from '../../lib/events.js';
import type { BaseProvider } from '../../providers/base-provider.js';
import { buildTaskPrompt } from '@automaker/prompts';
import { createLogger, processStream } from '@automaker/utils';
import type { TaskExecutionContext, TaskProgress } from './types.js';
const logger = createLogger('TaskExecutor');
/**
* Handles multi-agent task execution for spec-driven development
*/
export class TaskExecutor {
private events: EventEmitter;
constructor(events: EventEmitter) {
this.events = events;
}
/**
* Execute all tasks sequentially
*
* Each task gets its own focused agent call with context about
* completed and remaining tasks.
*
* @param tasks - Parsed tasks from the spec
* @param context - Execution context including provider, model, etc.
* @param provider - The provider to use for execution
* @yields TaskProgress events for each task
*/
async *executeAll(
tasks: ParsedTask[],
context: TaskExecutionContext,
provider: BaseProvider
): AsyncGenerator<TaskProgress> {
logger.info(
`Starting multi-agent execution: ${tasks.length} tasks for feature ${context.featureId}`
);
for (let taskIndex = 0; taskIndex < tasks.length; taskIndex++) {
const task = tasks[taskIndex];
// Check for abort
if (context.abortController.signal.aborted) {
throw new Error('Feature execution aborted');
}
// Emit task started
logger.info(`Starting task ${task.id}: ${task.description}`);
this.emitTaskEvent('auto_mode_task_started', context, {
taskId: task.id,
taskDescription: task.description,
taskIndex,
tasksTotal: tasks.length,
});
yield {
taskId: task.id,
taskIndex,
tasksTotal: tasks.length,
status: 'started',
};
// Build focused prompt for this task
const taskPrompt = buildTaskPrompt(
task,
tasks,
taskIndex,
context.planContent,
context.userFeedback
);
// Execute task with dedicated agent call
const taskOptions: ExecuteOptions = {
prompt: taskPrompt,
model: context.model,
maxTurns: Math.min(context.maxTurns, 50), // Limit turns per task
cwd: context.workDir,
allowedTools: context.allowedTools,
abortController: context.abortController,
};
const taskStream = provider.executeQuery(taskOptions);
// Process task stream
let taskOutput = '';
try {
const result = await processStream(taskStream, {
onText: (text) => {
taskOutput += text;
this.emitProgressEvent(context.featureId, text);
},
onToolUse: (name, input) => {
this.emitToolEvent(context.featureId, name, input);
},
});
taskOutput = result.text;
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.error(`Task ${task.id} failed: ${errorMessage}`);
yield {
taskId: task.id,
taskIndex,
tasksTotal: tasks.length,
status: 'failed',
output: errorMessage,
};
throw error;
}
// Emit task completed
logger.info(`Task ${task.id} completed for feature ${context.featureId}`);
this.emitTaskEvent('auto_mode_task_complete', context, {
taskId: task.id,
tasksCompleted: taskIndex + 1,
tasksTotal: tasks.length,
});
// Check for phase completion
const phaseComplete = this.checkPhaseComplete(task, tasks, taskIndex);
yield {
taskId: task.id,
taskIndex,
tasksTotal: tasks.length,
status: 'completed',
output: taskOutput,
phaseComplete,
};
// Emit phase complete if needed
if (phaseComplete !== undefined) {
this.emitPhaseComplete(context, phaseComplete);
}
}
logger.info(`All ${tasks.length} tasks completed for feature ${context.featureId}`);
}
/**
* Execute a single task (for cases where you don't need the full loop)
*
* @param task - The task to execute
* @param allTasks - All tasks for context
* @param taskIndex - Index of this task
* @param context - Execution context
* @param provider - The provider to use
* @returns Task output text
*/
async executeOne(
task: ParsedTask,
allTasks: ParsedTask[],
taskIndex: number,
context: TaskExecutionContext,
provider: BaseProvider
): Promise<string> {
const taskPrompt = buildTaskPrompt(
task,
allTasks,
taskIndex,
context.planContent,
context.userFeedback
);
const taskOptions: ExecuteOptions = {
prompt: taskPrompt,
model: context.model,
maxTurns: Math.min(context.maxTurns, 50),
cwd: context.workDir,
allowedTools: context.allowedTools,
abortController: context.abortController,
};
const taskStream = provider.executeQuery(taskOptions);
const result = await processStream(taskStream, {
onText: (text) => {
this.emitProgressEvent(context.featureId, text);
},
onToolUse: (name, input) => {
this.emitToolEvent(context.featureId, name, input);
},
});
return result.text;
}
/**
* Check if completing this task completes a phase
*/
private checkPhaseComplete(
task: ParsedTask,
allTasks: ParsedTask[],
taskIndex: number
): number | undefined {
if (!task.phase) {
return undefined;
}
const nextTask = allTasks[taskIndex + 1];
if (!nextTask || nextTask.phase !== task.phase) {
// Phase changed or no more tasks
const phaseMatch = task.phase.match(/Phase\s*(\d+)/i);
return phaseMatch ? parseInt(phaseMatch[1], 10) : undefined;
}
return undefined;
}
/**
* Emit a task-related event
*/
private emitTaskEvent(
eventType: string,
context: TaskExecutionContext,
data: Record<string, unknown>
): void {
this.events.emit('auto-mode:event', {
type: eventType,
featureId: context.featureId,
projectPath: context.projectPath,
...data,
});
}
/**
* Emit progress event for text output
*/
private emitProgressEvent(featureId: string, content: string): void {
this.events.emit('auto-mode:event', {
type: 'auto_mode_progress',
featureId,
content,
});
}
/**
* Emit tool use event
*/
private emitToolEvent(featureId: string, tool: string, input: unknown): void {
this.events.emit('auto-mode:event', {
type: 'auto_mode_tool',
featureId,
tool,
input,
});
}
/**
* Emit phase complete event
*/
private emitPhaseComplete(context: TaskExecutionContext, phaseNumber: number): void {
this.events.emit('auto-mode:event', {
type: 'auto_mode_phase_complete',
featureId: context.featureId,
projectPath: context.projectPath,
phaseNumber,
});
}
}
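Because executeAll is an async generator, the caller drives it with for await and reacts to each TaskProgress event. A minimal consumption sketch, with context and provider construction abbreviated:

// Sketch only - tasks, context, and provider are assumed to be built elsewhere.
const executor = new TaskExecutor(events);

for await (const progress of executor.executeAll(tasks, context, provider)) {
  if (progress.status === 'failed') {
    console.error(`Task ${progress.taskId} failed: ${progress.output}`);
    break;
  }
  console.log(
    `Task ${progress.taskId}: ${progress.status} (${progress.taskIndex + 1}/${progress.tasksTotal})`
  );
}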

View File

@@ -1,121 +0,0 @@
/**
* Internal types for AutoModeService
*
* These types are used internally by the auto-mode services
* and are not exported to the public API.
*/
import type { PlanningMode, PlanSpec } from '@automaker/types';
/**
* Running feature state
*/
export interface RunningFeature {
featureId: string;
projectPath: string;
worktreePath: string | null;
branchName: string | null;
abortController: AbortController;
isAutoMode: boolean;
startTime: number;
}
/**
* Auto-loop configuration
*/
export interface AutoLoopState {
projectPath: string;
maxConcurrency: number;
abortController: AbortController;
isRunning: boolean;
}
/**
* Auto-mode configuration
*/
export interface AutoModeConfig {
maxConcurrency: number;
useWorktrees: boolean;
projectPath: string;
}
/**
* Pending plan approval state
*/
export interface PendingApproval {
resolve: (result: ApprovalResult) => void;
reject: (error: Error) => void;
featureId: string;
projectPath: string;
}
/**
* Result of plan approval
*/
export interface ApprovalResult {
approved: boolean;
editedPlan?: string;
feedback?: string;
}
/**
* Options for executing a feature
*/
export interface FeatureExecutionOptions {
continuationPrompt?: string;
}
/**
* Options for running the agent
*/
export interface RunAgentOptions {
projectPath: string;
planningMode?: PlanningMode;
requirePlanApproval?: boolean;
previousContent?: string;
systemPrompt?: string;
}
/**
* Feature with planning fields for internal use
*/
export interface FeatureWithPlanning {
id: string;
description: string;
spec?: string;
model?: string;
imagePaths?: Array<string | { path: string; filename?: string; mimeType?: string }>;
branchName?: string;
skipTests?: boolean;
planningMode?: PlanningMode;
requirePlanApproval?: boolean;
planSpec?: PlanSpec;
[key: string]: unknown;
}
/**
* Task execution context
*/
export interface TaskExecutionContext {
workDir: string;
featureId: string;
projectPath: string;
model: string;
maxTurns: number;
allowedTools?: string[];
abortController: AbortController;
planContent: string;
userFeedback?: string;
}
/**
* Task progress event
*/
export interface TaskProgress {
taskId: string;
taskIndex: number;
tasksTotal: number;
status: 'started' | 'completed' | 'failed';
output?: string;
phaseComplete?: number;
}

View File

@@ -1,157 +0,0 @@
/**
* Worktree Manager - Git worktree operations for feature isolation
*
* Handles finding and resolving git worktrees for feature branches.
* Worktrees are created when features are added/edited, this service
* finds existing worktrees for execution.
*/
import { exec } from 'child_process';
import { promisify } from 'util';
import path from 'path';
import { createLogger } from '@automaker/utils';
const execAsync = promisify(exec);
const logger = createLogger('WorktreeManager');
/**
* Result of resolving a working directory
*/
export interface WorkDirResult {
/** The resolved working directory path */
workDir: string;
/** The worktree path if using a worktree, null otherwise */
worktreePath: string | null;
}
/**
* Manages git worktree operations for feature isolation
*/
export class WorktreeManager {
/**
* Find existing worktree path for a branch
*
* Parses `git worktree list --porcelain` output to find the worktree
* associated with a specific branch.
*
* @param projectPath - The main project path
* @param branchName - The branch to find a worktree for
* @returns The absolute path to the worktree, or null if not found
*/
async findWorktreeForBranch(projectPath: string, branchName: string): Promise<string | null> {
try {
const { stdout } = await execAsync('git worktree list --porcelain', {
cwd: projectPath,
});
const lines = stdout.split('\n');
let currentPath: string | null = null;
let currentBranch: string | null = null;
for (const line of lines) {
if (line.startsWith('worktree ')) {
currentPath = line.slice(9);
} else if (line.startsWith('branch ')) {
currentBranch = line.slice(7).replace('refs/heads/', '');
} else if (line === '' && currentPath && currentBranch) {
// End of a worktree entry
if (currentBranch === branchName) {
// Resolve to absolute path - git may return relative paths
// On Windows, this is critical for cwd to work correctly
const resolvedPath = path.isAbsolute(currentPath)
? path.resolve(currentPath)
: path.resolve(projectPath, currentPath);
return resolvedPath;
}
currentPath = null;
currentBranch = null;
}
}
// Check the last entry (if file doesn't end with newline)
if (currentPath && currentBranch && currentBranch === branchName) {
const resolvedPath = path.isAbsolute(currentPath)
? path.resolve(currentPath)
: path.resolve(projectPath, currentPath);
return resolvedPath;
}
return null;
} catch (error) {
logger.warn(`Failed to find worktree for branch ${branchName}`, error);
return null;
}
}
/**
* Resolve the working directory for feature execution
*
* If worktrees are enabled and a branch name is provided, attempts to
* find an existing worktree. Falls back to the project path if no
* worktree is found.
*
* @param projectPath - The main project path
* @param branchName - Optional branch name to look for
* @param useWorktrees - Whether to use worktrees
* @returns The resolved work directory and worktree path
*/
async resolveWorkDir(
projectPath: string,
branchName: string | undefined,
useWorktrees: boolean
): Promise<WorkDirResult> {
let worktreePath: string | null = null;
if (useWorktrees && branchName) {
worktreePath = await this.findWorktreeForBranch(projectPath, branchName);
if (worktreePath) {
logger.info(`Using worktree for branch "${branchName}": ${worktreePath}`);
} else {
logger.warn(`Worktree for branch "${branchName}" not found, using project path`);
}
}
const workDir = worktreePath ? path.resolve(worktreePath) : path.resolve(projectPath);
return { workDir, worktreePath };
}
/**
* Check if a path is a valid worktree
*
* @param worktreePath - Path to check
* @returns True if the path is a valid git worktree
*/
async isValidWorktree(worktreePath: string): Promise<boolean> {
try {
// Check if .git file exists (worktrees have a .git file, not directory)
const { stdout } = await execAsync('git rev-parse --is-inside-work-tree', {
cwd: worktreePath,
});
return stdout.trim() === 'true';
} catch {
return false;
}
}
/**
* Get the branch name for a worktree
*
* @param worktreePath - Path to the worktree
* @returns The branch name or null if not a valid worktree
*/
async getWorktreeBranch(worktreePath: string): Promise<string | null> {
try {
const { stdout } = await execAsync('git rev-parse --abbrev-ref HEAD', {
cwd: worktreePath,
});
return stdout.trim();
} catch {
return null;
}
}
}
// Export a singleton instance for convenience
export const worktreeManager = new WorktreeManager();
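A usage sketch of the singleton; the project path and branch name are illustrative.

// Sketch only - paths and branch name are placeholders.
const { workDir, worktreePath } = await worktreeManager.resolveWorkDir(
  '/path/to/project',
  'feature/my-branch',
  true
);
// workDir is the worktree if one exists for the branch, otherwise the project path.
console.log(workDir, worktreePath ?? '(no worktree)');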

View File

@@ -8,13 +8,10 @@
 */
import { spawn, execSync, type ChildProcess } from 'child_process';
-import { secureFs } from '@automaker/platform';
-import { createLogger } from '@automaker/utils';
+import * as secureFs from '../lib/secure-fs.js';
import path from 'path';
import net from 'net';

-const logger = createLogger('DevServerService');

export interface DevServerInfo {
  worktreePath: string;
  port: number;
@@ -72,7 +69,7 @@ class DevServerService {
for (const pid of pids) {
  try {
    execSync(`taskkill /F /PID ${pid}`, { stdio: 'ignore' });
-    logger.info(`Killed process ${pid} on port ${port}`);
+    console.log(`[DevServerService] Killed process ${pid} on port ${port}`);
  } catch {
    // Process may have already exited
  }
@@ -85,7 +82,7 @@ class DevServerService {
for (const pid of pids) {
  try {
    execSync(`kill -9 ${pid}`, { stdio: 'ignore' });
-    logger.info(`Killed process ${pid} on port ${port}`);
+    console.log(`[DevServerService] Killed process ${pid} on port ${port}`);
  } catch {
    // Process may have already exited
  }
@@ -96,7 +93,7 @@ class DevServerService {
    }
  }
} catch (error) {
  // Ignore errors - port might not have any process
-  logger.info(`No process to kill on port ${port}`);
+  console.log(`[DevServerService] No process to kill on port ${port}`);
}
@@ -254,9 +251,11 @@ class DevServerService {
// Small delay to ensure related ports are freed
await new Promise((resolve) => setTimeout(resolve, 100));
-logger.info(`Starting dev server on port ${port}`);
-logger.info(`Working directory (cwd): ${worktreePath}`);
-logger.info(`Command: ${devCommand.cmd} ${devCommand.args.join(' ')} with PORT=${port}`);
+console.log(`[DevServerService] Starting dev server on port ${port}`);
+console.log(`[DevServerService] Working directory (cwd): ${worktreePath}`);
+console.log(
+  `[DevServerService] Command: ${devCommand.cmd} ${devCommand.args.join(' ')} with PORT=${port}`
+);

// Spawn the dev process with PORT environment variable
const env = {
@@ -277,26 +276,26 @@ class DevServerService {
// Log output for debugging
if (devProcess.stdout) {
  devProcess.stdout.on('data', (data: Buffer) => {
-    logger.info(`[DevServer:${port}] ${data.toString().trim()}`);
+    console.log(`[DevServer:${port}] ${data.toString().trim()}`);
  });
}

if (devProcess.stderr) {
  devProcess.stderr.on('data', (data: Buffer) => {
    const msg = data.toString().trim();
-    logger.error(`[DevServer:${port}] ${msg}`);
+    console.error(`[DevServer:${port}] ${msg}`);
  });
}

devProcess.on('error', (error) => {
-  logger.error(`Process error:`, error);
+  console.error(`[DevServerService] Process error:`, error);
  status.error = error.message;
  this.allocatedPorts.delete(port);
  this.runningServers.delete(worktreePath);
});

devProcess.on('exit', (code) => {
-  logger.info(`Process for ${worktreePath} exited with code ${code}`);
+  console.log(`[DevServerService] Process for ${worktreePath} exited with code ${code}`);
  status.exited = true;
  this.allocatedPorts.delete(port);
  this.runningServers.delete(worktreePath);
@@ -353,7 +352,9 @@ class DevServerService {
// If we don't have a record of this server, it may have crashed/exited on its own
// Return success so the frontend can clear its state
if (!server) {
-  logger.info(`No server record for ${worktreePath}, may have already stopped`);
+  console.log(
+    `[DevServerService] No server record for ${worktreePath}, may have already stopped`
+  );
  return {
    success: true,
    result: {
@@ -363,7 +364,7 @@ class DevServerService {
  };
}

-logger.info(`Stopping dev server for ${worktreePath}`);
+console.log(`[DevServerService] Stopping dev server for ${worktreePath}`);

// Kill the process
if (server.process && !server.process.killed) {
@@ -433,7 +434,7 @@ class DevServerService {
 * Stop all running dev servers (for cleanup)
 */
async stopAll(): Promise<void> {
-  logger.info(`Stopping all ${this.runningServers.size} dev servers`);
+  console.log(`[DevServerService] Stopping all ${this.runningServers.size} dev servers`);
  for (const [worktreePath] of this.runningServers) {
    await this.stopDevServer(worktreePath);
View File

@@ -4,15 +4,14 @@
 */
import path from 'path';
-import type { Feature, PlanSpec, FeatureStatus } from '@automaker/types';
+import type { Feature } from '@automaker/types';
import { createLogger } from '@automaker/utils';
-import { resolveDependencies, areDependenciesSatisfied } from '@automaker/dependency-resolver';
+import * as secureFs from '../lib/secure-fs.js';
import {
  getFeaturesDir,
  getFeatureDir,
  getFeatureImagesDir,
  ensureAutomakerDir,
-  secureFs,
} from '@automaker/platform';

const logger = createLogger('FeatureLoader');
@@ -57,7 +56,7 @@ export class FeatureLoader {
try {
  // Paths are now absolute
  await secureFs.unlink(oldPath);
-  logger.info(`Deleted orphaned image: ${oldPath}`);
+  console.log(`[FeatureLoader] Deleted orphaned image: ${oldPath}`);
} catch (error) {
  // Ignore errors when deleting (file may already be gone)
  logger.warn(`[FeatureLoader] Failed to delete image: ${oldPath}`, error);
@@ -112,7 +111,7 @@ export class FeatureLoader {
// Copy the file
await secureFs.copyFile(fullOriginalPath, newPath);
-logger.info(`Copied image: ${originalPath} -> ${newPath}`);
+console.log(`[FeatureLoader] Copied image: ${originalPath} -> ${newPath}`);

// Try to delete the original temp file
@@ -333,7 +332,7 @@ export class FeatureLoader {
try {
  const featureDir = this.getFeatureDir(projectPath, featureId);
  await secureFs.rm(featureDir, { recursive: true, force: true });
-  logger.info(`Deleted feature ${featureId}`);
+  console.log(`[FeatureLoader] Deleted feature ${featureId}`);
  return true;
} catch (error) {
  logger.error(`[FeatureLoader] Failed to delete feature ${featureId}:`, error);
@@ -382,115 +381,4 @@ export class FeatureLoader {
    }
  }
}
/**
* Check if agent output exists for a feature
*/
async hasAgentOutput(projectPath: string, featureId: string): Promise<boolean> {
try {
const agentOutputPath = this.getAgentOutputPath(projectPath, featureId);
await secureFs.access(agentOutputPath);
return true;
} catch {
return false;
}
}
/**
* Update feature status with proper timestamp handling
* Used by auto-mode to update feature status during execution
*/
async updateStatus(
projectPath: string,
featureId: string,
status: FeatureStatus
): Promise<Feature | null> {
try {
const featureJsonPath = this.getFeatureJsonPath(projectPath, featureId);
const content = (await secureFs.readFile(featureJsonPath, 'utf-8')) as string;
const feature = JSON.parse(content) as Feature;
feature.status = status;
feature.updatedAt = new Date().toISOString();
// Handle justFinishedAt for waiting_approval status
if (status === 'waiting_approval') {
feature.justFinishedAt = new Date().toISOString();
} else {
feature.justFinishedAt = undefined;
}
await secureFs.writeFile(featureJsonPath, JSON.stringify(feature, null, 2));
return feature;
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
return null;
}
logger.error(`[FeatureLoader] Failed to update status for ${featureId}:`, error);
return null;
}
}
/**
* Update feature plan specification
* Handles version incrementing and timestamp management
*/
async updatePlanSpec(
projectPath: string,
featureId: string,
updates: Partial<PlanSpec>
): Promise<Feature | null> {
try {
const featureJsonPath = this.getFeatureJsonPath(projectPath, featureId);
const content = (await secureFs.readFile(featureJsonPath, 'utf-8')) as string;
const feature = JSON.parse(content) as Feature;
// Initialize planSpec if not present
if (!feature.planSpec) {
feature.planSpec = { status: 'pending', version: 1, reviewedByUser: false };
}
// Increment version if content changed
if (updates.content && updates.content !== feature.planSpec.content) {
feature.planSpec.version = (feature.planSpec.version || 0) + 1;
}
// Merge updates
Object.assign(feature.planSpec, updates);
feature.updatedAt = new Date().toISOString();
await secureFs.writeFile(featureJsonPath, JSON.stringify(feature, null, 2));
return feature;
} catch (error) {
if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
return null;
}
logger.error(`[FeatureLoader] Failed to update planSpec for ${featureId}:`, error);
return null;
}
}
/**
* Get features that are pending and ready to execute
* Filters by status and resolves dependencies
*/
async getPending(projectPath: string): Promise<Feature[]> {
try {
const allFeatures = await this.getAll(projectPath);
const pendingFeatures = allFeatures.filter(
(f) => f.status && ['pending', 'ready', 'backlog'].includes(f.status)
);
// Resolve dependencies and order features
const { orderedFeatures } = resolveDependencies(pendingFeatures);
// Filter to features whose dependencies are satisfied
return orderedFeatures.filter((feature: Feature) =>
areDependenciesSatisfied(feature, allFeatures)
);
} catch (error) {
logger.error('[FeatureLoader] Failed to get pending features:', error);
return [];
}
}
}

View File

@@ -8,13 +8,14 @@
 */
import { createLogger } from '@automaker/utils';
+import * as secureFs from '../lib/secure-fs.js';
import {
  getGlobalSettingsPath,
  getCredentialsPath,
  getProjectSettingsPath,
  ensureDataDir,
  ensureAutomakerDir,
-  secureFs,
} from '@automaker/platform';
import type {
  GlobalSettings,

View File

@@ -10,9 +10,6 @@ import { EventEmitter } from 'events';
import * as os from 'os';
import * as fs from 'fs';
import * as path from 'path';
import { createLogger } from '@automaker/utils';
const logger = createLogger('Terminal');
// Maximum scrollback buffer size (characters)
const MAX_SCROLLBACK_SIZE = 50000; // ~50KB per terminal
@@ -174,7 +171,7 @@ export class TerminalService extends EventEmitter {
// Reject paths with null bytes (could bypass path checks)
if (cwd.includes('\0')) {
- logger.warn(`Rejecting path with null byte: ${cwd.replace(/\0/g, '\\0')}`);
+ console.warn(`[Terminal] Rejecting path with null byte: ${cwd.replace(/\0/g, '\\0')}`);
return homeDir;
}
@@ -195,10 +192,10 @@ export class TerminalService extends EventEmitter {
if (stat.isDirectory()) {
return cwd;
}
- logger.warn(`Path exists but is not a directory: ${cwd}, falling back to home`);
+ console.warn(`[Terminal] Path exists but is not a directory: ${cwd}, falling back to home`);
return homeDir;
} catch {
- logger.warn(`Working directory does not exist: ${cwd}, falling back to home`);
+ console.warn(`[Terminal] Working directory does not exist: ${cwd}, falling back to home`);
return homeDir;
}
}
@@ -223,7 +220,7 @@ export class TerminalService extends EventEmitter {
setMaxSessions(limit: number): void {
if (limit >= MIN_MAX_SESSIONS && limit <= MAX_MAX_SESSIONS) {
maxSessions = limit;
- logger.info(`Max sessions limit updated to ${limit}`);
+ console.log(`[Terminal] Max sessions limit updated to ${limit}`);
}
}
@@ -234,7 +231,7 @@ export class TerminalService extends EventEmitter {
createSession(options: TerminalOptions = {}): TerminalSession | null {
// Check session limit
if (this.sessions.size >= maxSessions) {
- logger.error(`Max sessions (${maxSessions}) reached, refusing new session`);
+ console.error(`[Terminal] Max sessions (${maxSessions}) reached, refusing new session`);
return null;
}
@@ -259,7 +256,7 @@ export class TerminalService extends EventEmitter {
...options.env,
};
- logger.info(`Creating session ${id} with shell: ${shell} in ${cwd}`);
+ console.log(`[Terminal] Creating session ${id} with shell: ${shell} in ${cwd}`);
const ptyProcess = pty.spawn(shell, shellArgs, {
name: 'xterm-256color',
@@ -331,13 +328,13 @@ export class TerminalService extends EventEmitter {
// Handle exit
ptyProcess.onExit(({ exitCode }) => {
- logger.info(`Session ${id} exited with code ${exitCode}`);
+ console.log(`[Terminal] Session ${id} exited with code ${exitCode}`);
this.sessions.delete(id);
this.exitCallbacks.forEach((cb) => cb(id, exitCode));
this.emit('exit', id, exitCode);
});
- logger.info(`Session ${id} created successfully`);
+ console.log(`[Terminal] Session ${id} created successfully`);
return session;
}
@@ -347,7 +344,7 @@ export class TerminalService extends EventEmitter {
write(sessionId: string, data: string): boolean {
const session = this.sessions.get(sessionId);
if (!session) {
- logger.warn(`Session ${sessionId} not found`);
+ console.warn(`[Terminal] Session ${sessionId} not found`);
return false;
}
session.pty.write(data);
@@ -362,7 +359,7 @@ export class TerminalService extends EventEmitter {
resize(sessionId: string, cols: number, rows: number, suppressOutput: boolean = true): boolean {
const session = this.sessions.get(sessionId);
if (!session) {
- logger.warn(`Session ${sessionId} not found for resize`);
+ console.warn(`[Terminal] Session ${sessionId} not found for resize`);
return false;
}
try {
@@ -388,7 +385,7 @@ export class TerminalService extends EventEmitter {
return true;
} catch (error) {
- logger.error(`Error resizing session ${sessionId}:`, error);
+ console.error(`[Terminal] Error resizing session ${sessionId}:`, error);
session.resizeInProgress = false; // Clear flag on error
return false;
}
@@ -416,14 +413,14 @@ export class TerminalService extends EventEmitter {
}
// First try graceful SIGTERM to allow process cleanup
- logger.info(`Session ${sessionId} sending SIGTERM`);
+ console.log(`[Terminal] Session ${sessionId} sending SIGTERM`);
session.pty.kill('SIGTERM');
// Schedule SIGKILL fallback if process doesn't exit gracefully
// The onExit handler will remove session from map when it actually exits
setTimeout(() => {
if (this.sessions.has(sessionId)) {
- logger.info(`Session ${sessionId} still alive after SIGTERM, sending SIGKILL`);
+ console.log(`[Terminal] Session ${sessionId} still alive after SIGTERM, sending SIGKILL`);
try {
session.pty.kill('SIGKILL');
} catch {
@@ -434,10 +431,10 @@ export class TerminalService extends EventEmitter {
}
}, 1000);
- logger.info(`Session ${sessionId} kill initiated`);
+ console.log(`[Terminal] Session ${sessionId} kill initiated`);
return true;
} catch (error) {
- logger.error(`Error killing session ${sessionId}:`, error);
+ console.error(`[Terminal] Error killing session ${sessionId}:`, error);
// Still try to remove from map even if kill fails
this.sessions.delete(sessionId);
return false;
@@ -520,7 +517,7 @@ export class TerminalService extends EventEmitter {
* Clean up all sessions
*/
cleanup(): void {
- logger.info(`Cleaning up ${this.sessions.size} sessions`);
+ console.log(`[Terminal] Cleaning up ${this.sessions.size} sessions`);
this.sessions.forEach((session, id) => {
try {
// Clean up flush timeout
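The kill path above uses a graceful-shutdown pattern: send SIGTERM first, then fall back to SIGKILL after a one-second grace period if the session is still alive. A generic sketch of that pattern (illustrative only, not the project's code):

// Illustrative only: graceful termination with a forced fallback.
function killGracefully(proc: { kill(signal?: string): void }, isAlive: () => boolean): void {
  proc.kill('SIGTERM'); // give the process a chance to clean up

  setTimeout(() => {
    if (isAlive()) {
      try {
        proc.kill('SIGKILL'); // force-kill if SIGTERM was ignored
      } catch {
        // the process may have exited between the check and the kill; ignore
      }
    }
  }, 1000); // grace period matches the diff's setTimeout
}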

View File

@@ -1,6 +1,16 @@
import { describe, it, expect, beforeEach, vi } from 'vitest';
import { createMockExpressContext } from '../../utils/mocks.js';
/**
* Creates a mock Express context with socket properties for rate limiter support
*/
function createMockExpressContextWithSocket() {
const ctx = createMockExpressContext();
ctx.req.socket = { remoteAddress: '127.0.0.1' } as any;
ctx.res.setHeader = vi.fn().mockReturnThis();
return ctx;
}
/**
* Note: auth.ts reads AUTOMAKER_API_KEY at module load time.
* We need to reset modules and reimport for each test to get fresh state.
@@ -29,7 +39,7 @@ describe('auth.ts', () => {
process.env.AUTOMAKER_API_KEY = 'test-secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
- const { req, res, next } = createMockExpressContext();
+ const { req, res, next } = createMockExpressContextWithSocket();
authMiddleware(req, res, next);
@@ -45,7 +55,7 @@
process.env.AUTOMAKER_API_KEY = 'test-secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
- const { req, res, next } = createMockExpressContext();
+ const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = 'wrong-key';
authMiddleware(req, res, next);
@@ -62,7 +72,7 @@
process.env.AUTOMAKER_API_KEY = 'test-secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
- const { req, res, next } = createMockExpressContext();
+ const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = 'test-secret-key';
authMiddleware(req, res, next);
@@ -113,4 +123,197 @@
});
});
});
describe('security - AUTOMAKER_API_KEY not set', () => {
it('should allow requests without any authentication when API key is not configured', async () => {
delete process.env.AUTOMAKER_API_KEY;
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContext();
authMiddleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
expect(res.json).not.toHaveBeenCalled();
});
it('should allow requests even with invalid key header when API key is not configured', async () => {
delete process.env.AUTOMAKER_API_KEY;
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContext();
req.headers['x-api-key'] = 'some-random-key';
authMiddleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
it('should report auth as disabled when no API key is configured', async () => {
delete process.env.AUTOMAKER_API_KEY;
const { isAuthEnabled, getAuthStatus } = await import('@/lib/auth.js');
expect(isAuthEnabled()).toBe(false);
expect(getAuthStatus()).toEqual({
enabled: false,
method: 'none',
});
});
});
describe('security - authentication correctness', () => {
it('should correctly authenticate with matching API key', async () => {
const testKey = 'correct-secret-key-12345';
process.env.AUTOMAKER_API_KEY = testKey;
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = testKey;
authMiddleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
it('should reject keys that differ by a single character', async () => {
process.env.AUTOMAKER_API_KEY = 'correct-secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = 'correct-secret-keY'; // Last char uppercase
authMiddleware(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(next).not.toHaveBeenCalled();
});
it('should reject keys with extra characters', async () => {
process.env.AUTOMAKER_API_KEY = 'secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = 'secret-key-extra';
authMiddleware(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(next).not.toHaveBeenCalled();
});
it('should reject keys that are a prefix of the actual key', async () => {
process.env.AUTOMAKER_API_KEY = 'full-secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = 'full-secret';
authMiddleware(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(next).not.toHaveBeenCalled();
});
it('should reject empty string API key header', async () => {
process.env.AUTOMAKER_API_KEY = 'secret-key';
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = '';
authMiddleware(req, res, next);
// Empty string is falsy, so should get 401 (no key provided)
expect(res.status).toHaveBeenCalledWith(401);
expect(next).not.toHaveBeenCalled();
});
it('should handle keys with special characters correctly', async () => {
const specialKey = 'key-with-$pecial!@#chars_123';
process.env.AUTOMAKER_API_KEY = specialKey;
const { authMiddleware } = await import('@/lib/auth.js');
const { req, res, next } = createMockExpressContextWithSocket();
req.headers['x-api-key'] = specialKey;
authMiddleware(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
});
describe('security - rate limiting', () => {
it('should block requests after multiple failed attempts', async () => {
process.env.AUTOMAKER_API_KEY = 'correct-key';
const { authMiddleware } = await import('@/lib/auth.js');
const { apiKeyRateLimiter } = await import('@/lib/rate-limiter.js');
// Reset the rate limiter for this test
apiKeyRateLimiter.reset('192.168.1.100');
// Simulate multiple failed attempts
for (let i = 0; i < 5; i++) {
const { req, res, next } = createMockExpressContextWithSocket();
req.socket.remoteAddress = '192.168.1.100';
req.headers['x-api-key'] = 'wrong-key';
authMiddleware(req, res, next);
}
// Next request should be rate limited
const { req, res, next } = createMockExpressContextWithSocket();
req.socket.remoteAddress = '192.168.1.100';
req.headers['x-api-key'] = 'correct-key'; // Even with correct key
authMiddleware(req, res, next);
expect(res.status).toHaveBeenCalledWith(429);
expect(next).not.toHaveBeenCalled();
// Cleanup
apiKeyRateLimiter.reset('192.168.1.100');
});
it('should reset rate limit on successful authentication', async () => {
process.env.AUTOMAKER_API_KEY = 'correct-key';
const { authMiddleware } = await import('@/lib/auth.js');
const { apiKeyRateLimiter } = await import('@/lib/rate-limiter.js');
// Reset the rate limiter for this test
apiKeyRateLimiter.reset('192.168.1.101');
// Simulate a few failed attempts (not enough to trigger block)
for (let i = 0; i < 3; i++) {
const { req, res, next } = createMockExpressContextWithSocket();
req.socket.remoteAddress = '192.168.1.101';
req.headers['x-api-key'] = 'wrong-key';
authMiddleware(req, res, next);
}
// Successful authentication should reset the counter
const {
req: successReq,
res: successRes,
next: successNext,
} = createMockExpressContextWithSocket();
successReq.socket.remoteAddress = '192.168.1.101';
successReq.headers['x-api-key'] = 'correct-key';
authMiddleware(successReq, successRes, successNext);
expect(successNext).toHaveBeenCalled();
// After reset, we should have full attempts available again
expect(apiKeyRateLimiter.getAttemptsRemaining('192.168.1.101')).toBe(5);
// Cleanup
apiKeyRateLimiter.reset('192.168.1.101');
});
});
});
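auth.ts itself is not part of this excerpt, so the middleware below is only a sketch consistent with what these tests assert and with the commit message: 401 for a missing or empty key, 403 for a mismatched key checked via a constant-time comparison, 429 once the rate limiter blocks the client IP, and a limiter reset on success. The error payloads and the `secureCompare` helper name are assumptions.

import type { Request, Response, NextFunction } from 'express';
import { timingSafeEqual } from 'crypto';
import { apiKeyRateLimiter } from './rate-limiter.js'; // assumed export, per the tests

const API_KEY = process.env.AUTOMAKER_API_KEY; // read at module load time, as the tests note

// Constant-time comparison; a length mismatch is rejected up front since
// timingSafeEqual requires equal-length buffers.
function secureCompare(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}

export function authMiddleware(req: Request, res: Response, next: NextFunction): void {
  // Auth is disabled entirely when no key is configured (tests expect next() with no status call).
  if (!API_KEY) {
    next();
    return;
  }

  const ip = apiKeyRateLimiter.getClientIp(req);
  if (apiKeyRateLimiter.isBlocked(ip)) {
    res.status(429).json({ error: 'Too many failed attempts, try again later' });
    return;
  }

  const provided = req.headers['x-api-key'];
  if (!provided || typeof provided !== 'string') {
    res.status(401).json({ error: 'API key required' }); // missing or empty key -> 401
    return;
  }

  if (!secureCompare(provided, API_KEY)) {
    apiKeyRateLimiter.recordFailure(ip);
    res.status(403).json({ error: 'Invalid API key' }); // wrong key -> 403
    return;
  }

  apiKeyRateLimiter.reset(ip); // success clears the failure counter, as the tests assert
  next();
}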

View File

@@ -15,7 +15,7 @@ import {
SIMPLIFY_EXAMPLES,
ACCEPTANCE_EXAMPLES,
type EnhancementMode,
- } from '@automaker/prompts';
+ } from '@/lib/enhancement-prompts.js';
describe('enhancement-prompts.ts', () => {
describe('System Prompt Constants', () => {

View File

@@ -1,143 +0,0 @@
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
// Store original platform and env
const originalPlatform = process.platform;
const originalEnv = { ...process.env };
describe('exec-utils.ts', () => {
beforeEach(() => {
vi.resetModules();
});
afterEach(() => {
// Restore original values
Object.defineProperty(process, 'platform', { value: originalPlatform });
process.env = { ...originalEnv };
});
describe('execAsync', () => {
it('should be a promisified exec function', async () => {
const { execAsync } = await import('@/lib/exec-utils.js');
expect(typeof execAsync).toBe('function');
});
it('should execute shell commands successfully', async () => {
const { execAsync } = await import('@/lib/exec-utils.js');
const result = await execAsync('echo "hello"');
expect(result.stdout.trim()).toBe('hello');
});
it('should reject on invalid commands', async () => {
const { execAsync } = await import('@/lib/exec-utils.js');
await expect(execAsync('nonexistent-command-12345')).rejects.toThrow();
});
});
describe('extendedPath', () => {
it('should include the original PATH', async () => {
const { extendedPath } = await import('@/lib/exec-utils.js');
expect(extendedPath).toContain(process.env.PATH);
});
it('should include additional Unix paths on non-Windows', async () => {
Object.defineProperty(process, 'platform', { value: 'darwin' });
vi.resetModules();
const { extendedPath } = await import('@/lib/exec-utils.js');
expect(extendedPath).toContain('/opt/homebrew/bin');
expect(extendedPath).toContain('/usr/local/bin');
});
});
describe('execEnv', () => {
it('should have PATH set to extendedPath', async () => {
const { execEnv, extendedPath } = await import('@/lib/exec-utils.js');
expect(execEnv.PATH).toBe(extendedPath);
});
it('should include all original environment variables', async () => {
const { execEnv } = await import('@/lib/exec-utils.js');
// Should have common env vars
expect(execEnv.HOME || execEnv.USERPROFILE).toBeDefined();
});
});
describe('isENOENT', () => {
it('should return true for ENOENT errors', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
const error = { code: 'ENOENT' };
expect(isENOENT(error)).toBe(true);
});
it('should return false for other error codes', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
const error = { code: 'EACCES' };
expect(isENOENT(error)).toBe(false);
});
it('should return false for null', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
expect(isENOENT(null)).toBe(false);
});
it('should return false for undefined', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
expect(isENOENT(undefined)).toBe(false);
});
it('should return false for non-objects', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
expect(isENOENT('ENOENT')).toBe(false);
expect(isENOENT(123)).toBe(false);
});
it('should return false for objects without code property', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
expect(isENOENT({})).toBe(false);
expect(isENOENT({ message: 'error' })).toBe(false);
});
it('should handle Error objects with code', async () => {
const { isENOENT } = await import('@/lib/exec-utils.js');
const error = new Error('File not found') as Error & { code: string };
error.code = 'ENOENT';
expect(isENOENT(error)).toBe(true);
});
});
describe('Windows platform handling', () => {
it('should use semicolon as path separator on Windows', async () => {
Object.defineProperty(process, 'platform', { value: 'win32' });
process.env.LOCALAPPDATA = 'C:\\Users\\Test\\AppData\\Local';
process.env.PROGRAMFILES = 'C:\\Program Files';
vi.resetModules();
const { extendedPath } = await import('@/lib/exec-utils.js');
// Windows uses semicolon separator
expect(extendedPath).toContain(';');
expect(extendedPath).toContain('\\Git\\cmd');
});
});
describe('Unix platform handling', () => {
it('should use colon as path separator on Unix', async () => {
Object.defineProperty(process, 'platform', { value: 'linux' });
process.env.HOME = '/home/testuser';
vi.resetModules();
const { extendedPath } = await import('@/lib/exec-utils.js');
// Unix uses colon separator
expect(extendedPath).toContain(':');
expect(extendedPath).toContain('/home/linuxbrew/.linuxbrew/bin');
});
it('should include HOME/.local/bin path', async () => {
Object.defineProperty(process, 'platform', { value: 'darwin' });
process.env.HOME = '/Users/testuser';
vi.resetModules();
const { extendedPath } = await import('@/lib/exec-utils.js');
expect(extendedPath).toContain('/Users/testuser/.local/bin');
});
});
});

View File

@@ -0,0 +1,249 @@
import { describe, it, expect, beforeEach, vi, afterEach } from 'vitest';
import { RateLimiter } from '../../../src/lib/rate-limiter.js';
import type { Request } from 'express';
describe('RateLimiter', () => {
let rateLimiter: RateLimiter;
beforeEach(() => {
rateLimiter = new RateLimiter({
maxAttempts: 3,
windowMs: 60000, // 1 minute
blockDurationMs: 60000, // 1 minute
});
vi.useFakeTimers();
});
afterEach(() => {
vi.useRealTimers();
});
describe('getClientIp', () => {
it('should extract IP from x-forwarded-for header', () => {
const req = {
headers: { 'x-forwarded-for': '192.168.1.100' },
socket: { remoteAddress: '127.0.0.1' },
} as unknown as Request;
expect(rateLimiter.getClientIp(req)).toBe('192.168.1.100');
});
it('should use first IP from x-forwarded-for with multiple IPs', () => {
const req = {
headers: { 'x-forwarded-for': '192.168.1.100, 10.0.0.1, 172.16.0.1' },
socket: { remoteAddress: '127.0.0.1' },
} as unknown as Request;
expect(rateLimiter.getClientIp(req)).toBe('192.168.1.100');
});
it('should fall back to socket remoteAddress when no x-forwarded-for', () => {
const req = {
headers: {},
socket: { remoteAddress: '127.0.0.1' },
} as unknown as Request;
expect(rateLimiter.getClientIp(req)).toBe('127.0.0.1');
});
it('should return "unknown" when no IP can be determined', () => {
const req = {
headers: {},
socket: { remoteAddress: undefined },
} as unknown as Request;
expect(rateLimiter.getClientIp(req)).toBe('unknown');
});
});
describe('isBlocked', () => {
it('should return false for unknown keys', () => {
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(false);
});
it('should return false after recording fewer failures than max', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(false);
});
it('should return true after reaching max failures', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(true);
});
it('should return false after block expires', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(true);
// Advance time past block duration
vi.advanceTimersByTime(60001);
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(false);
});
});
describe('recordFailure', () => {
it('should return false when not yet blocked', () => {
expect(rateLimiter.recordFailure('192.168.1.1')).toBe(false);
expect(rateLimiter.recordFailure('192.168.1.1')).toBe(false);
});
it('should return true when threshold is reached', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.recordFailure('192.168.1.1')).toBe(true);
});
it('should reset counter after window expires', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
// Advance time past window
vi.advanceTimersByTime(60001);
// Should start fresh
expect(rateLimiter.recordFailure('192.168.1.1')).toBe(false);
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(2);
});
it('should track different IPs independently', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.2');
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(true);
expect(rateLimiter.isBlocked('192.168.1.2')).toBe(false);
});
});
describe('reset', () => {
it('should clear record for a key', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.reset('192.168.1.1');
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(3);
});
it('should clear blocked status', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(true);
rateLimiter.reset('192.168.1.1');
expect(rateLimiter.isBlocked('192.168.1.1')).toBe(false);
});
});
describe('getAttemptsRemaining', () => {
it('should return max attempts for unknown key', () => {
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(3);
});
it('should decrease as failures are recorded', () => {
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(2);
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(1);
rateLimiter.recordFailure('192.168.1.1');
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(0);
});
it('should return max attempts after window expires', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
vi.advanceTimersByTime(60001);
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(3);
});
});
describe('getBlockTimeRemaining', () => {
it('should return 0 for non-blocked key', () => {
expect(rateLimiter.getBlockTimeRemaining('192.168.1.1')).toBe(0);
});
it('should return remaining block time for blocked key', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
vi.advanceTimersByTime(30000); // Advance 30 seconds
const remaining = rateLimiter.getBlockTimeRemaining('192.168.1.1');
expect(remaining).toBeGreaterThan(29000);
expect(remaining).toBeLessThanOrEqual(30000);
});
it('should return 0 after block expires', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
vi.advanceTimersByTime(60001);
expect(rateLimiter.getBlockTimeRemaining('192.168.1.1')).toBe(0);
});
});
describe('cleanup', () => {
it('should remove expired blocks', () => {
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
rateLimiter.recordFailure('192.168.1.1');
vi.advanceTimersByTime(60001);
rateLimiter.cleanup();
// After cleanup, the record should be gone
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(3);
});
it('should remove expired windows', () => {
rateLimiter.recordFailure('192.168.1.1');
vi.advanceTimersByTime(60001);
rateLimiter.cleanup();
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(3);
});
it('should preserve active records', () => {
rateLimiter.recordFailure('192.168.1.1');
vi.advanceTimersByTime(30000); // Half the window
rateLimiter.cleanup();
expect(rateLimiter.getAttemptsRemaining('192.168.1.1')).toBe(2);
});
});
describe('default configuration', () => {
it('should use sensible defaults', () => {
const defaultLimiter = new RateLimiter();
// Should have 5 max attempts by default
expect(defaultLimiter.getAttemptsRemaining('test')).toBe(5);
});
});
});
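The RateLimiter implementation itself is not shown in this compare, but the contract these tests pin down is small: a per-key failure counter inside a time window, a block once maxAttempts is reached, and helpers for remaining attempts and block time. A minimal in-memory sketch consistent with the tests follows; the record field names and the 15-minute window/block defaults are assumptions beyond the asserted default of 5 attempts.

import type { Request } from 'express';

// Illustrative sketch only; the real implementation may differ in structure and defaults.
interface RateLimiterOptions {
  maxAttempts?: number;
  windowMs?: number;
  blockDurationMs?: number;
}

interface AttemptRecord {
  count: number;
  windowStart: number;
  blockedUntil?: number;
}

export class RateLimiter {
  private records = new Map<string, AttemptRecord>();
  private readonly maxAttempts: number;
  private readonly windowMs: number;
  private readonly blockDurationMs: number;

  constructor(options: RateLimiterOptions = {}) {
    this.maxAttempts = options.maxAttempts ?? 5; // default asserted by the tests
    this.windowMs = options.windowMs ?? 15 * 60 * 1000; // assumed default
    this.blockDurationMs = options.blockDurationMs ?? 15 * 60 * 1000; // assumed default
  }

  /** Prefer the first entry of x-forwarded-for, then the socket address. */
  getClientIp(req: Request): string {
    const forwarded = req.headers['x-forwarded-for'];
    if (typeof forwarded === 'string' && forwarded.length > 0) {
      return forwarded.split(',')[0].trim();
    }
    return req.socket?.remoteAddress ?? 'unknown';
  }

  isBlocked(key: string): boolean {
    const record = this.records.get(key);
    if (!record?.blockedUntil) return false;
    if (Date.now() >= record.blockedUntil) {
      this.records.delete(key); // block expired
      return false;
    }
    return true;
  }

  /** Returns true when this failure pushes the key over the threshold. */
  recordFailure(key: string): boolean {
    const now = Date.now();
    let record = this.records.get(key);
    if (!record || now - record.windowStart > this.windowMs) {
      record = { count: 0, windowStart: now }; // start a fresh window
      this.records.set(key, record);
    }
    record.count += 1;
    if (record.count >= this.maxAttempts) {
      record.blockedUntil = now + this.blockDurationMs;
      return true;
    }
    return false;
  }

  reset(key: string): void {
    this.records.delete(key);
  }

  getAttemptsRemaining(key: string): number {
    const record = this.records.get(key);
    if (!record || Date.now() - record.windowStart > this.windowMs) {
      return this.maxAttempts;
    }
    return Math.max(0, this.maxAttempts - record.count);
  }

  getBlockTimeRemaining(key: string): number {
    const record = this.records.get(key);
    if (!record?.blockedUntil) return 0;
    return Math.max(0, record.blockedUntil - Date.now());
  }

  /** Drop records whose window and block have both expired. */
  cleanup(): void {
    const now = Date.now();
    for (const [key, record] of this.records) {
      const windowExpired = now - record.windowStart > this.windowMs;
      const blockExpired = !record.blockedUntil || now >= record.blockedUntil;
      if (windowExpired && blockExpired) {
        this.records.delete(key);
      }
    }
  }
}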

Some files were not shown because too many files have changed in this diff.