mirror of
https://github.com/AutoMaker-Org/automaker.git
synced 2026-01-30 14:22:02 +00:00
Compare commits
1 Commits
feature/mc
...
weird-side
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
2eb92a0402 |
3
.gitignore
vendored
3
.gitignore
vendored
@@ -79,5 +79,4 @@ blob-report/
|
||||
# Misc
|
||||
*.pem
|
||||
|
||||
docker-compose.override.yml
|
||||
.claude/
|
||||
docker-compose.override.yml
|
||||
172
CLAUDE.md
172
CLAUDE.md
@@ -1,172 +0,0 @@
|
||||
# CLAUDE.md
|
||||
|
||||
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
|
||||
|
||||
## Project Overview
|
||||
|
||||
Automaker is an autonomous AI development studio built as an npm workspace monorepo. It provides a Kanban-based workflow where AI agents (powered by Claude Agent SDK) implement features in isolated git worktrees.
|
||||
|
||||
## Common Commands
|
||||
|
||||
```bash
|
||||
# Development
|
||||
npm run dev # Interactive launcher (choose web or electron)
|
||||
npm run dev:web # Web browser mode (localhost:3007)
|
||||
npm run dev:electron # Desktop app mode
|
||||
npm run dev:electron:debug # Desktop with DevTools open
|
||||
|
||||
# Building
|
||||
npm run build # Build web application
|
||||
npm run build:packages # Build all shared packages (required before other builds)
|
||||
npm run build:electron # Build desktop app for current platform
|
||||
npm run build:server # Build server only
|
||||
|
||||
# Testing
|
||||
npm run test # E2E tests (Playwright, headless)
|
||||
npm run test:headed # E2E tests with browser visible
|
||||
npm run test:server # Server unit tests (Vitest)
|
||||
npm run test:packages # All shared package tests
|
||||
npm run test:all # All tests (packages + server)
|
||||
|
||||
# Single test file
|
||||
npm run test:server -- tests/unit/specific.test.ts
|
||||
|
||||
# Linting and formatting
|
||||
npm run lint # ESLint
|
||||
npm run format # Prettier write
|
||||
npm run format:check # Prettier check
|
||||
```
|
||||
|
||||
## Architecture
|
||||
|
||||
### Monorepo Structure
|
||||
|
||||
```
|
||||
automaker/
|
||||
├── apps/
|
||||
│ ├── ui/ # React + Vite + Electron frontend (port 3007)
|
||||
│ └── server/ # Express + WebSocket backend (port 3008)
|
||||
└── libs/ # Shared packages (@automaker/*)
|
||||
├── types/ # Core TypeScript definitions (no dependencies)
|
||||
├── utils/ # Logging, errors, image processing, context loading
|
||||
├── prompts/ # AI prompt templates
|
||||
├── platform/ # Path management, security, process spawning
|
||||
├── model-resolver/ # Claude model alias resolution
|
||||
├── dependency-resolver/ # Feature dependency ordering
|
||||
└── git-utils/ # Git operations & worktree management
|
||||
```
|
||||
|
||||
### Package Dependency Chain
|
||||
|
||||
Packages can only depend on packages above them:
|
||||
|
||||
```
|
||||
@automaker/types (no dependencies)
|
||||
↓
|
||||
@automaker/utils, @automaker/prompts, @automaker/platform, @automaker/model-resolver, @automaker/dependency-resolver
|
||||
↓
|
||||
@automaker/git-utils
|
||||
↓
|
||||
@automaker/server, @automaker/ui
|
||||
```
|
||||
|
||||
### Key Technologies
|
||||
|
||||
- **Frontend**: React 19, Vite 7, Electron 39, TanStack Router, Zustand 5, Tailwind CSS 4
|
||||
- **Backend**: Express 5, WebSocket (ws), Claude Agent SDK, node-pty
|
||||
- **Testing**: Playwright (E2E), Vitest (unit)
|
||||
|
||||
### Server Architecture
|
||||
|
||||
The server (`apps/server/src/`) follows a modular pattern:
|
||||
|
||||
- `routes/` - Express route handlers organized by feature (agent, features, auto-mode, worktree, etc.)
|
||||
- `services/` - Business logic (AgentService, AutoModeService, FeatureLoader, TerminalService)
|
||||
- `providers/` - AI provider abstraction (currently Claude via Claude Agent SDK)
|
||||
- `lib/` - Utilities (events, auth, worktree metadata)
|
||||
|
||||
### Frontend Architecture
|
||||
|
||||
The UI (`apps/ui/src/`) uses:
|
||||
|
||||
- `routes/` - TanStack Router file-based routing
|
||||
- `components/views/` - Main view components (board, settings, terminal, etc.)
|
||||
- `store/` - Zustand stores with persistence (app-store.ts, setup-store.ts)
|
||||
- `hooks/` - Custom React hooks
|
||||
- `lib/` - Utilities and API client
|
||||
|
||||
## Data Storage
|
||||
|
||||
### Per-Project Data (`.automaker/`)
|
||||
|
||||
```
|
||||
.automaker/
|
||||
├── features/ # Feature JSON files and images
|
||||
│ └── {featureId}/
|
||||
│ ├── feature.json
|
||||
│ ├── agent-output.md
|
||||
│ └── images/
|
||||
├── context/ # Context files for AI agents (CLAUDE.md, etc.)
|
||||
├── settings.json # Project-specific settings
|
||||
├── spec.md # Project specification
|
||||
└── analysis.json # Project structure analysis
|
||||
```
|
||||
|
||||
### Global Data (`DATA_DIR`, default `./data`)
|
||||
|
||||
```
|
||||
data/
|
||||
├── settings.json # Global settings, profiles, shortcuts
|
||||
├── credentials.json # API keys
|
||||
├── sessions-metadata.json # Chat session metadata
|
||||
└── agent-sessions/ # Conversation histories
|
||||
```
|
||||
|
||||
## Import Conventions
|
||||
|
||||
Always import from shared packages, never from old paths:
|
||||
|
||||
```typescript
|
||||
// ✅ Correct
|
||||
import type { Feature, ExecuteOptions } from '@automaker/types';
|
||||
import { createLogger, classifyError } from '@automaker/utils';
|
||||
import { getEnhancementPrompt } from '@automaker/prompts';
|
||||
import { getFeatureDir, ensureAutomakerDir } from '@automaker/platform';
|
||||
import { resolveModelString } from '@automaker/model-resolver';
|
||||
import { resolveDependencies } from '@automaker/dependency-resolver';
|
||||
import { getGitRepositoryDiffs } from '@automaker/git-utils';
|
||||
|
||||
// ❌ Never import from old paths
|
||||
import { Feature } from '../services/feature-loader'; // Wrong
|
||||
import { createLogger } from '../lib/logger'; // Wrong
|
||||
```
|
||||
|
||||
## Key Patterns
|
||||
|
||||
### Event-Driven Architecture
|
||||
|
||||
All server operations emit events that stream to the frontend via WebSocket. Events are created using `createEventEmitter()` from `lib/events.ts`.
|
||||
|
||||
### Git Worktree Isolation
|
||||
|
||||
Each feature executes in an isolated git worktree, created via `@automaker/git-utils`. This protects the main branch during AI agent execution.
|
||||
|
||||
### Context Files
|
||||
|
||||
Project-specific rules are stored in `.automaker/context/` and automatically loaded into agent prompts via `loadContextFiles()` from `@automaker/utils`.
|
||||
|
||||
### Model Resolution
|
||||
|
||||
Use `resolveModelString()` from `@automaker/model-resolver` to convert model aliases:
|
||||
|
||||
- `haiku` → `claude-haiku-4-5`
|
||||
- `sonnet` → `claude-sonnet-4-20250514`
|
||||
- `opus` → `claude-opus-4-5-20251101`
|
||||
|
||||
## Environment Variables
|
||||
|
||||
- `ANTHROPIC_API_KEY` - Anthropic API key (or use Claude Code CLI auth)
|
||||
- `PORT` - Server port (default: 3008)
|
||||
- `DATA_DIR` - Data storage directory (default: ./data)
|
||||
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
|
||||
- `AUTOMAKER_MOCK_AGENT=true` - Enable mock agent mode for CI testing
|
||||
412
README.md
412
README.md
@@ -19,7 +19,7 @@
|
||||
|
||||
- [What Makes Automaker Different?](#what-makes-automaker-different)
|
||||
- [The Workflow](#the-workflow)
|
||||
- [Powered by Claude Agent SDK](#powered-by-claude-agent-sdk)
|
||||
- [Powered by Claude Code](#powered-by-claude-code)
|
||||
- [Why This Matters](#why-this-matters)
|
||||
- [Security Disclaimer](#security-disclaimer)
|
||||
- [Community & Support](#community--support)
|
||||
@@ -28,36 +28,22 @@
|
||||
- [Quick Start](#quick-start)
|
||||
- [How to Run](#how-to-run)
|
||||
- [Development Mode](#development-mode)
|
||||
- [Electron Desktop App (Recommended)](#electron-desktop-app-recommended)
|
||||
- [Web Browser Mode](#web-browser-mode)
|
||||
- [Building for Production](#building-for-production)
|
||||
- [Running Production Build](#running-production-build)
|
||||
- [Testing](#testing)
|
||||
- [Linting](#linting)
|
||||
- [Environment Configuration](#environment-configuration)
|
||||
- [Authentication Setup](#authentication-setup)
|
||||
- [Authentication Options](#authentication-options)
|
||||
- [Persistent Setup (Optional)](#persistent-setup-optional)
|
||||
- [Features](#features)
|
||||
- [Core Workflow](#core-workflow)
|
||||
- [AI & Planning](#ai--planning)
|
||||
- [Project Management](#project-management)
|
||||
- [Collaboration & Review](#collaboration--review)
|
||||
- [Developer Tools](#developer-tools)
|
||||
- [Advanced Features](#advanced-features)
|
||||
- [Tech Stack](#tech-stack)
|
||||
- [Frontend](#frontend)
|
||||
- [Backend](#backend)
|
||||
- [Testing & Quality](#testing--quality)
|
||||
- [Shared Libraries](#shared-libraries)
|
||||
- [Available Views](#available-views)
|
||||
- [Architecture](#architecture)
|
||||
- [Monorepo Structure](#monorepo-structure)
|
||||
- [How It Works](#how-it-works)
|
||||
- [Key Architectural Patterns](#key-architectural-patterns)
|
||||
- [Security & Isolation](#security--isolation)
|
||||
- [Data Storage](#data-storage)
|
||||
- [Learn More](#learn-more)
|
||||
- [License](#license)
|
||||
|
||||
</details>
|
||||
|
||||
Automaker is an autonomous AI development studio that transforms how you build software. Instead of manually writing every line of code, you describe features on a Kanban board and watch as AI agents powered by Claude Agent SDK automatically implement them. Built with React, Vite, Electron, and Express, Automaker provides a complete workflow for managing AI agents through a desktop application (or web browser), with features like real-time streaming, git worktree isolation, plan approval, and multi-agent task execution.
|
||||
Automaker is an autonomous AI development studio that transforms how you build software. Instead of manually writing every line of code, you describe features on a Kanban board and watch as AI agents powered by Claude Code automatically implement them.
|
||||
|
||||

|
||||
|
||||
@@ -73,9 +59,9 @@ Traditional development tools help you write code. Automaker helps you **orchest
|
||||
4. **Review & Verify** - Review the changes, run tests, and approve when ready
|
||||
5. **Ship Faster** - Build entire applications in days, not weeks
|
||||
|
||||
### Powered by Claude Agent SDK
|
||||
### Powered by Claude Code
|
||||
|
||||
Automaker leverages the [Claude Agent SDK](https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk) to give AI agents full access to your codebase. Agents can read files, write code, execute commands, run tests, and make git commits—all while working in isolated git worktrees to keep your main branch safe. The SDK provides autonomous AI agents that can use tools, make decisions, and complete complex multi-step tasks without constant human intervention.
|
||||
Automaker leverages the [Claude Agent SDK](https://docs.anthropic.com/en/docs/claude-code) to give AI agents full access to your codebase. Agents can read files, write code, execute commands, run tests, and make git commits—all while working in isolated git worktrees to keep your main branch safe.
|
||||
|
||||
### Why This Matters
|
||||
|
||||
@@ -109,7 +95,8 @@ In the Discord, you can:
|
||||
- 🚀 Show off projects built with AI agents
|
||||
- 🤝 Collaborate with other developers and contributors
|
||||
|
||||
👉 **Join the Discord:** [Agentic Jumpstart Discord](https://discord.gg/jjem7aEDKU)
|
||||
👉 **Join the Discord:**
|
||||
https://discord.gg/jjem7aEDKU
|
||||
|
||||
---
|
||||
|
||||
@@ -117,49 +104,28 @@ In the Discord, you can:
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- **Node.js 18+** (tested with Node.js 22)
|
||||
- **npm** (comes with Node.js)
|
||||
- **Authentication** (choose one):
|
||||
- **[Claude Code CLI](https://code.claude.com/docs/en/overview)** (recommended) - Install and authenticate, credentials used automatically
|
||||
- **Anthropic API Key** - Direct API key for Claude Agent SDK ([get one here](https://console.anthropic.com/))
|
||||
- Node.js 18+
|
||||
- npm
|
||||
- [Claude Code CLI](https://docs.anthropic.com/en/docs/claude-code) installed and authenticated
|
||||
|
||||
### Quick Start
|
||||
|
||||
```bash
|
||||
# 1. Clone the repository
|
||||
# 1. Clone the repo
|
||||
git clone https://github.com/AutoMaker-Org/automaker.git
|
||||
cd automaker
|
||||
|
||||
# 2. Install dependencies
|
||||
npm install
|
||||
|
||||
# 3. Build shared packages (Now can be skipped npm install / run dev does it automaticly)
|
||||
# 3. Build local shared packages
|
||||
npm run build:packages
|
||||
|
||||
# 4. Set up authentication (skip if using Claude Code CLI)
|
||||
# If using Claude Code CLI: credentials are detected automatically
|
||||
# If using API key directly, choose one method:
|
||||
|
||||
# Option A: Environment variable
|
||||
export ANTHROPIC_API_KEY="sk-ant-..."
|
||||
|
||||
# Option B: Create .env file in project root
|
||||
echo "ANTHROPIC_API_KEY=sk-ant-..." > .env
|
||||
|
||||
# 5. Start Automaker (interactive launcher)
|
||||
# 4. Run Automaker (pick your mode)
|
||||
npm run dev
|
||||
# Choose between:
|
||||
# 1. Web Application (browser at localhost:3007)
|
||||
# 2. Desktop Application (Electron - recommended)
|
||||
# Then choose your run mode when prompted, or use specific commands below
|
||||
```
|
||||
|
||||
**Note:** The `npm run dev` command will:
|
||||
|
||||
- Check for dependencies and install if needed
|
||||
- Install Playwright browsers for E2E tests
|
||||
- Kill any processes on ports 3007/3008
|
||||
- Present an interactive menu to choose your run mode
|
||||
|
||||
## How to Run
|
||||
|
||||
### Development Mode
|
||||
@@ -197,65 +163,31 @@ npm run dev:web
|
||||
|
||||
### Building for Production
|
||||
|
||||
#### Web Application
|
||||
|
||||
```bash
|
||||
# Build for web deployment (uses Vite)
|
||||
# Build Next.js app
|
||||
npm run build
|
||||
|
||||
# Run production build
|
||||
npm run start
|
||||
```
|
||||
|
||||
#### Desktop Application
|
||||
|
||||
```bash
|
||||
# Build for current platform (macOS/Windows/Linux)
|
||||
# Build Electron app for distribution
|
||||
npm run build:electron
|
||||
|
||||
# Platform-specific builds
|
||||
npm run build:electron:mac # macOS (DMG + ZIP, x64 + arm64)
|
||||
npm run build:electron:win # Windows (NSIS installer, x64)
|
||||
npm run build:electron:linux # Linux (AppImage + DEB, x64)
|
||||
|
||||
# Output directory: apps/ui/release/
|
||||
```
|
||||
|
||||
#### Docker Deployment
|
||||
### Running Production Build
|
||||
|
||||
```bash
|
||||
# Build and run with Docker Compose (recommended for security)
|
||||
docker-compose up -d
|
||||
|
||||
# Access at http://localhost:3007
|
||||
# API at http://localhost:3008
|
||||
# Start production Next.js server
|
||||
npm run start
|
||||
```
|
||||
|
||||
### Testing
|
||||
|
||||
#### End-to-End Tests (Playwright)
|
||||
|
||||
```bash
|
||||
npm run test # Headless E2E tests
|
||||
npm run test:headed # Browser visible E2E tests
|
||||
# Run tests headless
|
||||
npm run test
|
||||
|
||||
# Run tests with browser visible
|
||||
npm run test:headed
|
||||
```
|
||||
|
||||
#### Unit Tests (Vitest)
|
||||
|
||||
```bash
|
||||
npm run test:server # Server unit tests
|
||||
npm run test:server:coverage # Server tests with coverage
|
||||
npm run test:packages # All shared package tests
|
||||
npm run test:all # Packages + server tests
|
||||
```
|
||||
|
||||
#### Test Configuration
|
||||
|
||||
- E2E tests run on ports 3007 (UI) and 3008 (server)
|
||||
- Automatically starts test servers before running
|
||||
- Uses Chromium browser via Playwright
|
||||
- Mock agent mode available in CI with `AUTOMAKER_MOCK_AGENT=true`
|
||||
|
||||
### Linting
|
||||
|
||||
```bash
|
||||
@@ -263,283 +195,59 @@ npm run test:all # Packages + server tests
|
||||
npm run lint
|
||||
```
|
||||
|
||||
### Environment Configuration
|
||||
### Authentication Options
|
||||
|
||||
#### Authentication (if not using Claude Code CLI)
|
||||
Automaker supports multiple authentication methods (in order of priority):
|
||||
|
||||
- `ANTHROPIC_API_KEY` - Your Anthropic API key for Claude Agent SDK (not needed if using Claude Code CLI)
|
||||
| Method | Environment Variable | Description |
|
||||
| ---------------- | -------------------- | ------------------------------- |
|
||||
| API Key (env) | `ANTHROPIC_API_KEY` | Anthropic API key |
|
||||
| API Key (stored) | — | Anthropic API key stored in app |
|
||||
|
||||
#### Optional - Server
|
||||
|
||||
- `PORT` - Server port (default: 3008)
|
||||
- `DATA_DIR` - Data storage directory (default: ./data)
|
||||
- `ENABLE_REQUEST_LOGGING` - HTTP request logging (default: true)
|
||||
|
||||
#### Optional - Security
|
||||
|
||||
- `AUTOMAKER_API_KEY` - Optional API authentication for the server
|
||||
- `ALLOWED_ROOT_DIRECTORY` - Restrict file operations to specific directory
|
||||
- `CORS_ORIGIN` - CORS policy (default: \*)
|
||||
|
||||
#### Optional - Development
|
||||
|
||||
- `VITE_SKIP_ELECTRON` - Skip Electron in dev mode
|
||||
- `OPEN_DEVTOOLS` - Auto-open DevTools in Electron
|
||||
|
||||
### Authentication Setup
|
||||
|
||||
#### Option 1: Claude Code CLI (Recommended)
|
||||
|
||||
Install and authenticate the Claude Code CLI following the [official quickstart guide](https://code.claude.com/docs/en/quickstart).
|
||||
|
||||
Once authenticated, Automaker will automatically detect and use your CLI credentials. No additional configuration needed!
|
||||
|
||||
#### Option 2: Direct API Key
|
||||
|
||||
If you prefer not to use the CLI, you can provide an Anthropic API key directly using one of these methods:
|
||||
|
||||
##### 2a. Shell Configuration
|
||||
### Persistent Setup (Optional)
|
||||
|
||||
Add to your `~/.bashrc` or `~/.zshrc`:
|
||||
|
||||
```bash
|
||||
export ANTHROPIC_API_KEY="sk-ant-..."
|
||||
export ANTHROPIC_API_KEY="YOUR_API_KEY_HERE"
|
||||
```
|
||||
|
||||
Then restart your terminal or run `source ~/.bashrc` (or `source ~/.zshrc`).
|
||||
|
||||
##### 2b. .env File
|
||||
|
||||
Create a `.env` file in the project root (gitignored):
|
||||
|
||||
```bash
|
||||
ANTHROPIC_API_KEY=sk-ant-...
|
||||
PORT=3008
|
||||
DATA_DIR=./data
|
||||
```
|
||||
|
||||
##### 2c. In-App Storage
|
||||
|
||||
The application can store your API key securely in the settings UI. The key is persisted in the `DATA_DIR` directory.
|
||||
Then restart your terminal or run `source ~/.bashrc`.
|
||||
|
||||
## Features
|
||||
|
||||
### Core Workflow
|
||||
|
||||
- 📋 **Kanban Board** - Visual drag-and-drop board to manage features through backlog, in progress, waiting approval, and verified stages
|
||||
- 🤖 **AI Agent Integration** - Automatic AI agent assignment to implement features when moved to "In Progress"
|
||||
- 🔀 **Git Worktree Isolation** - Each feature executes in isolated git worktrees to protect your main branch
|
||||
- 📡 **Real-time Streaming** - Watch AI agents work in real-time with live tool usage, progress updates, and task completion
|
||||
- 🔄 **Follow-up Instructions** - Send additional instructions to running agents without stopping them
|
||||
|
||||
### AI & Planning
|
||||
|
||||
- 🧠 **Multi-Model Support** - Choose from Claude Opus, Sonnet, and Haiku per feature
|
||||
- 💭 **Extended Thinking** - Enable thinking modes (none, medium, deep, ultra) for complex problem-solving
|
||||
- 📝 **Planning Modes** - Four planning levels: skip (direct implementation), lite (quick plan), spec (task breakdown), full (phased execution)
|
||||
- ✅ **Plan Approval** - Review and approve AI-generated plans before implementation begins
|
||||
- 📊 **Multi-Agent Task Execution** - Spec mode spawns dedicated agents per task for focused implementation
|
||||
|
||||
### Project Management
|
||||
|
||||
- 🔍 **Project Analysis** - AI-powered codebase analysis to understand your project structure
|
||||
- 💡 **Feature Suggestions** - AI-generated feature suggestions based on project analysis
|
||||
- 📁 **Context Management** - Add markdown, images, and documentation files that agents automatically reference
|
||||
- 🔗 **Dependency Blocking** - Features can depend on other features, enforcing execution order
|
||||
- 🌳 **Graph View** - Visualize feature dependencies with interactive graph visualization
|
||||
- 📋 **GitHub Integration** - Import issues, validate feasibility, and convert to tasks automatically
|
||||
|
||||
### Collaboration & Review
|
||||
|
||||
- 🧪 **Verification Workflow** - Features move to "Waiting Approval" for review and testing
|
||||
- 💬 **Agent Chat** - Interactive chat sessions with AI agents for exploratory work
|
||||
- 👤 **AI Profiles** - Create custom agent configurations with different prompts, models, and settings
|
||||
- 📜 **Session History** - Persistent chat sessions across restarts with full conversation history
|
||||
- 🔍 **Git Diff Viewer** - Review changes made by agents before approving
|
||||
|
||||
### Developer Tools
|
||||
|
||||
- 🖥️ **Integrated Terminal** - Full terminal access with tabs, splits, and persistent sessions
|
||||
- 🖼️ **Image Support** - Attach screenshots and diagrams to feature descriptions for visual context
|
||||
- ⚡ **Concurrent Execution** - Configure how many features can run simultaneously (default: 3)
|
||||
- ⌨️ **Keyboard Shortcuts** - Fully customizable shortcuts for navigation and actions
|
||||
- 🎨 **Theme System** - 25+ themes including Dark, Light, Dracula, Nord, Catppuccin, and more
|
||||
- 🖥️ **Cross-Platform** - Desktop app for macOS (x64, arm64), Windows (x64), and Linux (x64)
|
||||
- 🌐 **Web Mode** - Run in browser or as Electron desktop app
|
||||
|
||||
### Advanced Features
|
||||
|
||||
- 🔐 **Docker Isolation** - Security-focused Docker deployment with no host filesystem access
|
||||
- 🎯 **Worktree Management** - Create, switch, commit, and create PRs from worktrees
|
||||
- 📊 **Usage Tracking** - Monitor Claude API usage with detailed metrics
|
||||
- 🔊 **Audio Notifications** - Optional completion sounds (mutable in settings)
|
||||
- 💾 **Auto-save** - All work automatically persisted to `.automaker/` directory
|
||||
- 🧠 **Multi-Model Support** - Choose from multiple AI models including Claude Opus, Sonnet, and more
|
||||
- 💭 **Extended Thinking** - Enable extended thinking modes for complex problem-solving
|
||||
- 📡 **Real-time Agent Output** - View live agent output, logs, and file diffs as features are being implemented
|
||||
- 🔍 **Project Analysis** - AI-powered project structure analysis to understand your codebase
|
||||
- 📁 **Context Management** - Add context files to help AI agents understand your project better
|
||||
- 💡 **Feature Suggestions** - AI-generated feature suggestions based on your project
|
||||
- 🖼️ **Image Support** - Attach images and screenshots to feature descriptions
|
||||
- ⚡ **Concurrent Processing** - Configure concurrency to process multiple features simultaneously
|
||||
- 🧪 **Test Integration** - Automatic test running and verification for implemented features
|
||||
- 🔀 **Git Integration** - View git diffs and track changes made by AI agents
|
||||
- 👤 **AI Profiles** - Create and manage different AI agent profiles for various tasks
|
||||
- 💬 **Chat History** - Keep track of conversations and interactions with AI agents
|
||||
- ⌨️ **Keyboard Shortcuts** - Efficient navigation and actions via keyboard shortcuts
|
||||
- 🎨 **Dark/Light Theme** - Beautiful UI with theme support
|
||||
- 🖥️ **Cross-Platform** - Desktop application built with Electron for Windows, macOS, and Linux
|
||||
|
||||
## Tech Stack
|
||||
|
||||
### Frontend
|
||||
|
||||
- **React 19** - UI framework
|
||||
- **Vite 7** - Build tool and development server
|
||||
- **Electron 39** - Desktop application framework
|
||||
- **TypeScript 5.9** - Type safety
|
||||
- **TanStack Router** - File-based routing
|
||||
- **Zustand 5** - State management with persistence
|
||||
- **Tailwind CSS 4** - Utility-first styling with 25+ themes
|
||||
- **Radix UI** - Accessible component primitives
|
||||
- **dnd-kit** - Drag and drop for Kanban board
|
||||
- **@xyflow/react** - Graph visualization for dependencies
|
||||
- **xterm.js** - Integrated terminal emulator
|
||||
- **CodeMirror 6** - Code editor for XML/syntax highlighting
|
||||
- **Lucide Icons** - Icon library
|
||||
|
||||
### Backend
|
||||
|
||||
- **Node.js** - JavaScript runtime with ES modules
|
||||
- **Express 5** - HTTP server framework
|
||||
- **TypeScript 5.9** - Type safety
|
||||
- **Claude Agent SDK** - AI agent integration (@anthropic-ai/claude-agent-sdk)
|
||||
- **WebSocket (ws)** - Real-time event streaming
|
||||
- **node-pty** - PTY terminal sessions
|
||||
|
||||
### Testing & Quality
|
||||
|
||||
- **Playwright** - End-to-end testing
|
||||
- **Vitest** - Unit testing framework
|
||||
- **ESLint 9** - Code linting
|
||||
- **Prettier 3** - Code formatting
|
||||
- **Husky** - Git hooks for pre-commit formatting
|
||||
|
||||
### Shared Libraries
|
||||
|
||||
- **@automaker/types** - Shared TypeScript definitions
|
||||
- **@automaker/utils** - Logging, error handling, image processing
|
||||
- **@automaker/prompts** - AI prompt templates
|
||||
- **@automaker/platform** - Path management and security
|
||||
- **@automaker/model-resolver** - Claude model alias resolution
|
||||
- **@automaker/dependency-resolver** - Feature dependency ordering
|
||||
- **@automaker/git-utils** - Git operations and worktree management
|
||||
|
||||
## Available Views
|
||||
|
||||
Automaker provides several specialized views accessible via the sidebar or keyboard shortcuts:
|
||||
|
||||
| View | Shortcut | Description |
|
||||
| ------------------ | -------- | ------------------------------------------------------------------------------------------------ |
|
||||
| **Board** | `K` | Kanban board for managing feature workflow (Backlog → In Progress → Waiting Approval → Verified) |
|
||||
| **Agent** | `A` | Interactive chat sessions with AI agents for exploratory work and questions |
|
||||
| **Spec** | `D` | Project specification editor with AI-powered generation and feature suggestions |
|
||||
| **Context** | `C` | Manage context files (markdown, images) that AI agents automatically reference |
|
||||
| **Profiles** | `M` | Create and manage AI agent profiles with custom prompts and configurations |
|
||||
| **Settings** | `S` | Configure themes, shortcuts, defaults, authentication, and more |
|
||||
| **Terminal** | `T` | Integrated terminal with tabs, splits, and persistent sessions |
|
||||
| **GitHub Issues** | - | Import and validate GitHub issues, convert to tasks |
|
||||
| **Running Agents** | - | View all active agents across projects with status and progress |
|
||||
|
||||
### Keyboard Navigation
|
||||
|
||||
All shortcuts are customizable in Settings. Default shortcuts:
|
||||
|
||||
- **Navigation:** `K` (Board), `A` (Agent), `D` (Spec), `C` (Context), `S` (Settings), `M` (Profiles), `T` (Terminal)
|
||||
- **UI:** `` ` `` (Toggle sidebar)
|
||||
- **Actions:** `N` (New item in current view), `G` (Start next features), `O` (Open project), `P` (Project picker)
|
||||
- **Projects:** `Q`/`E` (Cycle previous/next project)
|
||||
|
||||
## Architecture
|
||||
|
||||
### Monorepo Structure
|
||||
|
||||
Automaker is built as an npm workspace monorepo with two main applications and seven shared packages:
|
||||
|
||||
```text
|
||||
automaker/
|
||||
├── apps/
|
||||
│ ├── ui/ # React + Vite + Electron frontend
|
||||
│ └── server/ # Express + WebSocket backend
|
||||
└── libs/ # Shared packages
|
||||
├── types/ # Core TypeScript definitions
|
||||
├── utils/ # Logging, errors, utilities
|
||||
├── prompts/ # AI prompt templates
|
||||
├── platform/ # Path management, security
|
||||
├── model-resolver/ # Claude model aliasing
|
||||
├── dependency-resolver/ # Feature dependency ordering
|
||||
└── git-utils/ # Git operations & worktree management
|
||||
```
|
||||
|
||||
### How It Works
|
||||
|
||||
1. **Feature Definition** - Users create feature cards on the Kanban board with descriptions, images, and configuration
|
||||
2. **Git Worktree Creation** - When a feature starts, a git worktree is created for isolated development
|
||||
3. **Agent Execution** - Claude Agent SDK executes in the worktree with full file system and command access
|
||||
4. **Real-time Streaming** - Agent output streams via WebSocket to the frontend for live monitoring
|
||||
5. **Plan Approval** (optional) - For spec/full planning modes, agents generate plans that require user approval
|
||||
6. **Multi-Agent Tasks** (spec mode) - Each task in the spec gets a dedicated agent for focused implementation
|
||||
7. **Verification** - Features move to "Waiting Approval" where changes can be reviewed via git diff
|
||||
8. **Integration** - After approval, changes can be committed and PRs created from the worktree
|
||||
|
||||
### Key Architectural Patterns
|
||||
|
||||
- **Event-Driven Architecture** - All server operations emit events that stream to the frontend
|
||||
- **Provider Pattern** - Extensible AI provider system (currently Claude, designed for future providers)
|
||||
- **Service-Oriented Backend** - Modular services for agent management, features, terminals, settings
|
||||
- **State Management** - Zustand with persistence for frontend state across restarts
|
||||
- **File-Based Storage** - No database; features stored as JSON files in `.automaker/` directory
|
||||
|
||||
### Security & Isolation
|
||||
|
||||
- **Git Worktrees** - Each feature executes in an isolated git worktree, protecting your main branch
|
||||
- **Path Sandboxing** - Optional `ALLOWED_ROOT_DIRECTORY` restricts file access
|
||||
- **Docker Isolation** - Recommended deployment uses Docker with no host filesystem access
|
||||
- **Plan Approval** - Optional plan review before implementation prevents unwanted changes
|
||||
|
||||
### Data Storage
|
||||
|
||||
Automaker uses a file-based storage system (no database required):
|
||||
|
||||
#### Per-Project Data
|
||||
|
||||
Stored in `{projectPath}/.automaker/`:
|
||||
|
||||
```text
|
||||
.automaker/
|
||||
├── features/ # Feature JSON files and images
|
||||
│ └── {featureId}/
|
||||
│ ├── feature.json # Feature metadata
|
||||
│ ├── agent-output.md # AI agent output log
|
||||
│ └── images/ # Attached images
|
||||
├── context/ # Context files for AI agents
|
||||
├── settings.json # Project-specific settings
|
||||
├── spec.md # Project specification
|
||||
├── analysis.json # Project structure analysis
|
||||
└── feature-suggestions.json # AI-generated suggestions
|
||||
```
|
||||
|
||||
#### Global Data
|
||||
|
||||
Stored in `DATA_DIR` (default `./data`):
|
||||
|
||||
```text
|
||||
data/
|
||||
├── settings.json # Global settings, profiles, shortcuts
|
||||
├── credentials.json # API keys (encrypted)
|
||||
├── sessions-metadata.json # Chat session metadata
|
||||
└── agent-sessions/ # Conversation histories
|
||||
└── {sessionId}.json
|
||||
```
|
||||
- [Next.js](https://nextjs.org) - React framework
|
||||
- [Electron](https://www.electronjs.org/) - Desktop application framework
|
||||
- [Tailwind CSS](https://tailwindcss.com/) - Styling
|
||||
- [Zustand](https://zustand-demo.pmnd.rs/) - State management
|
||||
- [dnd-kit](https://dndkit.com/) - Drag and drop functionality
|
||||
|
||||
## Learn More
|
||||
|
||||
### Documentation
|
||||
To learn more about Next.js, take a look at the following resources:
|
||||
|
||||
- [Project Documentation](./docs/) - Architecture guides, patterns, and developer docs
|
||||
- [Docker Isolation Guide](./docs/docker-isolation.md) - Security-focused Docker deployment
|
||||
- [Shared Packages Guide](./docs/llm-shared-packages.md) - Using monorepo packages
|
||||
|
||||
### Community
|
||||
|
||||
Join the **Agentic Jumpstart** Discord to connect with other builders exploring **agentic coding**:
|
||||
|
||||
👉 [Agentic Jumpstart Discord](https://discord.gg/jjem7aEDKU)
|
||||
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
|
||||
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
|
||||
|
||||
## License
|
||||
|
||||
|
||||
1282
apps/app/server-bundle/package-lock.json
generated
Normal file
1282
apps/app/server-bundle/package-lock.json
generated
Normal file
File diff suppressed because it is too large
Load Diff
15
apps/app/server-bundle/package.json
Normal file
15
apps/app/server-bundle/package.json
Normal file
@@ -0,0 +1,15 @@
|
||||
{
|
||||
"name": "@automaker/server-bundle",
|
||||
"version": "0.1.0",
|
||||
"type": "module",
|
||||
"main": "dist/index.js",
|
||||
"dependencies": {
|
||||
"@anthropic-ai/claude-agent-sdk": "^0.1.61",
|
||||
"cors": "^2.8.5",
|
||||
"dotenv": "^17.2.3",
|
||||
"express": "^5.1.0",
|
||||
"morgan": "^1.10.1",
|
||||
"node-pty": "1.1.0-beta41",
|
||||
"ws": "^8.18.0"
|
||||
}
|
||||
}
|
||||
@@ -28,7 +28,6 @@
|
||||
"@automaker/prompts": "^1.0.0",
|
||||
"@automaker/types": "^1.0.0",
|
||||
"@automaker/utils": "^1.0.0",
|
||||
"@modelcontextprotocol/sdk": "^1.25.1",
|
||||
"cors": "^2.8.5",
|
||||
"dotenv": "^17.2.3",
|
||||
"express": "^5.2.1",
|
||||
|
||||
@@ -48,12 +48,6 @@ import { createClaudeRoutes } from './routes/claude/index.js';
|
||||
import { ClaudeUsageService } from './services/claude-usage-service.js';
|
||||
import { createGitHubRoutes } from './routes/github/index.js';
|
||||
import { createContextRoutes } from './routes/context/index.js';
|
||||
import { createBacklogPlanRoutes } from './routes/backlog-plan/index.js';
|
||||
import { cleanupStaleValidations } from './routes/github/routes/validation-common.js';
|
||||
import { createMCPRoutes } from './routes/mcp/index.js';
|
||||
import { MCPTestService } from './services/mcp-test-service.js';
|
||||
import { createPipelineRoutes } from './routes/pipeline/index.js';
|
||||
import { pipelineService } from './services/pipeline-service.js';
|
||||
|
||||
// Load environment variables
|
||||
dotenv.config();
|
||||
@@ -105,13 +99,9 @@ if (ENABLE_REQUEST_LOGGING) {
|
||||
})
|
||||
);
|
||||
}
|
||||
// SECURITY: Restrict CORS to localhost UI origins to prevent drive-by attacks
|
||||
// from malicious websites. MCP server endpoints can execute arbitrary commands,
|
||||
// so allowing any origin would enable RCE from any website visited while Automaker runs.
|
||||
const DEFAULT_CORS_ORIGINS = ['http://localhost:3007', 'http://127.0.0.1:3007'];
|
||||
app.use(
|
||||
cors({
|
||||
origin: process.env.CORS_ORIGIN || DEFAULT_CORS_ORIGINS,
|
||||
origin: process.env.CORS_ORIGIN || '*',
|
||||
credentials: true,
|
||||
})
|
||||
);
|
||||
@@ -121,13 +111,11 @@ app.use(express.json({ limit: '50mb' }));
|
||||
const events: EventEmitter = createEventEmitter();
|
||||
|
||||
// Create services
|
||||
// Note: settingsService is created first so it can be injected into other services
|
||||
const settingsService = new SettingsService(DATA_DIR);
|
||||
const agentService = new AgentService(DATA_DIR, events, settingsService);
|
||||
const agentService = new AgentService(DATA_DIR, events);
|
||||
const featureLoader = new FeatureLoader();
|
||||
const autoModeService = new AutoModeService(events, settingsService);
|
||||
const autoModeService = new AutoModeService(events);
|
||||
const settingsService = new SettingsService(DATA_DIR);
|
||||
const claudeUsageService = new ClaudeUsageService();
|
||||
const mcpTestService = new MCPTestService(settingsService);
|
||||
|
||||
// Initialize services
|
||||
(async () => {
|
||||
@@ -135,15 +123,6 @@ const mcpTestService = new MCPTestService(settingsService);
|
||||
console.log('[Server] Agent service initialized');
|
||||
})();
|
||||
|
||||
// Run stale validation cleanup every hour to prevent memory leaks from crashed validations
|
||||
const VALIDATION_CLEANUP_INTERVAL_MS = 60 * 60 * 1000; // 1 hour
|
||||
setInterval(() => {
|
||||
const cleaned = cleanupStaleValidations();
|
||||
if (cleaned > 0) {
|
||||
console.log(`[Server] Cleaned up ${cleaned} stale validation entries`);
|
||||
}
|
||||
}, VALIDATION_CLEANUP_INTERVAL_MS);
|
||||
|
||||
// Mount API routes - health is unauthenticated for monitoring
|
||||
app.use('/api/health', createHealthRoutes());
|
||||
|
||||
@@ -159,20 +138,17 @@ app.use('/api/enhance-prompt', createEnhancePromptRoutes());
|
||||
app.use('/api/worktree', createWorktreeRoutes());
|
||||
app.use('/api/git', createGitRoutes());
|
||||
app.use('/api/setup', createSetupRoutes());
|
||||
app.use('/api/suggestions', createSuggestionsRoutes(events, settingsService));
|
||||
app.use('/api/suggestions', createSuggestionsRoutes(events));
|
||||
app.use('/api/models', createModelsRoutes());
|
||||
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events, settingsService));
|
||||
app.use('/api/spec-regeneration', createSpecRegenerationRoutes(events));
|
||||
app.use('/api/running-agents', createRunningAgentsRoutes(autoModeService));
|
||||
app.use('/api/workspace', createWorkspaceRoutes());
|
||||
app.use('/api/templates', createTemplatesRoutes());
|
||||
app.use('/api/terminal', createTerminalRoutes());
|
||||
app.use('/api/settings', createSettingsRoutes(settingsService));
|
||||
app.use('/api/claude', createClaudeRoutes(claudeUsageService));
|
||||
app.use('/api/github', createGitHubRoutes(events, settingsService));
|
||||
app.use('/api/context', createContextRoutes(settingsService));
|
||||
app.use('/api/backlog-plan', createBacklogPlanRoutes(events, settingsService));
|
||||
app.use('/api/mcp', createMCPRoutes(mcpTestService));
|
||||
app.use('/api/pipeline', createPipelineRoutes(pipelineService));
|
||||
app.use('/api/github', createGitHubRoutes());
|
||||
app.use('/api/context', createContextRoutes());
|
||||
|
||||
// Create HTTP server
|
||||
const server = createServer(app);
|
||||
@@ -201,31 +177,12 @@ server.on('upgrade', (request, socket, head) => {
|
||||
|
||||
// Events WebSocket connection handler
|
||||
wss.on('connection', (ws: WebSocket) => {
|
||||
console.log('[WebSocket] Client connected, ready state:', ws.readyState);
|
||||
console.log('[WebSocket] Client connected');
|
||||
|
||||
// Subscribe to all events and forward to this client
|
||||
const unsubscribe = events.subscribe((type, payload) => {
|
||||
console.log('[WebSocket] Event received:', {
|
||||
type,
|
||||
hasPayload: !!payload,
|
||||
payloadKeys: payload ? Object.keys(payload) : [],
|
||||
wsReadyState: ws.readyState,
|
||||
wsOpen: ws.readyState === WebSocket.OPEN,
|
||||
});
|
||||
|
||||
if (ws.readyState === WebSocket.OPEN) {
|
||||
const message = JSON.stringify({ type, payload });
|
||||
console.log('[WebSocket] Sending event to client:', {
|
||||
type,
|
||||
messageLength: message.length,
|
||||
sessionId: (payload as any)?.sessionId,
|
||||
});
|
||||
ws.send(message);
|
||||
} else {
|
||||
console.log(
|
||||
'[WebSocket] WARNING: Cannot send event, WebSocket not open. ReadyState:',
|
||||
ws.readyState
|
||||
);
|
||||
ws.send(JSON.stringify({ type, payload }));
|
||||
}
|
||||
});
|
||||
|
||||
@@ -235,7 +192,7 @@ wss.on('connection', (ws: WebSocket) => {
|
||||
});
|
||||
|
||||
ws.on('error', (error) => {
|
||||
console.error('[WebSocket] ERROR:', error);
|
||||
console.error('[WebSocket] Error:', error);
|
||||
unsubscribe();
|
||||
});
|
||||
});
|
||||
|
||||
@@ -18,7 +18,7 @@
|
||||
import type { Options } from '@anthropic-ai/claude-agent-sdk';
|
||||
import path from 'path';
|
||||
import { resolveModelString } from '@automaker/model-resolver';
|
||||
import { DEFAULT_MODELS, CLAUDE_MODEL_MAP, type McpServerConfig } from '@automaker/types';
|
||||
import { DEFAULT_MODELS, CLAUDE_MODEL_MAP } from '@automaker/types';
|
||||
import { isPathAllowed, PathNotAllowedError, getAllowedRootDirectory } from '@automaker/platform';
|
||||
|
||||
/**
|
||||
@@ -136,106 +136,6 @@ function getBaseOptions(): Partial<Options> {
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* MCP permission options result
|
||||
*/
|
||||
interface McpPermissionOptions {
|
||||
/** Whether tools should be restricted to a preset */
|
||||
shouldRestrictTools: boolean;
|
||||
/** Options to spread when MCP bypass is enabled */
|
||||
bypassOptions: Partial<Options>;
|
||||
/** Options to spread for MCP servers */
|
||||
mcpServerOptions: Partial<Options>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Build MCP-related options based on configuration.
|
||||
* Centralizes the logic for determining permission modes and tool restrictions
|
||||
* when MCP servers are configured.
|
||||
*
|
||||
* @param config - The SDK options config
|
||||
* @returns Object with MCP permission settings to spread into final options
|
||||
*/
|
||||
function buildMcpOptions(config: CreateSdkOptionsConfig): McpPermissionOptions {
|
||||
const hasMcpServers = config.mcpServers && Object.keys(config.mcpServers).length > 0;
|
||||
// Default to true for autonomous workflow. Security is enforced when adding servers
|
||||
// via the security warning dialog that explains the risks.
|
||||
const mcpAutoApprove = config.mcpAutoApproveTools ?? true;
|
||||
const mcpUnrestricted = config.mcpUnrestrictedTools ?? true;
|
||||
|
||||
// Determine if we should bypass permissions based on settings
|
||||
const shouldBypassPermissions = hasMcpServers && mcpAutoApprove;
|
||||
// Determine if we should restrict tools (only when no MCP or unrestricted is disabled)
|
||||
const shouldRestrictTools = !hasMcpServers || !mcpUnrestricted;
|
||||
|
||||
return {
|
||||
shouldRestrictTools,
|
||||
// Only include bypass options when MCP is configured and auto-approve is enabled
|
||||
bypassOptions: shouldBypassPermissions
|
||||
? {
|
||||
permissionMode: 'bypassPermissions' as const,
|
||||
// Required flag when using bypassPermissions mode
|
||||
allowDangerouslySkipPermissions: true,
|
||||
}
|
||||
: {},
|
||||
// Include MCP servers if configured
|
||||
mcpServerOptions: config.mcpServers ? { mcpServers: config.mcpServers } : {},
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Build system prompt configuration based on autoLoadClaudeMd setting.
|
||||
* When autoLoadClaudeMd is true:
|
||||
* - Uses preset mode with 'claude_code' to enable CLAUDE.md auto-loading
|
||||
* - If there's a custom systemPrompt, appends it to the preset
|
||||
* - Sets settingSources to ['project'] for SDK to load CLAUDE.md files
|
||||
*
|
||||
* @param config - The SDK options config
|
||||
* @returns Object with systemPrompt and settingSources for SDK options
|
||||
*/
|
||||
function buildClaudeMdOptions(config: CreateSdkOptionsConfig): {
|
||||
systemPrompt?: string | SystemPromptConfig;
|
||||
settingSources?: Array<'user' | 'project' | 'local'>;
|
||||
} {
|
||||
if (!config.autoLoadClaudeMd) {
|
||||
// Standard mode - just pass through the system prompt as-is
|
||||
return config.systemPrompt ? { systemPrompt: config.systemPrompt } : {};
|
||||
}
|
||||
|
||||
// Auto-load CLAUDE.md mode - use preset with settingSources
|
||||
const result: {
|
||||
systemPrompt: SystemPromptConfig;
|
||||
settingSources: Array<'user' | 'project' | 'local'>;
|
||||
} = {
|
||||
systemPrompt: {
|
||||
type: 'preset',
|
||||
preset: 'claude_code',
|
||||
},
|
||||
// Load both user (~/.claude/CLAUDE.md) and project (.claude/CLAUDE.md) settings
|
||||
settingSources: ['user', 'project'],
|
||||
};
|
||||
|
||||
// If there's a custom system prompt, append it to the preset
|
||||
if (config.systemPrompt) {
|
||||
result.systemPrompt.append = config.systemPrompt;
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* System prompt configuration for SDK options
|
||||
* When using preset mode with claude_code, CLAUDE.md files are automatically loaded
|
||||
*/
|
||||
export interface SystemPromptConfig {
|
||||
/** Use preset mode with claude_code to enable CLAUDE.md auto-loading */
|
||||
type: 'preset';
|
||||
/** The preset to use - 'claude_code' enables CLAUDE.md loading */
|
||||
preset: 'claude_code';
|
||||
/** Optional additional prompt to append to the preset */
|
||||
append?: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Options configuration for creating SDK options
|
||||
*/
|
||||
@@ -260,31 +160,8 @@ export interface CreateSdkOptionsConfig {
|
||||
type: 'json_schema';
|
||||
schema: Record<string, unknown>;
|
||||
};
|
||||
|
||||
/** Enable auto-loading of CLAUDE.md files via SDK's settingSources */
|
||||
autoLoadClaudeMd?: boolean;
|
||||
|
||||
/** Enable sandbox mode for bash command isolation */
|
||||
enableSandboxMode?: boolean;
|
||||
|
||||
/** MCP servers to make available to the agent */
|
||||
mcpServers?: Record<string, McpServerConfig>;
|
||||
|
||||
/** Auto-approve MCP tool calls without permission prompts */
|
||||
mcpAutoApproveTools?: boolean;
|
||||
|
||||
/** Allow unrestricted tools when MCP servers are enabled */
|
||||
mcpUnrestrictedTools?: boolean;
|
||||
}
|
||||
|
||||
// Re-export MCP types from @automaker/types for convenience
|
||||
export type {
|
||||
McpServerConfig,
|
||||
McpStdioServerConfig,
|
||||
McpSSEServerConfig,
|
||||
McpHttpServerConfig,
|
||||
} from '@automaker/types';
|
||||
|
||||
/**
|
||||
* Create SDK options for spec generation
|
||||
*
|
||||
@@ -292,15 +169,11 @@ export type {
|
||||
* - Uses read-only tools for codebase analysis
|
||||
* - Extended turns for thorough exploration
|
||||
* - Opus model by default (can be overridden)
|
||||
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
|
||||
*/
|
||||
export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Options {
|
||||
// Validate working directory before creating options
|
||||
validateWorkingDirectory(config.cwd);
|
||||
|
||||
// Build CLAUDE.md auto-loading options if enabled
|
||||
const claudeMdOptions = buildClaudeMdOptions(config);
|
||||
|
||||
return {
|
||||
...getBaseOptions(),
|
||||
// Override permissionMode - spec generation only needs read-only tools
|
||||
@@ -311,7 +184,7 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
|
||||
maxTurns: MAX_TURNS.maximum,
|
||||
cwd: config.cwd,
|
||||
allowedTools: [...TOOL_PRESETS.specGeneration],
|
||||
...claudeMdOptions,
|
||||
...(config.systemPrompt && { systemPrompt: config.systemPrompt }),
|
||||
...(config.abortController && { abortController: config.abortController }),
|
||||
...(config.outputFormat && { outputFormat: config.outputFormat }),
|
||||
};
|
||||
@@ -324,15 +197,11 @@ export function createSpecGenerationOptions(config: CreateSdkOptionsConfig): Opt
|
||||
* - Uses read-only tools (just needs to read the spec)
|
||||
* - Quick turns since it's mostly JSON generation
|
||||
* - Sonnet model by default for speed
|
||||
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
|
||||
*/
|
||||
export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig): Options {
|
||||
// Validate working directory before creating options
|
||||
validateWorkingDirectory(config.cwd);
|
||||
|
||||
// Build CLAUDE.md auto-loading options if enabled
|
||||
const claudeMdOptions = buildClaudeMdOptions(config);
|
||||
|
||||
return {
|
||||
...getBaseOptions(),
|
||||
// Override permissionMode - feature generation only needs read-only tools
|
||||
@@ -341,7 +210,7 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
|
||||
maxTurns: MAX_TURNS.quick,
|
||||
cwd: config.cwd,
|
||||
allowedTools: [...TOOL_PRESETS.readOnly],
|
||||
...claudeMdOptions,
|
||||
...(config.systemPrompt && { systemPrompt: config.systemPrompt }),
|
||||
...(config.abortController && { abortController: config.abortController }),
|
||||
};
|
||||
}
|
||||
@@ -353,22 +222,18 @@ export function createFeatureGenerationOptions(config: CreateSdkOptionsConfig):
|
||||
* - Uses read-only tools for analysis
|
||||
* - Standard turns to allow thorough codebase exploration and structured output generation
|
||||
* - Opus model by default for thorough analysis
|
||||
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
|
||||
*/
|
||||
export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Options {
|
||||
// Validate working directory before creating options
|
||||
validateWorkingDirectory(config.cwd);
|
||||
|
||||
// Build CLAUDE.md auto-loading options if enabled
|
||||
const claudeMdOptions = buildClaudeMdOptions(config);
|
||||
|
||||
return {
|
||||
...getBaseOptions(),
|
||||
model: getModelForUseCase('suggestions', config.model),
|
||||
maxTurns: MAX_TURNS.extended,
|
||||
cwd: config.cwd,
|
||||
allowedTools: [...TOOL_PRESETS.readOnly],
|
||||
...claudeMdOptions,
|
||||
...(config.systemPrompt && { systemPrompt: config.systemPrompt }),
|
||||
...(config.abortController && { abortController: config.abortController }),
|
||||
...(config.outputFormat && { outputFormat: config.outputFormat }),
|
||||
};
|
||||
@@ -381,8 +246,7 @@ export function createSuggestionsOptions(config: CreateSdkOptionsConfig): Option
|
||||
* - Full tool access for code modification
|
||||
* - Standard turns for interactive sessions
|
||||
* - Model priority: explicit model > session model > chat default
|
||||
* - Sandbox mode controlled by enableSandboxMode setting
|
||||
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
|
||||
* - Sandbox enabled for bash safety
|
||||
*/
|
||||
export function createChatOptions(config: CreateSdkOptionsConfig): Options {
|
||||
// Validate working directory before creating options
|
||||
@@ -391,30 +255,18 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
|
||||
// Model priority: explicit model > session model > chat default
|
||||
const effectiveModel = config.model || config.sessionModel;
|
||||
|
||||
// Build CLAUDE.md auto-loading options if enabled
|
||||
const claudeMdOptions = buildClaudeMdOptions(config);
|
||||
|
||||
// Build MCP-related options
|
||||
const mcpOptions = buildMcpOptions(config);
|
||||
|
||||
return {
|
||||
...getBaseOptions(),
|
||||
model: getModelForUseCase('chat', effectiveModel),
|
||||
maxTurns: MAX_TURNS.standard,
|
||||
cwd: config.cwd,
|
||||
// Only restrict tools if no MCP servers configured or unrestricted is disabled
|
||||
...(mcpOptions.shouldRestrictTools && { allowedTools: [...TOOL_PRESETS.chat] }),
|
||||
// Apply MCP bypass options if configured
|
||||
...mcpOptions.bypassOptions,
|
||||
...(config.enableSandboxMode && {
|
||||
sandbox: {
|
||||
enabled: true,
|
||||
autoAllowBashIfSandboxed: true,
|
||||
},
|
||||
}),
|
||||
...claudeMdOptions,
|
||||
allowedTools: [...TOOL_PRESETS.chat],
|
||||
sandbox: {
|
||||
enabled: true,
|
||||
autoAllowBashIfSandboxed: true,
|
||||
},
|
||||
...(config.systemPrompt && { systemPrompt: config.systemPrompt }),
|
||||
...(config.abortController && { abortController: config.abortController }),
|
||||
...mcpOptions.mcpServerOptions,
|
||||
};
|
||||
}
|
||||
|
||||
@@ -425,37 +277,24 @@ export function createChatOptions(config: CreateSdkOptionsConfig): Options {
|
||||
* - Full tool access for code modification and implementation
|
||||
* - Extended turns for thorough feature implementation
|
||||
* - Uses default model (can be overridden)
|
||||
* - Sandbox mode controlled by enableSandboxMode setting
|
||||
* - When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
|
||||
* - Sandbox enabled for bash safety
|
||||
*/
|
||||
export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
|
||||
// Validate working directory before creating options
|
||||
validateWorkingDirectory(config.cwd);
|
||||
|
||||
// Build CLAUDE.md auto-loading options if enabled
|
||||
const claudeMdOptions = buildClaudeMdOptions(config);
|
||||
|
||||
// Build MCP-related options
|
||||
const mcpOptions = buildMcpOptions(config);
|
||||
|
||||
return {
|
||||
...getBaseOptions(),
|
||||
model: getModelForUseCase('auto', config.model),
|
||||
maxTurns: MAX_TURNS.maximum,
|
||||
cwd: config.cwd,
|
||||
// Only restrict tools if no MCP servers configured or unrestricted is disabled
|
||||
...(mcpOptions.shouldRestrictTools && { allowedTools: [...TOOL_PRESETS.fullAccess] }),
|
||||
// Apply MCP bypass options if configured
|
||||
...mcpOptions.bypassOptions,
|
||||
...(config.enableSandboxMode && {
|
||||
sandbox: {
|
||||
enabled: true,
|
||||
autoAllowBashIfSandboxed: true,
|
||||
},
|
||||
}),
|
||||
...claudeMdOptions,
|
||||
allowedTools: [...TOOL_PRESETS.fullAccess],
|
||||
sandbox: {
|
||||
enabled: true,
|
||||
autoAllowBashIfSandboxed: true,
|
||||
},
|
||||
...(config.systemPrompt && { systemPrompt: config.systemPrompt }),
|
||||
...(config.abortController && { abortController: config.abortController }),
|
||||
...mcpOptions.mcpServerOptions,
|
||||
};
|
||||
}
|
||||
|
||||
@@ -463,7 +302,6 @@ export function createAutoModeOptions(config: CreateSdkOptionsConfig): Options {
|
||||
* Create custom SDK options with explicit configuration
|
||||
*
|
||||
* Use this when the preset options don't fit your use case.
|
||||
* When autoLoadClaudeMd is true, uses preset mode and settingSources for CLAUDE.md loading
|
||||
*/
|
||||
export function createCustomOptions(
|
||||
config: CreateSdkOptionsConfig & {
|
||||
@@ -475,30 +313,14 @@ export function createCustomOptions(
|
||||
// Validate working directory before creating options
|
||||
validateWorkingDirectory(config.cwd);
|
||||
|
||||
// Build CLAUDE.md auto-loading options if enabled
|
||||
const claudeMdOptions = buildClaudeMdOptions(config);
|
||||
|
||||
// Build MCP-related options
|
||||
const mcpOptions = buildMcpOptions(config);
|
||||
|
||||
// For custom options: use explicit allowedTools if provided, otherwise use preset based on MCP settings
|
||||
const effectiveAllowedTools = config.allowedTools
|
||||
? [...config.allowedTools]
|
||||
: mcpOptions.shouldRestrictTools
|
||||
? [...TOOL_PRESETS.readOnly]
|
||||
: undefined;
|
||||
|
||||
return {
|
||||
...getBaseOptions(),
|
||||
model: getModelForUseCase('default', config.model),
|
||||
maxTurns: config.maxTurns ?? MAX_TURNS.maximum,
|
||||
cwd: config.cwd,
|
||||
...(effectiveAllowedTools && { allowedTools: effectiveAllowedTools }),
|
||||
allowedTools: config.allowedTools ? [...config.allowedTools] : [...TOOL_PRESETS.readOnly],
|
||||
...(config.sandbox && { sandbox: config.sandbox }),
|
||||
// Apply MCP bypass options if configured
|
||||
...mcpOptions.bypassOptions,
|
||||
...claudeMdOptions,
|
||||
...(config.systemPrompt && { systemPrompt: config.systemPrompt }),
|
||||
...(config.abortController && { abortController: config.abortController }),
|
||||
...mcpOptions.mcpServerOptions,
|
||||
};
|
||||
}
|
||||
|
||||
@@ -20,9 +20,4 @@ export const {
|
||||
lstat,
|
||||
joinPath,
|
||||
resolvePath,
|
||||
// Throttling configuration and monitoring
|
||||
configureThrottling,
|
||||
getThrottlingConfig,
|
||||
getPendingOperations,
|
||||
getActiveOperations,
|
||||
} = secureFs;
|
||||
|
||||
@@ -1,257 +0,0 @@
|
||||
/**
|
||||
* Helper utilities for loading settings and context file handling across different parts of the server
|
||||
*/
|
||||
|
||||
import type { SettingsService } from '../services/settings-service.js';
|
||||
import type { ContextFilesResult, ContextFileInfo } from '@automaker/utils';
|
||||
import type { MCPServerConfig, McpServerConfig } from '@automaker/types';
|
||||
|
||||
/**
|
||||
* Get the autoLoadClaudeMd setting, with project settings taking precedence over global.
|
||||
* Returns false if settings service is not available.
|
||||
*
|
||||
* @param projectPath - Path to the project
|
||||
* @param settingsService - Optional settings service instance
|
||||
* @param logPrefix - Prefix for log messages (e.g., '[DescribeImage]')
|
||||
* @returns Promise resolving to the autoLoadClaudeMd setting value
|
||||
*/
|
||||
export async function getAutoLoadClaudeMdSetting(
|
||||
projectPath: string,
|
||||
settingsService?: SettingsService | null,
|
||||
logPrefix = '[SettingsHelper]'
|
||||
): Promise<boolean> {
|
||||
if (!settingsService) {
|
||||
console.log(`${logPrefix} SettingsService not available, autoLoadClaudeMd disabled`);
|
||||
return false;
|
||||
}
|
||||
|
||||
try {
|
||||
// Check project settings first (takes precedence)
|
||||
const projectSettings = await settingsService.getProjectSettings(projectPath);
|
||||
if (projectSettings.autoLoadClaudeMd !== undefined) {
|
||||
console.log(
|
||||
`${logPrefix} autoLoadClaudeMd from project settings: ${projectSettings.autoLoadClaudeMd}`
|
||||
);
|
||||
return projectSettings.autoLoadClaudeMd;
|
||||
}
|
||||
|
||||
// Fall back to global settings
|
||||
const globalSettings = await settingsService.getGlobalSettings();
|
||||
const result = globalSettings.autoLoadClaudeMd ?? false;
|
||||
console.log(`${logPrefix} autoLoadClaudeMd from global settings: ${result}`);
|
||||
return result;
|
||||
} catch (error) {
|
||||
console.error(`${logPrefix} Failed to load autoLoadClaudeMd setting:`, error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the enableSandboxMode setting from global settings.
|
||||
* Returns false if the settings service is not available.
|
||||
*
|
||||
* @param settingsService - Optional settings service instance
|
||||
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
|
||||
* @returns Promise resolving to the enableSandboxMode setting value
|
||||
*/
|
||||
export async function getEnableSandboxModeSetting(
|
||||
settingsService?: SettingsService | null,
|
||||
logPrefix = '[SettingsHelper]'
|
||||
): Promise<boolean> {
|
||||
if (!settingsService) {
|
||||
console.log(`${logPrefix} SettingsService not available, sandbox mode disabled`);
|
||||
return false;
|
||||
}
|
||||
|
||||
try {
|
||||
const globalSettings = await settingsService.getGlobalSettings();
|
||||
const result = globalSettings.enableSandboxMode ?? true;
|
||||
console.log(`${logPrefix} enableSandboxMode from global settings: ${result}`);
|
||||
return result;
|
||||
} catch (error) {
|
||||
console.error(`${logPrefix} Failed to load enableSandboxMode setting:`, error);
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Filters out CLAUDE.md from context files when autoLoadClaudeMd is enabled
|
||||
* and rebuilds the formatted prompt without it.
|
||||
*
|
||||
* When autoLoadClaudeMd is true, the SDK handles CLAUDE.md loading via settingSources,
|
||||
* so we need to exclude it from the manual context loading to avoid duplication.
|
||||
* Other context files (CODE_QUALITY.md, CONVENTIONS.md, etc.) are preserved.
|
||||
*
|
||||
* @param contextResult - Result from loadContextFiles
|
||||
* @param autoLoadClaudeMd - Whether SDK auto-loading is enabled
|
||||
* @returns Filtered context prompt (empty string if no non-CLAUDE.md files)
|
||||
*/
|
||||
export function filterClaudeMdFromContext(
|
||||
contextResult: ContextFilesResult,
|
||||
autoLoadClaudeMd: boolean
|
||||
): string {
|
||||
// If autoLoadClaudeMd is disabled, return the original prompt unchanged
|
||||
if (!autoLoadClaudeMd || contextResult.files.length === 0) {
|
||||
return contextResult.formattedPrompt;
|
||||
}
|
||||
|
||||
// Filter out CLAUDE.md (case-insensitive)
|
||||
const nonClaudeFiles = contextResult.files.filter((f) => f.name.toLowerCase() !== 'claude.md');
|
||||
|
||||
// If all files were CLAUDE.md, return empty string
|
||||
if (nonClaudeFiles.length === 0) {
|
||||
return '';
|
||||
}
|
||||
|
||||
// Rebuild prompt without CLAUDE.md using the same format as loadContextFiles
|
||||
const formattedFiles = nonClaudeFiles.map((file) => formatContextFileEntry(file));
|
||||
|
||||
return `# Project Context Files
|
||||
|
||||
The following context files provide project-specific rules, conventions, and guidelines.
|
||||
Each file serves a specific purpose - use the description to understand when to reference it.
|
||||
If you need more details about a context file, you can read the full file at the path provided.
|
||||
|
||||
**IMPORTANT**: You MUST follow the rules and conventions specified in these files.
|
||||
- Follow ALL commands exactly as shown (e.g., if the project uses \`pnpm\`, NEVER use \`npm\` or \`npx\`)
|
||||
- Follow ALL coding conventions, commit message formats, and architectural patterns specified
|
||||
- Reference these rules before running ANY shell commands or making commits
|
||||
|
||||
---
|
||||
|
||||
${formattedFiles.join('\n\n---\n\n')}
|
||||
|
||||
---
|
||||
|
||||
**REMINDER**: Before taking any action, verify you are following the conventions specified above.
|
||||
`;
|
||||
}
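As a rough usage sketch of how this filter combines with the settings helper above (the `loadContextFiles` import and its `(projectPath) => ContextFilesResult` signature are assumptions; only its result shape is referenced in this diff):

```typescript
import type { SettingsService } from '../services/settings-service.js';
import { loadContextFiles } from '@automaker/utils'; // signature assumed
import { getAutoLoadClaudeMdSetting, filterClaudeMdFromContext } from './settings-helpers.js';

async function buildContextPrompt(
  projectPath: string,
  settingsService?: SettingsService
): Promise<string> {
  // Project settings take precedence over global; defaults to false without a settings service.
  const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(projectPath, settingsService);

  // Drop CLAUDE.md from the manually injected context when the SDK loads it via settingSources.
  const contextResult = await loadContextFiles(projectPath);
  return filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
}
```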
|
||||
|
||||
/**
|
||||
* Format a single context file entry for the prompt
|
||||
* (Matches the format used in @automaker/utils/context-loader.ts)
|
||||
*/
|
||||
function formatContextFileEntry(file: ContextFileInfo): string {
|
||||
const header = `## ${file.name}`;
|
||||
const pathInfo = `**Path:** \`${file.path}\``;
|
||||
const descriptionInfo = file.description ? `\n**Purpose:** ${file.description}` : '';
|
||||
return `${header}\n${pathInfo}${descriptionInfo}\n\n${file.content}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get enabled MCP servers from global settings, converted to SDK format.
|
||||
* Returns an empty object if the settings service is not available or no servers are configured.
|
||||
*
|
||||
* @param settingsService - Optional settings service instance
|
||||
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
|
||||
* @returns Promise resolving to MCP servers in SDK format (keyed by name)
|
||||
*/
|
||||
export async function getMCPServersFromSettings(
|
||||
settingsService?: SettingsService | null,
|
||||
logPrefix = '[SettingsHelper]'
|
||||
): Promise<Record<string, McpServerConfig>> {
|
||||
if (!settingsService) {
|
||||
return {};
|
||||
}
|
||||
|
||||
try {
|
||||
const globalSettings = await settingsService.getGlobalSettings();
|
||||
const mcpServers = globalSettings.mcpServers || [];
|
||||
|
||||
// Filter to only enabled servers and convert to SDK format
|
||||
const enabledServers = mcpServers.filter((s) => s.enabled !== false);
|
||||
|
||||
if (enabledServers.length === 0) {
|
||||
return {};
|
||||
}
|
||||
|
||||
// Convert settings format to SDK format (keyed by name)
|
||||
const sdkServers: Record<string, McpServerConfig> = {};
|
||||
for (const server of enabledServers) {
|
||||
sdkServers[server.name] = convertToSdkFormat(server);
|
||||
}
|
||||
|
||||
console.log(
|
||||
`${logPrefix} Loaded ${enabledServers.length} MCP server(s): ${enabledServers.map((s) => s.name).join(', ')}`
|
||||
);
|
||||
|
||||
return sdkServers;
|
||||
} catch (error) {
|
||||
console.error(`${logPrefix} Failed to load MCP servers setting:`, error);
|
||||
return {};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get MCP permission settings from global settings.
|
||||
*
|
||||
* @param settingsService - Optional settings service instance
|
||||
* @param logPrefix - Prefix for log messages (e.g., '[AgentService]')
|
||||
* @returns Promise resolving to MCP permission settings
|
||||
*/
|
||||
export async function getMCPPermissionSettings(
|
||||
settingsService?: SettingsService | null,
|
||||
logPrefix = '[SettingsHelper]'
|
||||
): Promise<{ mcpAutoApproveTools: boolean; mcpUnrestrictedTools: boolean }> {
|
||||
// Default to true for autonomous workflow. Security is enforced when adding servers
|
||||
// via the security warning dialog that explains the risks.
|
||||
const defaults = { mcpAutoApproveTools: true, mcpUnrestrictedTools: true };
|
||||
|
||||
if (!settingsService) {
|
||||
return defaults;
|
||||
}
|
||||
|
||||
try {
|
||||
const globalSettings = await settingsService.getGlobalSettings();
|
||||
const result = {
|
||||
mcpAutoApproveTools: globalSettings.mcpAutoApproveTools ?? true,
|
||||
mcpUnrestrictedTools: globalSettings.mcpUnrestrictedTools ?? true,
|
||||
};
|
||||
console.log(
|
||||
`${logPrefix} MCP permission settings: autoApprove=${result.mcpAutoApproveTools}, unrestricted=${result.mcpUnrestrictedTools}`
|
||||
);
|
||||
return result;
|
||||
} catch (error) {
|
||||
console.error(`${logPrefix} Failed to load MCP permission settings:`, error);
|
||||
return defaults;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert a settings MCPServerConfig to SDK McpServerConfig format.
|
||||
* Validates required fields and throws informative errors if missing.
|
||||
*/
|
||||
function convertToSdkFormat(server: MCPServerConfig): McpServerConfig {
|
||||
if (server.type === 'sse') {
|
||||
if (!server.url) {
|
||||
throw new Error(`SSE MCP server "${server.name}" is missing a URL.`);
|
||||
}
|
||||
return {
|
||||
type: 'sse',
|
||||
url: server.url,
|
||||
headers: server.headers,
|
||||
};
|
||||
}
|
||||
|
||||
if (server.type === 'http') {
|
||||
if (!server.url) {
|
||||
throw new Error(`HTTP MCP server "${server.name}" is missing a URL.`);
|
||||
}
|
||||
return {
|
||||
type: 'http',
|
||||
url: server.url,
|
||||
headers: server.headers,
|
||||
};
|
||||
}
|
||||
|
||||
// Default to stdio
|
||||
if (!server.command) {
|
||||
throw new Error(`Stdio MCP server "${server.name}" is missing a command.`);
|
||||
}
|
||||
return {
|
||||
type: 'stdio',
|
||||
command: server.command,
|
||||
args: server.args,
|
||||
env: server.env,
|
||||
};
|
||||
}
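For illustration, the three settings shapes map to SDK configs roughly as follows. The converter is module-private, so this only restates its behaviour from inside the module; the object literals are trimmed to the fields the converter reads and cast loosely, since `MCPServerConfig` may carry more fields than shown here:

```typescript
const stdio = convertToSdkFormat({
  name: 'fs', type: 'stdio', command: 'npx', args: ['some-mcp-server'],
} as MCPServerConfig);
// stdio → { type: 'stdio', command: 'npx', args: ['some-mcp-server'], env: undefined }

const sse = convertToSdkFormat({
  name: 'docs', type: 'sse', url: 'https://example.com/mcp',
} as MCPServerConfig);
// sse → { type: 'sse', url: 'https://example.com/mcp', headers: undefined }

// Missing required fields throw, e.g. a stdio entry without a command:
// convertToSdkFormat({ name: 'broken', type: 'stdio' } as MCPServerConfig)
//   → Error: Stdio MCP server "broken" is missing a command.
```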
@@ -1,181 +0,0 @@
|
||||
/**
|
||||
* Validation Storage - CRUD operations for GitHub issue validation results
|
||||
*
|
||||
* Stores validation results in .automaker/validations/{issueNumber}/validation.json
|
||||
* Results include the validation verdict, metadata, and timestamp for cache invalidation.
|
||||
*/
|
||||
|
||||
import * as secureFs from './secure-fs.js';
|
||||
import { getValidationsDir, getValidationDir, getValidationPath } from '@automaker/platform';
|
||||
import type { StoredValidation } from '@automaker/types';
|
||||
|
||||
// Re-export StoredValidation for convenience
|
||||
export type { StoredValidation };
|
||||
|
||||
/** Number of hours before a validation is considered stale */
|
||||
const VALIDATION_CACHE_TTL_HOURS = 24;
|
||||
|
||||
/**
|
||||
* Write validation result to storage
|
||||
*
|
||||
* Creates the validation directory if needed and stores the result as JSON.
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @param issueNumber - GitHub issue number
|
||||
* @param data - Validation data to store
|
||||
*/
|
||||
export async function writeValidation(
|
||||
projectPath: string,
|
||||
issueNumber: number,
|
||||
data: StoredValidation
|
||||
): Promise<void> {
|
||||
const validationDir = getValidationDir(projectPath, issueNumber);
|
||||
const validationPath = getValidationPath(projectPath, issueNumber);
|
||||
|
||||
// Ensure directory exists
|
||||
await secureFs.mkdir(validationDir, { recursive: true });
|
||||
|
||||
// Write validation result
|
||||
await secureFs.writeFile(validationPath, JSON.stringify(data, null, 2), 'utf-8');
|
||||
}
|
||||
|
||||
/**
|
||||
* Read validation result from storage
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @param issueNumber - GitHub issue number
|
||||
* @returns Stored validation or null if not found
|
||||
*/
|
||||
export async function readValidation(
|
||||
projectPath: string,
|
||||
issueNumber: number
|
||||
): Promise<StoredValidation | null> {
|
||||
try {
|
||||
const validationPath = getValidationPath(projectPath, issueNumber);
|
||||
const content = (await secureFs.readFile(validationPath, 'utf-8')) as string;
|
||||
return JSON.parse(content) as StoredValidation;
|
||||
} catch {
|
||||
// File doesn't exist or can't be read
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all stored validations for a project
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @returns Array of stored validations
|
||||
*/
|
||||
export async function getAllValidations(projectPath: string): Promise<StoredValidation[]> {
|
||||
const validationsDir = getValidationsDir(projectPath);
|
||||
|
||||
try {
|
||||
const dirs = await secureFs.readdir(validationsDir, { withFileTypes: true });
|
||||
|
||||
// Read all validation files in parallel for better performance
|
||||
const promises = dirs
|
||||
.filter((dir) => dir.isDirectory())
|
||||
.map((dir) => {
|
||||
const issueNumber = parseInt(dir.name, 10);
|
||||
if (!isNaN(issueNumber)) {
|
||||
return readValidation(projectPath, issueNumber);
|
||||
}
|
||||
return Promise.resolve(null);
|
||||
});
|
||||
|
||||
const results = await Promise.all(promises);
|
||||
const validations = results.filter((v): v is StoredValidation => v !== null);
|
||||
|
||||
// Sort by issue number
|
||||
validations.sort((a, b) => a.issueNumber - b.issueNumber);
|
||||
|
||||
return validations;
|
||||
} catch {
|
||||
// Directory doesn't exist
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Delete a validation from storage
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @param issueNumber - GitHub issue number
|
||||
* @returns true if validation was deleted, false if not found
|
||||
*/
|
||||
export async function deleteValidation(projectPath: string, issueNumber: number): Promise<boolean> {
|
||||
try {
|
||||
const validationDir = getValidationDir(projectPath, issueNumber);
|
||||
await secureFs.rm(validationDir, { recursive: true, force: true });
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a validation is stale (older than TTL)
|
||||
*
|
||||
* @param validation - Stored validation to check
|
||||
* @returns true if validation is older than 24 hours
|
||||
*/
|
||||
export function isValidationStale(validation: StoredValidation): boolean {
|
||||
const validatedAt = new Date(validation.validatedAt);
|
||||
const now = new Date();
|
||||
const hoursDiff = (now.getTime() - validatedAt.getTime()) / (1000 * 60 * 60);
|
||||
return hoursDiff > VALIDATION_CACHE_TTL_HOURS;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get validation with freshness info
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @param issueNumber - GitHub issue number
|
||||
* @returns Object with validation and isStale flag, or null if not found
|
||||
*/
|
||||
export async function getValidationWithFreshness(
|
||||
projectPath: string,
|
||||
issueNumber: number
|
||||
): Promise<{ validation: StoredValidation; isStale: boolean } | null> {
|
||||
const validation = await readValidation(projectPath, issueNumber);
|
||||
if (!validation) {
|
||||
return null;
|
||||
}
|
||||
|
||||
return {
|
||||
validation,
|
||||
isStale: isValidationStale(validation),
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Mark a validation as viewed by the user
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @param issueNumber - GitHub issue number
|
||||
* @returns true if validation was marked as viewed, false if not found
|
||||
*/
|
||||
export async function markValidationViewed(
|
||||
projectPath: string,
|
||||
issueNumber: number
|
||||
): Promise<boolean> {
|
||||
const validation = await readValidation(projectPath, issueNumber);
|
||||
if (!validation) {
|
||||
return false;
|
||||
}
|
||||
|
||||
validation.viewedAt = new Date().toISOString();
|
||||
await writeValidation(projectPath, issueNumber, validation);
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get count of unviewed, non-stale validations for a project
|
||||
*
|
||||
* @param projectPath - Absolute path to project directory
|
||||
* @returns Number of unviewed validations
|
||||
*/
|
||||
export async function getUnviewedValidationsCount(projectPath: string): Promise<number> {
|
||||
const validations = await getAllValidations(projectPath);
|
||||
return validations.filter((v) => !v.viewedAt && !isValidationStale(v)).length;
|
||||
}
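A small end-to-end sketch of the storage API above. The module path is illustrative, and `StoredValidation` fields beyond `issueNumber`, `validatedAt`, and `viewedAt` are not shown in this diff, so the stored object below is trimmed and cast:

```typescript
import * as validationStorage from './validation-storage.js';

async function example(projectPath: string): Promise<void> {
  // Persist a result for issue #42 (verdict/metadata fields omitted here).
  await validationStorage.writeValidation(projectPath, 42, {
    issueNumber: 42,
    validatedAt: new Date().toISOString(),
  } as validationStorage.StoredValidation);

  // Read it back with a staleness flag (24h TTL).
  const fresh = await validationStorage.getValidationWithFreshness(projectPath, 42);
  if (fresh && !fresh.isStale) {
    await validationStorage.markValidationViewed(projectPath, 42);
  }

  // Badge count: unviewed, non-stale validations.
  const unviewed = await validationStorage.getUnviewedValidationsCount(projectPath);
  console.log(`Unviewed validations: ${unviewed}`);
}
```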
|
||||
@@ -36,44 +36,25 @@ export class ClaudeProvider extends BaseProvider {
|
||||
} = options;
|
||||
|
||||
// Build Claude SDK options
|
||||
// MCP permission logic - determines how to handle tool permissions when MCP servers are configured.
|
||||
// This logic mirrors buildMcpOptions() in sdk-options.ts but is applied here since
|
||||
// the provider is the final point where SDK options are constructed.
|
||||
const hasMcpServers = options.mcpServers && Object.keys(options.mcpServers).length > 0;
|
||||
// Default to true for autonomous workflow. Security is enforced when adding servers
|
||||
// via the security warning dialog that explains the risks.
|
||||
const mcpAutoApprove = options.mcpAutoApproveTools ?? true;
|
||||
const mcpUnrestricted = options.mcpUnrestrictedTools ?? true;
|
||||
const defaultTools = ['Read', 'Write', 'Edit', 'Glob', 'Grep', 'Bash', 'WebSearch', 'WebFetch'];
|
||||
|
||||
// Determine permission mode based on settings
|
||||
const shouldBypassPermissions = hasMcpServers && mcpAutoApprove;
|
||||
// Determine whether to restrict tools (only when there are no MCP servers, or unrestricted tools are disabled)
|
||||
const shouldRestrictTools = !hasMcpServers || !mcpUnrestricted;
|
||||
const toolsToUse = allowedTools || defaultTools;
|
||||
|
||||
const sdkOptions: Options = {
|
||||
model,
|
||||
systemPrompt,
|
||||
maxTurns,
|
||||
cwd,
|
||||
// Only restrict tools if explicitly set OR (no MCP / unrestricted disabled)
|
||||
...(allowedTools && shouldRestrictTools && { allowedTools }),
|
||||
...(!allowedTools && shouldRestrictTools && { allowedTools: defaultTools }),
|
||||
// When MCP servers are configured and auto-approve is enabled, use bypassPermissions
|
||||
permissionMode: shouldBypassPermissions ? 'bypassPermissions' : 'default',
|
||||
// Required when using bypassPermissions mode
|
||||
...(shouldBypassPermissions && { allowDangerouslySkipPermissions: true }),
|
||||
allowedTools: toolsToUse,
|
||||
permissionMode: 'acceptEdits',
|
||||
sandbox: {
|
||||
enabled: true,
|
||||
autoAllowBashIfSandboxed: true,
|
||||
},
|
||||
abortController,
|
||||
// Resume existing SDK session if we have a session ID
|
||||
...(sdkSessionId && conversationHistory && conversationHistory.length > 0
|
||||
? { resume: sdkSessionId }
|
||||
: {}),
|
||||
// Forward settingSources for CLAUDE.md file loading
|
||||
...(options.settingSources && { settingSources: options.settingSources }),
|
||||
// Forward sandbox configuration
|
||||
...(options.sandbox && { sandbox: options.sandbox }),
|
||||
// Forward MCP servers configuration
|
||||
...(options.mcpServers && { mcpServers: options.mcpServers }),
|
||||
};
|
||||
|
||||
// Build prompt payload
|
||||
@@ -107,8 +88,7 @@ export class ClaudeProvider extends BaseProvider {
|
||||
yield msg as ProviderMessage;
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('[ClaudeProvider] ERROR: executeQuery() error during execution:', error);
|
||||
console.error('[ClaudeProvider] ERROR stack:', (error as Error).stack);
|
||||
console.error('[ClaudeProvider] executeQuery() error during execution:', error);
|
||||
throw error;
|
||||
}
|
||||
}
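The permission-flag interplay above can be restated as a small pure function, which may make the combinations easier to scan. This only restates the logic already shown in the provider; it is not an additional API:

```typescript
interface McpPermissionDecision {
  permissionMode: 'bypassPermissions' | 'default';
  restrictTools: boolean; // when true, an allowedTools list is passed to the SDK
}

function decideMcpPermissions(
  hasMcpServers: boolean,
  mcpAutoApprove: boolean,
  mcpUnrestricted: boolean
): McpPermissionDecision {
  return {
    // Bypass prompts only when MCP servers exist and auto-approve is on.
    permissionMode: hasMcpServers && mcpAutoApprove ? 'bypassPermissions' : 'default',
    // Restrict tools when there are no MCP servers, or unrestricted tools are disabled.
    restrictTools: !hasMcpServers || !mcpUnrestricted,
  };
}

// decideMcpPermissions(false, true, true) → { permissionMode: 'default', restrictTools: true }
// decideMcpPermissions(true, true, true)  → { permissionMode: 'bypassPermissions', restrictTools: false }
```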
|
||||
|
||||
@@ -1,19 +1,39 @@
|
||||
/**
|
||||
* Shared types for AI model providers
|
||||
*
|
||||
* Re-exports types from @automaker/types for consistency across the codebase.
|
||||
*/
|
||||
|
||||
// Re-export all provider types from @automaker/types
|
||||
export type {
|
||||
ProviderConfig,
|
||||
ConversationMessage,
|
||||
ExecuteOptions,
|
||||
McpServerConfig,
|
||||
McpStdioServerConfig,
|
||||
McpSSEServerConfig,
|
||||
McpHttpServerConfig,
|
||||
} from '@automaker/types';
|
||||
/**
|
||||
* Configuration for a provider instance
|
||||
*/
|
||||
export interface ProviderConfig {
|
||||
apiKey?: string;
|
||||
cliPath?: string;
|
||||
env?: Record<string, string>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Message in conversation history
|
||||
*/
|
||||
export interface ConversationMessage {
|
||||
role: 'user' | 'assistant';
|
||||
content: string | Array<{ type: string; text?: string; source?: object }>;
|
||||
}
|
||||
|
||||
/**
|
||||
* Options for executing a query via a provider
|
||||
*/
|
||||
export interface ExecuteOptions {
|
||||
prompt: string | Array<{ type: string; text?: string; source?: object }>;
|
||||
model: string;
|
||||
cwd: string;
|
||||
systemPrompt?: string;
|
||||
maxTurns?: number;
|
||||
allowedTools?: string[];
|
||||
mcpServers?: Record<string, unknown>;
|
||||
abortController?: AbortController;
|
||||
conversationHistory?: ConversationMessage[]; // Previous messages for context
|
||||
sdkSessionId?: string; // Claude SDK session ID for resuming conversations
|
||||
}
|
||||
|
||||
/**
|
||||
* Content block in a provider message (matches Claude SDK format)
|
||||
|
||||
@@ -12,10 +12,6 @@ import { createHistoryHandler } from './routes/history.js';
|
||||
import { createStopHandler } from './routes/stop.js';
|
||||
import { createClearHandler } from './routes/clear.js';
|
||||
import { createModelHandler } from './routes/model.js';
|
||||
import { createQueueAddHandler } from './routes/queue-add.js';
|
||||
import { createQueueListHandler } from './routes/queue-list.js';
|
||||
import { createQueueRemoveHandler } from './routes/queue-remove.js';
|
||||
import { createQueueClearHandler } from './routes/queue-clear.js';
|
||||
|
||||
export function createAgentRoutes(agentService: AgentService, _events: EventEmitter): Router {
|
||||
const router = Router();
|
||||
@@ -31,15 +27,5 @@ export function createAgentRoutes(agentService: AgentService, _events: EventEmit
|
||||
router.post('/clear', createClearHandler(agentService));
|
||||
router.post('/model', createModelHandler(agentService));
|
||||
|
||||
// Queue routes
|
||||
router.post(
|
||||
'/queue/add',
|
||||
validatePathParams('imagePaths[]'),
|
||||
createQueueAddHandler(agentService)
|
||||
);
|
||||
router.post('/queue/list', createQueueListHandler(agentService));
|
||||
router.post('/queue/remove', createQueueRemoveHandler(agentService));
|
||||
router.post('/queue/clear', createQueueClearHandler(agentService));
|
||||
|
||||
return router;
|
||||
}
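A client-side sketch of the queue endpoints registered above, assuming the agent router is mounted under `/api/agent` on the server's port (the mount path is not visible in this diff):

```typescript
// Hypothetical client helper; base URL and mount path are assumptions.
const BASE = 'http://localhost:3008/api/agent';

async function queuePrompt(sessionId: string, message: string): Promise<void> {
  // Matches createQueueAddHandler: sessionId and message are required.
  const res = await fetch(`${BASE}/queue/add`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId, message }),
  });
  if (!res.ok) throw new Error(`queue/add failed: ${res.status}`);
}

async function listQueue(sessionId: string): Promise<unknown> {
  const res = await fetch(`${BASE}/queue/list`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId }),
  });
  return res.json();
}
```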
|
||||
|
||||
@@ -1,34 +0,0 @@
|
||||
/**
|
||||
* POST /queue/add endpoint - Add a prompt to the queue
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { AgentService } from '../../../services/agent-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createQueueAddHandler(agentService: AgentService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { sessionId, message, imagePaths, model } = req.body as {
|
||||
sessionId: string;
|
||||
message: string;
|
||||
imagePaths?: string[];
|
||||
model?: string;
|
||||
};
|
||||
|
||||
if (!sessionId || !message) {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'sessionId and message are required',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await agentService.addToQueue(sessionId, { message, imagePaths, model });
|
||||
res.json(result);
|
||||
} catch (error) {
|
||||
logError(error, 'Add to queue failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,29 +0,0 @@
|
||||
/**
|
||||
* POST /queue/clear endpoint - Clear all prompts from the queue
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { AgentService } from '../../../services/agent-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createQueueClearHandler(agentService: AgentService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { sessionId } = req.body as { sessionId: string };
|
||||
|
||||
if (!sessionId) {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'sessionId is required',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await agentService.clearQueue(sessionId);
|
||||
res.json(result);
|
||||
} catch (error) {
|
||||
logError(error, 'Clear queue failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,29 +0,0 @@
|
||||
/**
|
||||
* POST /queue/list endpoint - List queued prompts
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { AgentService } from '../../../services/agent-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createQueueListHandler(agentService: AgentService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { sessionId } = req.body as { sessionId: string };
|
||||
|
||||
if (!sessionId) {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'sessionId is required',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = agentService.getQueue(sessionId);
|
||||
res.json(result);
|
||||
} catch (error) {
|
||||
logError(error, 'List queue failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,32 +0,0 @@
|
||||
/**
|
||||
* POST /queue/remove endpoint - Remove a prompt from the queue
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { AgentService } from '../../../services/agent-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createQueueRemoveHandler(agentService: AgentService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { sessionId, promptId } = req.body as {
|
||||
sessionId: string;
|
||||
promptId: string;
|
||||
};
|
||||
|
||||
if (!sessionId || !promptId) {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'sessionId and promptId are required',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await agentService.removeFromQueue(sessionId, promptId);
|
||||
res.json(result);
|
||||
} catch (error) {
|
||||
logError(error, 'Remove from queue failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -19,16 +19,7 @@ export function createSendHandler(agentService: AgentService) {
|
||||
model?: string;
|
||||
};
|
||||
|
||||
console.log('[Send Handler] Received request:', {
|
||||
sessionId,
|
||||
messageLength: message?.length,
|
||||
workingDirectory,
|
||||
imageCount: imagePaths?.length || 0,
|
||||
model,
|
||||
});
|
||||
|
||||
if (!sessionId || !message) {
|
||||
console.log('[Send Handler] ERROR: Validation failed - missing sessionId or message');
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'sessionId and message are required',
|
||||
@@ -36,8 +27,6 @@ export function createSendHandler(agentService: AgentService) {
|
||||
return;
|
||||
}
|
||||
|
||||
console.log('[Send Handler] Validation passed, calling agentService.sendMessage()');
|
||||
|
||||
// Start the message processing (don't await - it streams via WebSocket)
|
||||
agentService
|
||||
.sendMessage({
|
||||
@@ -48,16 +37,12 @@ export function createSendHandler(agentService: AgentService) {
|
||||
model,
|
||||
})
|
||||
.catch((error) => {
|
||||
console.error('[Send Handler] ERROR: Background error in sendMessage():', error);
|
||||
logError(error, 'Send message failed (background)');
|
||||
});
|
||||
|
||||
console.log('[Send Handler] Returning immediate response to client');
|
||||
|
||||
// Return immediately - responses come via WebSocket
|
||||
res.json({ success: true, message: 'Message sent' });
|
||||
} catch (error) {
|
||||
console.error('[Send Handler] ERROR: Synchronous error:', error);
|
||||
logError(error, 'Send message failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
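The fire-and-forget flow above pairs with a WebSocket listener on the client: the POST only acknowledges that processing started, and the agent's output streams separately. A rough sketch, assuming a `/send` route under the agent router and a WebSocket endpoint on the same server (both assumptions; neither path is shown in this diff):

```typescript
// Hypothetical client flow: POST returns immediately, streamed output arrives over WebSocket.
async function sendAndStream(sessionId: string, message: string): Promise<void> {
  const ws = new WebSocket('ws://localhost:3008'); // endpoint path is an assumption
  ws.onmessage = (event) => {
    // Agent output for this session streams in as it is produced.
    console.log('agent event:', event.data);
  };

  const res = await fetch('http://localhost:3008/api/agent/send', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId, message }),
  });
  // The HTTP response only acknowledges that processing started.
  console.log(await res.json()); // { success: true, message: 'Message sent' }
}
```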
|
||||
|
||||
@@ -10,8 +10,6 @@ import { createFeatureGenerationOptions } from '../../lib/sdk-options.js';
|
||||
import { logAuthStatus } from './common.js';
|
||||
import { parseAndCreateFeatures } from './parse-and-create-features.js';
|
||||
import { getAppSpecPath } from '@automaker/platform';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
|
||||
|
||||
const logger = createLogger('SpecRegeneration');
|
||||
|
||||
@@ -21,8 +19,7 @@ export async function generateFeaturesFromSpec(
|
||||
projectPath: string,
|
||||
events: EventEmitter,
|
||||
abortController: AbortController,
|
||||
maxFeatures?: number,
|
||||
settingsService?: SettingsService
|
||||
maxFeatures?: number
|
||||
): Promise<void> {
|
||||
const featureCount = maxFeatures ?? DEFAULT_MAX_FEATURES;
|
||||
logger.debug('========== generateFeaturesFromSpec() started ==========');
|
||||
@@ -94,17 +91,9 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
|
||||
projectPath: projectPath,
|
||||
});
|
||||
|
||||
// Load autoLoadClaudeMd setting
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
settingsService,
|
||||
'[FeatureGeneration]'
|
||||
);
|
||||
|
||||
const options = createFeatureGenerationOptions({
|
||||
cwd: projectPath,
|
||||
abortController,
|
||||
autoLoadClaudeMd,
|
||||
});
|
||||
|
||||
logger.debug('SDK Options:', JSON.stringify(options, null, 2));
|
||||
|
||||
@@ -17,8 +17,6 @@ import { createSpecGenerationOptions } from '../../lib/sdk-options.js';
|
||||
import { logAuthStatus } from './common.js';
|
||||
import { generateFeaturesFromSpec } from './generate-features-from-spec.js';
|
||||
import { ensureAutomakerDir, getAppSpecPath } from '@automaker/platform';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
|
||||
|
||||
const logger = createLogger('SpecRegeneration');
|
||||
|
||||
@@ -29,8 +27,7 @@ export async function generateSpec(
|
||||
abortController: AbortController,
|
||||
generateFeatures?: boolean,
|
||||
analyzeProject?: boolean,
|
||||
maxFeatures?: number,
|
||||
settingsService?: SettingsService
|
||||
maxFeatures?: number
|
||||
): Promise<void> {
|
||||
logger.info('========== generateSpec() started ==========');
|
||||
logger.info('projectPath:', projectPath);
|
||||
@@ -86,17 +83,9 @@ ${getStructuredSpecPromptInstruction()}`;
|
||||
content: 'Starting spec generation...\n',
|
||||
});
|
||||
|
||||
// Load autoLoadClaudeMd setting
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
settingsService,
|
||||
'[SpecRegeneration]'
|
||||
);
|
||||
|
||||
const options = createSpecGenerationOptions({
|
||||
cwd: projectPath,
|
||||
abortController,
|
||||
autoLoadClaudeMd,
|
||||
outputFormat: {
|
||||
type: 'json_schema',
|
||||
schema: specOutputSchema,
|
||||
@@ -280,13 +269,7 @@ ${getStructuredSpecPromptInstruction()}`;
|
||||
// Create a new abort controller for feature generation
|
||||
const featureAbortController = new AbortController();
|
||||
try {
|
||||
await generateFeaturesFromSpec(
|
||||
projectPath,
|
||||
events,
|
||||
featureAbortController,
|
||||
maxFeatures,
|
||||
settingsService
|
||||
);
|
||||
await generateFeaturesFromSpec(projectPath, events, featureAbortController, maxFeatures);
|
||||
// Final completion will be emitted by generateFeaturesFromSpec -> parseAndCreateFeatures
|
||||
} catch (featureError) {
|
||||
logger.error('Feature generation failed:', featureError);
|
||||
|
||||
@@ -9,17 +9,13 @@ import { createGenerateHandler } from './routes/generate.js';
|
||||
import { createGenerateFeaturesHandler } from './routes/generate-features.js';
|
||||
import { createStopHandler } from './routes/stop.js';
|
||||
import { createStatusHandler } from './routes/status.js';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
|
||||
export function createSpecRegenerationRoutes(
|
||||
events: EventEmitter,
|
||||
settingsService?: SettingsService
|
||||
): Router {
|
||||
export function createSpecRegenerationRoutes(events: EventEmitter): Router {
|
||||
const router = Router();
|
||||
|
||||
router.post('/create', createCreateHandler(events));
|
||||
router.post('/generate', createGenerateHandler(events, settingsService));
|
||||
router.post('/generate-features', createGenerateFeaturesHandler(events, settingsService));
|
||||
router.post('/generate', createGenerateHandler(events));
|
||||
router.post('/generate-features', createGenerateFeaturesHandler(events));
|
||||
router.post('/stop', createStopHandler());
|
||||
router.get('/status', createStatusHandler());
|
||||
|
||||
|
||||
@@ -13,14 +13,10 @@ import {
|
||||
getErrorMessage,
|
||||
} from '../common.js';
|
||||
import { generateFeaturesFromSpec } from '../generate-features-from-spec.js';
|
||||
import type { SettingsService } from '../../../services/settings-service.js';
|
||||
|
||||
const logger = createLogger('SpecRegeneration');
|
||||
|
||||
export function createGenerateFeaturesHandler(
|
||||
events: EventEmitter,
|
||||
settingsService?: SettingsService
|
||||
) {
|
||||
export function createGenerateFeaturesHandler(events: EventEmitter) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
logger.info('========== /generate-features endpoint called ==========');
|
||||
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
|
||||
@@ -53,7 +49,7 @@ export function createGenerateFeaturesHandler(
|
||||
setRunningState(true, abortController);
|
||||
logger.info('Starting background feature generation task...');
|
||||
|
||||
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures, settingsService)
|
||||
generateFeaturesFromSpec(projectPath, events, abortController, maxFeatures)
|
||||
.catch((error) => {
|
||||
logError(error, 'Feature generation failed with error');
|
||||
events.emit('spec-regeneration:event', {
|
||||
|
||||
@@ -13,11 +13,10 @@ import {
|
||||
getErrorMessage,
|
||||
} from '../common.js';
|
||||
import { generateSpec } from '../generate-spec.js';
|
||||
import type { SettingsService } from '../../../services/settings-service.js';
|
||||
|
||||
const logger = createLogger('SpecRegeneration');
|
||||
|
||||
export function createGenerateHandler(events: EventEmitter, settingsService?: SettingsService) {
|
||||
export function createGenerateHandler(events: EventEmitter) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
logger.info('========== /generate endpoint called ==========');
|
||||
logger.debug('Request body:', JSON.stringify(req.body, null, 2));
|
||||
@@ -68,8 +67,7 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
|
||||
abortController,
|
||||
generateFeatures,
|
||||
analyzeProject,
|
||||
maxFeatures,
|
||||
settingsService
|
||||
maxFeatures
|
||||
)
|
||||
.catch((error) => {
|
||||
logError(error, 'Generation failed with error');
|
||||
|
||||
@@ -1,39 +0,0 @@
|
||||
/**
|
||||
* Common utilities for backlog plan routes
|
||||
*/
|
||||
|
||||
import { createLogger } from '@automaker/utils';
|
||||
|
||||
const logger = createLogger('BacklogPlan');
|
||||
|
||||
// State for tracking running generation
|
||||
let isRunning = false;
|
||||
let currentAbortController: AbortController | null = null;
|
||||
|
||||
export function getBacklogPlanStatus(): { isRunning: boolean } {
|
||||
return { isRunning };
|
||||
}
|
||||
|
||||
export function setRunningState(running: boolean, abortController?: AbortController | null): void {
|
||||
isRunning = running;
|
||||
if (abortController !== undefined) {
|
||||
currentAbortController = abortController;
|
||||
}
|
||||
}
|
||||
|
||||
export function getAbortController(): AbortController | null {
|
||||
return currentAbortController;
|
||||
}
|
||||
|
||||
export function getErrorMessage(error: unknown): string {
|
||||
if (error instanceof Error) {
|
||||
return error.message;
|
||||
}
|
||||
return String(error);
|
||||
}
|
||||
|
||||
export function logError(error: unknown, context: string): void {
|
||||
logger.error(`[BacklogPlan] ${context}:`, getErrorMessage(error));
|
||||
}
|
||||
|
||||
export { logger };
|
||||
@@ -1,217 +0,0 @@
|
||||
/**
|
||||
* Generate backlog plan using Claude AI
|
||||
*/
|
||||
|
||||
import type { EventEmitter } from '../../lib/events.js';
|
||||
import type { Feature, BacklogPlanResult, BacklogChange, DependencyUpdate } from '@automaker/types';
|
||||
import { FeatureLoader } from '../../services/feature-loader.js';
|
||||
import { ProviderFactory } from '../../providers/provider-factory.js';
|
||||
import { logger, setRunningState, getErrorMessage } from './common.js';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';
|
||||
|
||||
const featureLoader = new FeatureLoader();
|
||||
|
||||
/**
|
||||
* Format features for the AI prompt
|
||||
*/
|
||||
function formatFeaturesForPrompt(features: Feature[]): string {
|
||||
if (features.length === 0) {
|
||||
return 'No features in backlog yet.';
|
||||
}
|
||||
|
||||
return features
|
||||
.map((f) => {
|
||||
const deps = f.dependencies?.length ? `Dependencies: [${f.dependencies.join(', ')}]` : '';
|
||||
const priority = f.priority !== undefined ? `Priority: ${f.priority}` : '';
|
||||
return `- ID: ${f.id}
|
||||
Title: ${f.title || 'Untitled'}
|
||||
Description: ${f.description}
|
||||
Category: ${f.category}
|
||||
Status: ${f.status || 'backlog'}
|
||||
${priority}
|
||||
${deps}`.trim();
|
||||
})
|
||||
.join('\n\n');
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse the AI response into a BacklogPlanResult
|
||||
*/
|
||||
function parsePlanResponse(response: string): BacklogPlanResult {
|
||||
try {
|
||||
// Try to extract JSON from the response
|
||||
const jsonMatch = response.match(/```json\n?([\s\S]*?)\n?```/);
|
||||
if (jsonMatch) {
|
||||
return JSON.parse(jsonMatch[1]);
|
||||
}
|
||||
|
||||
// Try to parse the whole response as JSON
|
||||
return JSON.parse(response);
|
||||
} catch {
|
||||
// If parsing fails, return an empty result
|
||||
logger.warn('[BacklogPlan] Failed to parse AI response as JSON');
|
||||
return {
|
||||
changes: [],
|
||||
summary: 'Failed to parse AI response',
|
||||
dependencyUpdates: [],
|
||||
};
|
||||
}
|
||||
}
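A quick illustration of the parsing fallbacks: bare JSON with no fenced block goes through the second `JSON.parse` path, and anything unparseable collapses to the safe empty result (the values below are made up):

```typescript
const raw = '{"changes": [], "summary": "No changes needed", "dependencyUpdates": []}';
const plan = parsePlanResponse(raw);
console.log(plan.summary); // "No changes needed"

// Unparseable responses fall through to the empty-plan fallback:
const fallback = parsePlanResponse('Sorry, I could not produce a plan.');
console.log(fallback.changes.length); // 0, with summary "Failed to parse AI response"
```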
|
||||
|
||||
/**
|
||||
* Generate a backlog modification plan based on user prompt
|
||||
*/
|
||||
export async function generateBacklogPlan(
|
||||
projectPath: string,
|
||||
prompt: string,
|
||||
events: EventEmitter,
|
||||
abortController: AbortController,
|
||||
settingsService?: SettingsService,
|
||||
model?: string
|
||||
): Promise<BacklogPlanResult> {
|
||||
try {
|
||||
// Load current features
|
||||
const features = await featureLoader.getAll(projectPath);
|
||||
|
||||
events.emit('backlog-plan:event', {
|
||||
type: 'backlog_plan_progress',
|
||||
content: `Loaded ${features.length} features from backlog`,
|
||||
});
|
||||
|
||||
// Build the system prompt
|
||||
const systemPrompt = `You are an AI assistant helping to modify a software project's feature backlog.
|
||||
You will be given the current list of features and a user request to modify the backlog.
|
||||
|
||||
IMPORTANT CONTEXT (automatically injected):
|
||||
- Remember to update the dependency graph if deleting existing features
|
||||
- Remember to define dependencies on new features hooked into relevant existing ones
|
||||
- Maintain dependency graph integrity (no orphaned dependencies)
|
||||
- When deleting a feature, identify which other features depend on it
|
||||
|
||||
Your task is to analyze the request and produce a structured JSON plan with:
|
||||
1. Features to ADD (include title, description, category, and dependencies)
|
||||
2. Features to UPDATE (specify featureId and the updates)
|
||||
3. Features to DELETE (specify featureId)
|
||||
4. A summary of the changes
|
||||
5. Any dependency updates needed (removed dependencies due to deletions, new dependencies for new features)
|
||||
|
||||
Respond with ONLY a JSON object in this exact format:
|
||||
\`\`\`json
|
||||
{
|
||||
"changes": [
|
||||
{
|
||||
"type": "add",
|
||||
"feature": {
|
||||
"title": "Feature title",
|
||||
"description": "Feature description",
|
||||
"category": "Category name",
|
||||
"dependencies": ["existing-feature-id"],
|
||||
"priority": 1
|
||||
},
|
||||
"reason": "Why this feature should be added"
|
||||
},
|
||||
{
|
||||
"type": "update",
|
||||
"featureId": "existing-feature-id",
|
||||
"feature": {
|
||||
"title": "Updated title"
|
||||
},
|
||||
"reason": "Why this feature should be updated"
|
||||
},
|
||||
{
|
||||
"type": "delete",
|
||||
"featureId": "feature-id-to-delete",
|
||||
"reason": "Why this feature should be deleted"
|
||||
}
|
||||
],
|
||||
"summary": "Brief overview of all proposed changes",
|
||||
"dependencyUpdates": [
|
||||
{
|
||||
"featureId": "feature-that-depended-on-deleted",
|
||||
"removedDependencies": ["deleted-feature-id"],
|
||||
"addedDependencies": []
|
||||
}
|
||||
]
|
||||
}
|
||||
\`\`\``;
|
||||
|
||||
// Build the user prompt
|
||||
const userPrompt = `Current Features in Backlog:
|
||||
${formatFeaturesForPrompt(features)}
|
||||
|
||||
---
|
||||
|
||||
User Request: ${prompt}
|
||||
|
||||
Please analyze the current backlog and the user's request, then provide a JSON plan for the modifications.`;
|
||||
|
||||
events.emit('backlog-plan:event', {
|
||||
type: 'backlog_plan_progress',
|
||||
content: 'Generating plan with AI...',
|
||||
});
|
||||
|
||||
// Get the model to use
|
||||
const effectiveModel = model || 'sonnet';
|
||||
const provider = ProviderFactory.getProviderForModel(effectiveModel);
|
||||
|
||||
// Get autoLoadClaudeMd setting
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
settingsService,
|
||||
'[BacklogPlan]'
|
||||
);
|
||||
|
||||
// Execute the query
|
||||
const stream = provider.executeQuery({
|
||||
prompt: userPrompt,
|
||||
model: effectiveModel,
|
||||
cwd: projectPath,
|
||||
systemPrompt,
|
||||
maxTurns: 1,
|
||||
allowedTools: [], // No tools needed for this
|
||||
abortController,
|
||||
settingSources: autoLoadClaudeMd ? ['user', 'project'] : undefined,
|
||||
});
|
||||
|
||||
let responseText = '';
|
||||
|
||||
for await (const msg of stream) {
|
||||
if (abortController.signal.aborted) {
|
||||
throw new Error('Generation aborted');
|
||||
}
|
||||
|
||||
if (msg.type === 'assistant') {
|
||||
if (msg.message?.content) {
|
||||
for (const block of msg.message.content) {
|
||||
if (block.type === 'text') {
|
||||
responseText += block.text;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Parse the response
|
||||
const result = parsePlanResponse(responseText);
|
||||
|
||||
events.emit('backlog-plan:event', {
|
||||
type: 'backlog_plan_complete',
|
||||
result,
|
||||
});
|
||||
|
||||
return result;
|
||||
} catch (error) {
|
||||
const errorMessage = getErrorMessage(error);
|
||||
logger.error('[BacklogPlan] Generation failed:', errorMessage);
|
||||
|
||||
events.emit('backlog-plan:event', {
|
||||
type: 'backlog_plan_error',
|
||||
error: errorMessage,
|
||||
});
|
||||
|
||||
throw error;
|
||||
} finally {
|
||||
setRunningState(false, null);
|
||||
}
|
||||
}
|
||||
@@ -1,30 +0,0 @@
|
||||
/**
|
||||
* Backlog Plan routes - HTTP API for AI-assisted backlog modification
|
||||
*/
|
||||
|
||||
import { Router } from 'express';
|
||||
import type { EventEmitter } from '../../lib/events.js';
|
||||
import { validatePathParams } from '../../middleware/validate-paths.js';
|
||||
import { createGenerateHandler } from './routes/generate.js';
|
||||
import { createStopHandler } from './routes/stop.js';
|
||||
import { createStatusHandler } from './routes/status.js';
|
||||
import { createApplyHandler } from './routes/apply.js';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
|
||||
export function createBacklogPlanRoutes(
|
||||
events: EventEmitter,
|
||||
settingsService?: SettingsService
|
||||
): Router {
|
||||
const router = Router();
|
||||
|
||||
router.post(
|
||||
'/generate',
|
||||
validatePathParams('projectPath'),
|
||||
createGenerateHandler(events, settingsService)
|
||||
);
|
||||
router.post('/stop', createStopHandler());
|
||||
router.get('/status', createStatusHandler());
|
||||
router.post('/apply', validatePathParams('projectPath'), createApplyHandler());
|
||||
|
||||
return router;
|
||||
}
|
||||
@@ -1,147 +0,0 @@
|
||||
/**
|
||||
* POST /apply endpoint - Apply a backlog plan
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { BacklogPlanResult, BacklogChange, Feature } from '@automaker/types';
|
||||
import { FeatureLoader } from '../../../services/feature-loader.js';
|
||||
import { getErrorMessage, logError, logger } from '../common.js';
|
||||
|
||||
const featureLoader = new FeatureLoader();
|
||||
|
||||
export function createApplyHandler() {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, plan } = req.body as {
|
||||
projectPath: string;
|
||||
plan: BacklogPlanResult;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!plan || !plan.changes) {
|
||||
res.status(400).json({ success: false, error: 'plan with changes required' });
|
||||
return;
|
||||
}
|
||||
|
||||
const appliedChanges: string[] = [];
|
||||
|
||||
// Load current features for dependency validation
|
||||
const allFeatures = await featureLoader.getAll(projectPath);
|
||||
const featureMap = new Map(allFeatures.map((f) => [f.id, f]));
|
||||
|
||||
// Process changes in order: deletes first, then adds, then updates
|
||||
// This ensures we can remove dependencies before they cause issues
|
||||
|
||||
// 1. First pass: Handle deletes
|
||||
const deletions = plan.changes.filter((c) => c.type === 'delete');
|
||||
for (const change of deletions) {
|
||||
if (!change.featureId) continue;
|
||||
|
||||
try {
|
||||
// Before deleting, update any features that depend on this one
|
||||
for (const feature of allFeatures) {
|
||||
if (feature.dependencies?.includes(change.featureId)) {
|
||||
const newDeps = feature.dependencies.filter((d) => d !== change.featureId);
|
||||
await featureLoader.update(projectPath, feature.id, { dependencies: newDeps });
|
||||
logger.info(
|
||||
`[BacklogPlan] Removed dependency ${change.featureId} from ${feature.id}`
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// Now delete the feature
|
||||
const deleted = await featureLoader.delete(projectPath, change.featureId);
|
||||
if (deleted) {
|
||||
appliedChanges.push(`deleted:${change.featureId}`);
|
||||
featureMap.delete(change.featureId);
|
||||
logger.info(`[BacklogPlan] Deleted feature ${change.featureId}`);
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(
|
||||
`[BacklogPlan] Failed to delete ${change.featureId}:`,
|
||||
getErrorMessage(error)
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// 2. Second pass: Handle adds
|
||||
const additions = plan.changes.filter((c) => c.type === 'add');
|
||||
for (const change of additions) {
|
||||
if (!change.feature) continue;
|
||||
|
||||
try {
|
||||
// Create the new feature
|
||||
const newFeature = await featureLoader.create(projectPath, {
|
||||
title: change.feature.title,
|
||||
description: change.feature.description || '',
|
||||
category: change.feature.category || 'Uncategorized',
|
||||
dependencies: change.feature.dependencies,
|
||||
priority: change.feature.priority,
|
||||
status: 'backlog',
|
||||
});
|
||||
|
||||
appliedChanges.push(`added:${newFeature.id}`);
|
||||
featureMap.set(newFeature.id, newFeature);
|
||||
logger.info(`[BacklogPlan] Created feature ${newFeature.id}: ${newFeature.title}`);
|
||||
} catch (error) {
|
||||
logger.error(`[BacklogPlan] Failed to add feature:`, getErrorMessage(error));
|
||||
}
|
||||
}
|
||||
|
||||
// 3. Third pass: Handle updates
|
||||
const updates = plan.changes.filter((c) => c.type === 'update');
|
||||
for (const change of updates) {
|
||||
if (!change.featureId || !change.feature) continue;
|
||||
|
||||
try {
|
||||
const updated = await featureLoader.update(projectPath, change.featureId, change.feature);
|
||||
appliedChanges.push(`updated:${change.featureId}`);
|
||||
featureMap.set(change.featureId, updated);
|
||||
logger.info(`[BacklogPlan] Updated feature ${change.featureId}`);
|
||||
} catch (error) {
|
||||
logger.error(
|
||||
`[BacklogPlan] Failed to update ${change.featureId}:`,
|
||||
getErrorMessage(error)
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// 4. Apply dependency updates from the plan
|
||||
if (plan.dependencyUpdates) {
|
||||
for (const depUpdate of plan.dependencyUpdates) {
|
||||
try {
|
||||
const feature = featureMap.get(depUpdate.featureId);
|
||||
if (feature) {
|
||||
const currentDeps = feature.dependencies || [];
|
||||
const newDeps = currentDeps
|
||||
.filter((d) => !depUpdate.removedDependencies.includes(d))
|
||||
.concat(depUpdate.addedDependencies.filter((d) => !currentDeps.includes(d)));
|
||||
|
||||
await featureLoader.update(projectPath, depUpdate.featureId, {
|
||||
dependencies: newDeps,
|
||||
});
|
||||
logger.info(`[BacklogPlan] Updated dependencies for ${depUpdate.featureId}`);
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error(
|
||||
`[BacklogPlan] Failed to update dependencies for ${depUpdate.featureId}:`,
|
||||
getErrorMessage(error)
|
||||
);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
appliedChanges,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Apply backlog plan failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,62 +0,0 @@
|
||||
/**
|
||||
* POST /generate endpoint - Generate a backlog plan
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { EventEmitter } from '../../../lib/events.js';
|
||||
import { getBacklogPlanStatus, setRunningState, getErrorMessage, logError } from '../common.js';
|
||||
import { generateBacklogPlan } from '../generate-plan.js';
|
||||
import type { SettingsService } from '../../../services/settings-service.js';
|
||||
|
||||
export function createGenerateHandler(events: EventEmitter, settingsService?: SettingsService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, prompt, model } = req.body as {
|
||||
projectPath: string;
|
||||
prompt: string;
|
||||
model?: string;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!prompt) {
|
||||
res.status(400).json({ success: false, error: 'prompt required' });
|
||||
return;
|
||||
}
|
||||
|
||||
const { isRunning } = getBacklogPlanStatus();
|
||||
if (isRunning) {
|
||||
res.json({
|
||||
success: false,
|
||||
error: 'Backlog plan generation is already running',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
setRunningState(true);
|
||||
const abortController = new AbortController();
|
||||
setRunningState(true, abortController);
|
||||
|
||||
// Start generation in background
|
||||
generateBacklogPlan(projectPath, prompt, events, abortController, settingsService, model)
|
||||
.catch((error) => {
|
||||
logError(error, 'Generate backlog plan failed (background)');
|
||||
events.emit('backlog-plan:event', {
|
||||
type: 'backlog_plan_error',
|
||||
error: getErrorMessage(error),
|
||||
});
|
||||
})
|
||||
.finally(() => {
|
||||
setRunningState(false, null);
|
||||
});
|
||||
|
||||
res.json({ success: true });
|
||||
} catch (error) {
|
||||
logError(error, 'Generate backlog plan failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
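End to end, the backlog-plan routes are meant to be driven asynchronously: generate, wait for completion, then apply. A sketch of that client flow, assuming the router is mounted under `/api/backlog-plan` (the mount path is not shown in this diff):

```typescript
// Hypothetical client flow for the backlog plan endpoints.
const BASE = 'http://localhost:3008/api/backlog-plan';

async function planAndApply(projectPath: string, prompt: string, plan: unknown): Promise<void> {
  // 1. Kick off generation; the server responds immediately and works in the background.
  await fetch(`${BASE}/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, prompt }),
  });

  // 2. Poll /status (or listen for 'backlog_plan_complete' events) until generation finishes.
  const status = await (await fetch(`${BASE}/status`)).json(); // { success: true, isRunning: ... }
  console.log(status);

  // 3. Apply the plan delivered by the completion event.
  await fetch(`${BASE}/apply`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ projectPath, plan }),
  });
}
```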
|
||||
@@ -1,18 +0,0 @@
|
||||
/**
|
||||
* GET /status endpoint - Get backlog plan generation status
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { getBacklogPlanStatus, getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createStatusHandler() {
|
||||
return async (_req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const status = getBacklogPlanStatus();
|
||||
res.json({ success: true, ...status });
|
||||
} catch (error) {
|
||||
logError(error, 'Get backlog plan status failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,22 +0,0 @@
|
||||
/**
|
||||
* POST /stop endpoint - Stop the current backlog plan generation
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import { getAbortController, setRunningState, getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createStopHandler() {
|
||||
return async (_req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const abortController = getAbortController();
|
||||
if (abortController) {
|
||||
abortController.abort();
|
||||
setRunningState(false, null);
|
||||
}
|
||||
res.json({ success: true });
|
||||
} catch (error) {
|
||||
logError(error, 'Stop backlog plan failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -8,19 +8,17 @@
|
||||
import { Router } from 'express';
|
||||
import { createDescribeImageHandler } from './routes/describe-image.js';
|
||||
import { createDescribeFileHandler } from './routes/describe-file.js';
|
||||
import type { SettingsService } from '../../services/settings-service.js';
|
||||
|
||||
/**
|
||||
* Create the context router
|
||||
*
|
||||
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
|
||||
* @returns Express router with context endpoints
|
||||
*/
|
||||
export function createContextRoutes(settingsService?: SettingsService): Router {
|
||||
export function createContextRoutes(): Router {
|
||||
const router = Router();
|
||||
|
||||
router.post('/describe-image', createDescribeImageHandler(settingsService));
|
||||
router.post('/describe-file', createDescribeFileHandler(settingsService));
|
||||
router.post('/describe-image', createDescribeImageHandler());
|
||||
router.post('/describe-file', createDescribeFileHandler());
|
||||
|
||||
return router;
|
||||
}
|
||||
|
||||
@@ -17,8 +17,6 @@ import { PathNotAllowedError } from '@automaker/platform';
|
||||
import { createCustomOptions } from '../../../lib/sdk-options.js';
|
||||
import * as secureFs from '../../../lib/secure-fs.js';
|
||||
import * as path from 'path';
|
||||
import type { SettingsService } from '../../../services/settings-service.js';
|
||||
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';
|
||||
|
||||
const logger = createLogger('DescribeFile');
|
||||
|
||||
@@ -74,12 +72,9 @@ async function extractTextFromStream(
|
||||
/**
|
||||
* Create the describe-file request handler
|
||||
*
|
||||
* @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
|
||||
* @returns Express request handler for file description
|
||||
*/
|
||||
export function createDescribeFileHandler(
|
||||
settingsService?: SettingsService
|
||||
): (req: Request, res: Response) => Promise<void> {
|
||||
export function createDescribeFileHandler(): (req: Request, res: Response) => Promise<void> {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { filePath } = req.body as DescribeFileRequestBody;
|
||||
@@ -170,13 +165,6 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
|
||||
// Use the file's directory as the working directory
|
||||
const cwd = path.dirname(resolvedPath);
|
||||
|
||||
// Load autoLoadClaudeMd setting
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
cwd,
|
||||
settingsService,
|
||||
'[DescribeFile]'
|
||||
);
|
||||
|
||||
// Use centralized SDK options with proper cwd validation
|
||||
// No tools needed since we're passing file content directly
|
||||
const sdkOptions = createCustomOptions({
|
||||
@@ -184,7 +172,6 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
|
||||
model: CLAUDE_MODEL_MAP.haiku,
|
||||
maxTurns: 1,
|
||||
allowedTools: [],
|
||||
autoLoadClaudeMd,
|
||||
sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
|
||||
});
|
||||
|
||||
|
||||
@@ -17,8 +17,6 @@ import { CLAUDE_MODEL_MAP } from '@automaker/types';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import * as fs from 'fs';
import * as path from 'path';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';

const logger = createLogger('DescribeImage');

@@ -228,12 +226,9 @@ async function extractTextFromStream(
 * Uses Claude SDK query with multi-part content blocks to include the image (base64),
 * matching the agent runner behavior.
 *
 * @param settingsService - Optional settings service for loading autoLoadClaudeMd setting
 * @returns Express request handler for image description
 */
export function createDescribeImageHandler(
  settingsService?: SettingsService
): (req: Request, res: Response) => Promise<void> {
export function createDescribeImageHandler(): (req: Request, res: Response) => Promise<void> {
  return async (req: Request, res: Response): Promise<void> => {
    const requestId = `describe-image-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`;
    const startedAt = Date.now();
@@ -330,20 +325,12 @@ export function createDescribeImageHandler(
      const cwd = path.dirname(actualPath);
      logger.info(`[${requestId}] Using cwd=${cwd}`);

      // Load autoLoadClaudeMd setting
      const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
        cwd,
        settingsService,
        '[DescribeImage]'
      );

      // Use the same centralized option builder used across the server (validates cwd)
      const sdkOptions = createCustomOptions({
        cwd,
        model: CLAUDE_MODEL_MAP.haiku,
        maxTurns: 1,
        allowedTools: [],
        autoLoadClaudeMd,
        sandbox: { enabled: true, autoAllowBashIfSandboxed: true },
      });

@@ -3,54 +3,16 @@
 */

import { Router } from 'express';
import type { EventEmitter } from '../../lib/events.js';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { createCheckGitHubRemoteHandler } from './routes/check-github-remote.js';
import { createListIssuesHandler } from './routes/list-issues.js';
import { createListPRsHandler } from './routes/list-prs.js';
import { createValidateIssueHandler } from './routes/validate-issue.js';
import {
  createValidationStatusHandler,
  createValidationStopHandler,
  createGetValidationsHandler,
  createDeleteValidationHandler,
  createMarkViewedHandler,
} from './routes/validation-endpoints.js';
import type { SettingsService } from '../../services/settings-service.js';

export function createGitHubRoutes(
  events: EventEmitter,
  settingsService?: SettingsService
): Router {
export function createGitHubRoutes(): Router {
  const router = Router();

  router.post('/check-remote', validatePathParams('projectPath'), createCheckGitHubRemoteHandler());
  router.post('/issues', validatePathParams('projectPath'), createListIssuesHandler());
  router.post('/prs', validatePathParams('projectPath'), createListPRsHandler());
  router.post(
    '/validate-issue',
    validatePathParams('projectPath'),
    createValidateIssueHandler(events, settingsService)
  );

  // Validation management endpoints
  router.post(
    '/validation-status',
    validatePathParams('projectPath'),
    createValidationStatusHandler()
  );
  router.post('/validation-stop', validatePathParams('projectPath'), createValidationStopHandler());
  router.post('/validations', validatePathParams('projectPath'), createGetValidationsHandler());
  router.post(
    '/validation-delete',
    validatePathParams('projectPath'),
    createDeleteValidationHandler()
  );
  router.post(
    '/validation-mark-viewed',
    validatePathParams('projectPath'),
    createMarkViewedHandler(events)
  );
  router.post('/check-remote', createCheckGitHubRemoteHandler());
  router.post('/issues', createListIssuesHandler());
  router.post('/prs', createListPRsHandler());

  return router;
}

@@ -2,7 +2,6 @@
 * POST /list-issues endpoint - List GitHub issues for a project
 */

import { spawn } from 'child_process';
import type { Request, Response } from 'express';
import { execAsync, execEnv, getErrorMessage, logError } from './common.js';
import { checkGitHubRemote } from './check-github-remote.js';
@@ -14,19 +13,6 @@ export interface GitHubLabel {

export interface GitHubAuthor {
  login: string;
  avatarUrl?: string;
}

export interface GitHubAssignee {
  login: string;
  avatarUrl?: string;
}

export interface LinkedPullRequest {
  number: number;
  title: string;
  state: string;
  url: string;
}

export interface GitHubIssue {
@@ -38,8 +24,6 @@ export interface GitHubIssue {
  labels: GitHubLabel[];
  url: string;
  body: string;
  assignees: GitHubAssignee[];
  linkedPRs?: LinkedPullRequest[];
}

export interface ListIssuesResult {
@@ -49,146 +33,6 @@ export interface ListIssuesResult {
  error?: string;
}

/**
 * Fetch linked PRs for a list of issues using GitHub GraphQL API
 */
async function fetchLinkedPRs(
  projectPath: string,
  owner: string,
  repo: string,
  issueNumbers: number[]
): Promise<Map<number, LinkedPullRequest[]>> {
  const linkedPRsMap = new Map<number, LinkedPullRequest[]>();

  if (issueNumbers.length === 0) {
    return linkedPRsMap;
  }

  // Build GraphQL query for batch fetching linked PRs
  // We fetch up to 20 issues at a time to avoid query limits
  const batchSize = 20;
  for (let i = 0; i < issueNumbers.length; i += batchSize) {
    const batch = issueNumbers.slice(i, i + batchSize);

    const issueQueries = batch
      .map(
        (num, idx) => `
    issue${idx}: issue(number: ${num}) {
      number
      timelineItems(first: 10, itemTypes: [CROSS_REFERENCED_EVENT, CONNECTED_EVENT]) {
        nodes {
          ... on CrossReferencedEvent {
            source {
              ... on PullRequest {
                number
                title
                state
                url
              }
            }
          }
          ... on ConnectedEvent {
            subject {
              ... on PullRequest {
                number
                title
                state
                url
              }
            }
          }
        }
      }
    }`
      )
      .join('\n');

    const query = `{
      repository(owner: "${owner}", name: "${repo}") {
        ${issueQueries}
      }
    }`;

    try {
      // Use spawn with stdin to avoid shell injection vulnerabilities
      // --input - reads the JSON request body from stdin
      const requestBody = JSON.stringify({ query });
      const response = await new Promise<Record<string, unknown>>((resolve, reject) => {
        const gh = spawn('gh', ['api', 'graphql', '--input', '-'], {
          cwd: projectPath,
          env: execEnv,
        });

        let stdout = '';
        let stderr = '';
        gh.stdout.on('data', (data: Buffer) => (stdout += data.toString()));
        gh.stderr.on('data', (data: Buffer) => (stderr += data.toString()));

        gh.on('close', (code) => {
          if (code !== 0) {
            return reject(new Error(`gh process exited with code ${code}: ${stderr}`));
          }
          try {
            resolve(JSON.parse(stdout));
          } catch (e) {
            reject(e);
          }
        });

        gh.stdin.write(requestBody);
        gh.stdin.end();
      });

      const repoData = (response?.data as Record<string, unknown>)?.repository as Record<
        string,
        unknown
      > | null;

      if (repoData) {
        batch.forEach((issueNum, idx) => {
          const issueData = repoData[`issue${idx}`] as {
            timelineItems?: {
              nodes?: Array<{
                source?: { number?: number; title?: string; state?: string; url?: string };
                subject?: { number?: number; title?: string; state?: string; url?: string };
              }>;
            };
          } | null;
          if (issueData?.timelineItems?.nodes) {
            const linkedPRs: LinkedPullRequest[] = [];
            const seenPRs = new Set<number>();

            for (const node of issueData.timelineItems.nodes) {
              const pr = node?.source || node?.subject;
              if (pr?.number && !seenPRs.has(pr.number)) {
                seenPRs.add(pr.number);
                linkedPRs.push({
                  number: pr.number,
                  title: pr.title || '',
                  state: (pr.state || '').toLowerCase(),
                  url: pr.url || '',
                });
              }
            }

            if (linkedPRs.length > 0) {
              linkedPRsMap.set(issueNum, linkedPRs);
            }
          }
        });
      }
    } catch (error) {
      // If GraphQL fails, continue without linked PRs
      console.warn(
        'Failed to fetch linked PRs via GraphQL:',
        error instanceof Error ? error.message : error
      );
    }
  }

  return linkedPRsMap;
}

export function createListIssuesHandler() {
  return async (req: Request, res: Response): Promise<void> => {
    try {
@@ -209,17 +53,17 @@ export function createListIssuesHandler() {
        return;
      }

      // Fetch open and closed issues in parallel (now including assignees)
      // Fetch open and closed issues in parallel
      const [openResult, closedResult] = await Promise.all([
        execAsync(
          'gh issue list --state open --json number,title,state,author,createdAt,labels,url,body,assignees --limit 100',
          'gh issue list --state open --json number,title,state,author,createdAt,labels,url,body --limit 100',
          {
            cwd: projectPath,
            env: execEnv,
          }
        ),
        execAsync(
          'gh issue list --state closed --json number,title,state,author,createdAt,labels,url,body,assignees --limit 50',
          'gh issue list --state closed --json number,title,state,author,createdAt,labels,url,body --limit 50',
          {
            cwd: projectPath,
            env: execEnv,
@@ -233,24 +77,6 @@ export function createListIssuesHandler() {
      const openIssues: GitHubIssue[] = JSON.parse(openStdout || '[]');
      const closedIssues: GitHubIssue[] = JSON.parse(closedStdout || '[]');

      // Fetch linked PRs for open issues (more relevant for active work)
      if (remoteStatus.owner && remoteStatus.repo && openIssues.length > 0) {
        const linkedPRsMap = await fetchLinkedPRs(
          projectPath,
          remoteStatus.owner,
          remoteStatus.repo,
          openIssues.map((i) => i.number)
        );

        // Attach linked PRs to issues
        for (const issue of openIssues) {
          const linkedPRs = linkedPRsMap.get(issue.number);
          if (linkedPRs) {
            issue.linkedPRs = linkedPRs;
          }
        }
      }

      res.json({
        success: true,
        openIssues,

@@ -1,302 +0,0 @@
/**
 * POST /validate-issue endpoint - Validate a GitHub issue using Claude SDK (async)
 *
 * Scans the codebase to determine if an issue is valid, invalid, or needs clarification.
 * Runs asynchronously and emits events for progress and completion.
 */

import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../../lib/events.js';
import type { IssueValidationResult, IssueValidationEvent, AgentModel } from '@automaker/types';
import { createSuggestionsOptions } from '../../../lib/sdk-options.js';
import { writeValidation } from '../../../lib/validation-storage.js';
import {
  issueValidationSchema,
  ISSUE_VALIDATION_SYSTEM_PROMPT,
  buildValidationPrompt,
} from './validation-schema.js';
import {
  trySetValidationRunning,
  clearValidationStatus,
  getErrorMessage,
  logError,
  logger,
} from './validation-common.js';
import type { SettingsService } from '../../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../../lib/settings-helpers.js';

/** Valid model values for validation */
const VALID_MODELS: readonly AgentModel[] = ['opus', 'sonnet', 'haiku'] as const;

/**
 * Request body for issue validation
 */
interface ValidateIssueRequestBody {
  projectPath: string;
  issueNumber: number;
  issueTitle: string;
  issueBody: string;
  issueLabels?: string[];
  /** Model to use for validation (opus, sonnet, haiku) */
  model?: AgentModel;
}

/**
 * Run the validation asynchronously
 *
 * Emits events for start, progress, complete, and error.
 * Stores result on completion.
 */
async function runValidation(
  projectPath: string,
  issueNumber: number,
  issueTitle: string,
  issueBody: string,
  issueLabels: string[] | undefined,
  model: AgentModel,
  events: EventEmitter,
  abortController: AbortController,
  settingsService?: SettingsService
): Promise<void> {
  // Emit start event
  const startEvent: IssueValidationEvent = {
    type: 'issue_validation_start',
    issueNumber,
    issueTitle,
    projectPath,
  };
  events.emit('issue-validation:event', startEvent);

  // Set up timeout (6 minutes)
  const VALIDATION_TIMEOUT_MS = 360000;
  const timeoutId = setTimeout(() => {
    logger.warn(`Validation timeout reached after ${VALIDATION_TIMEOUT_MS}ms`);
    abortController.abort();
  }, VALIDATION_TIMEOUT_MS);

  try {
    // Build the prompt
    const prompt = buildValidationPrompt(issueNumber, issueTitle, issueBody, issueLabels);

    // Load autoLoadClaudeMd setting
    const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
      projectPath,
      settingsService,
      '[ValidateIssue]'
    );

    // Create SDK options with structured output and abort controller
    const options = createSuggestionsOptions({
      cwd: projectPath,
      model,
      systemPrompt: ISSUE_VALIDATION_SYSTEM_PROMPT,
      abortController,
      autoLoadClaudeMd,
      outputFormat: {
        type: 'json_schema',
        schema: issueValidationSchema as Record<string, unknown>,
      },
    });

    // Execute the query
    const stream = query({ prompt, options });
    let validationResult: IssueValidationResult | null = null;
    let responseText = '';

    for await (const msg of stream) {
      // Collect assistant text for debugging and emit progress
      if (msg.type === 'assistant' && msg.message?.content) {
        for (const block of msg.message.content) {
          if (block.type === 'text') {
            responseText += block.text;

            // Emit progress event
            const progressEvent: IssueValidationEvent = {
              type: 'issue_validation_progress',
              issueNumber,
              content: block.text,
              projectPath,
            };
            events.emit('issue-validation:event', progressEvent);
          }
        }
      }

      // Extract structured output on success
      if (msg.type === 'result' && msg.subtype === 'success') {
        const resultMsg = msg as { structured_output?: IssueValidationResult };
        if (resultMsg.structured_output) {
          validationResult = resultMsg.structured_output;
          logger.debug('Received structured output:', validationResult);
        }
      }

      // Handle errors
      if (msg.type === 'result') {
        const resultMsg = msg as { subtype?: string };
        if (resultMsg.subtype === 'error_max_structured_output_retries') {
          logger.error('Failed to produce valid structured output after retries');
          throw new Error('Could not produce valid validation output');
        }
      }
    }

    // Clear timeout
    clearTimeout(timeoutId);

    // Require structured output
    if (!validationResult) {
      logger.error('No structured output received from Claude SDK');
      logger.debug('Raw response text:', responseText);
      throw new Error('Validation failed: no structured output received');
    }

    logger.info(`Issue #${issueNumber} validation complete: ${validationResult.verdict}`);

    // Store the result
    await writeValidation(projectPath, issueNumber, {
      issueNumber,
      issueTitle,
      validatedAt: new Date().toISOString(),
      model,
      result: validationResult,
    });

    // Emit completion event
    const completeEvent: IssueValidationEvent = {
      type: 'issue_validation_complete',
      issueNumber,
      issueTitle,
      result: validationResult,
      projectPath,
      model,
    };
    events.emit('issue-validation:event', completeEvent);
  } catch (error) {
    clearTimeout(timeoutId);

    const errorMessage = getErrorMessage(error);
    logError(error, `Issue #${issueNumber} validation failed`);

    // Emit error event
    const errorEvent: IssueValidationEvent = {
      type: 'issue_validation_error',
      issueNumber,
      error: errorMessage,
      projectPath,
    };
    events.emit('issue-validation:event', errorEvent);

    throw error;
  }
}

/**
 * Creates the handler for validating GitHub issues against the codebase.
 *
 * Uses Claude SDK with:
 * - Read-only tools (Read, Glob, Grep) for codebase analysis
 * - JSON schema structured output for reliable parsing
 * - System prompt guiding the validation process
 * - Async execution with event emission
 */
export function createValidateIssueHandler(
  events: EventEmitter,
  settingsService?: SettingsService
) {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const {
        projectPath,
        issueNumber,
        issueTitle,
        issueBody,
        issueLabels,
        model = 'opus',
      } = req.body as ValidateIssueRequestBody;

      // Validate required fields
      if (!projectPath) {
        res.status(400).json({ success: false, error: 'projectPath is required' });
        return;
      }

      if (!issueNumber || typeof issueNumber !== 'number') {
        res
          .status(400)
          .json({ success: false, error: 'issueNumber is required and must be a number' });
        return;
      }

      if (!issueTitle || typeof issueTitle !== 'string') {
        res.status(400).json({ success: false, error: 'issueTitle is required' });
        return;
      }

      if (typeof issueBody !== 'string') {
        res.status(400).json({ success: false, error: 'issueBody must be a string' });
        return;
      }

      // Validate model parameter at runtime
      if (!VALID_MODELS.includes(model)) {
        res.status(400).json({
          success: false,
          error: `Invalid model. Must be one of: ${VALID_MODELS.join(', ')}`,
        });
        return;
      }

      logger.info(`Starting async validation for issue #${issueNumber}: ${issueTitle}`);

      // Create abort controller and atomically try to claim validation slot
      // This prevents TOCTOU race conditions
      const abortController = new AbortController();
      if (!trySetValidationRunning(projectPath, issueNumber, abortController)) {
        res.json({
          success: false,
          error: `Validation is already running for issue #${issueNumber}`,
        });
        return;
      }

      // Start validation in background (fire-and-forget)
      runValidation(
        projectPath,
        issueNumber,
        issueTitle,
        issueBody,
        issueLabels,
        model,
        events,
        abortController,
        settingsService
      )
        .catch((error) => {
          // Error is already handled inside runValidation (event emitted)
          logger.debug('Validation error caught in background handler:', error);
        })
        .finally(() => {
          clearValidationStatus(projectPath, issueNumber);
        });

      // Return immediately
      res.json({
        success: true,
        message: `Validation started for issue #${issueNumber}`,
        issueNumber,
      });
    } catch (error) {
      logError(error, `Issue validation failed`);
      logger.error('Issue validation error:', error);

      if (!res.headersSent) {
        res.status(500).json({
          success: false,
          error: getErrorMessage(error),
        });
      }
    }
  };
}

@@ -1,174 +0,0 @@
|
||||
/**
|
||||
* Common utilities and state for issue validation routes
|
||||
*
|
||||
* Tracks running validation status per issue to support:
|
||||
* - Checking if a validation is in progress
|
||||
* - Cancelling a running validation
|
||||
* - Preventing duplicate validations for the same issue
|
||||
*/
|
||||
|
||||
import { createLogger } from '@automaker/utils';
|
||||
import { getErrorMessage as getErrorMessageShared, createLogError } from '../../common.js';
|
||||
|
||||
const logger = createLogger('IssueValidation');
|
||||
|
||||
/**
|
||||
* Status of a validation in progress
|
||||
*/
|
||||
interface ValidationStatus {
|
||||
isRunning: boolean;
|
||||
abortController: AbortController;
|
||||
startedAt: Date;
|
||||
}
|
||||
|
||||
/**
|
||||
* Map of issue number to validation status
|
||||
* Key format: `${projectPath}||${issueNumber}` to support multiple projects
|
||||
* Note: Using `||` as delimiter since `:` appears in Windows paths (e.g., C:\)
|
||||
*/
|
||||
const validationStatusMap = new Map<string, ValidationStatus>();
|
||||
|
||||
/** Maximum age for stale validation entries before cleanup (1 hour) */
|
||||
const MAX_VALIDATION_AGE_MS = 60 * 60 * 1000;
|
||||
|
||||
/**
|
||||
* Create a unique key for a validation
|
||||
* Uses `||` as delimiter since `:` appears in Windows paths
|
||||
*/
|
||||
function getValidationKey(projectPath: string, issueNumber: number): string {
|
||||
return `${projectPath}||${issueNumber}`;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a validation is currently running for an issue
|
||||
*/
|
||||
export function isValidationRunning(projectPath: string, issueNumber: number): boolean {
|
||||
const key = getValidationKey(projectPath, issueNumber);
|
||||
const status = validationStatusMap.get(key);
|
||||
return status?.isRunning ?? false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get validation status for an issue
|
||||
*/
|
||||
export function getValidationStatus(
|
||||
projectPath: string,
|
||||
issueNumber: number
|
||||
): { isRunning: boolean; startedAt?: Date } | null {
|
||||
const key = getValidationKey(projectPath, issueNumber);
|
||||
const status = validationStatusMap.get(key);
|
||||
if (!status) {
|
||||
return null;
|
||||
}
|
||||
return {
|
||||
isRunning: status.isRunning,
|
||||
startedAt: status.startedAt,
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all running validations for a project
|
||||
*/
|
||||
export function getRunningValidations(projectPath: string): number[] {
|
||||
const runningIssues: number[] = [];
|
||||
const prefix = `${projectPath}||`;
|
||||
for (const [key, status] of validationStatusMap.entries()) {
|
||||
if (status.isRunning && key.startsWith(prefix)) {
|
||||
const issueNumber = parseInt(key.slice(prefix.length), 10);
|
||||
if (!isNaN(issueNumber)) {
|
||||
runningIssues.push(issueNumber);
|
||||
}
|
||||
}
|
||||
}
|
||||
return runningIssues;
|
||||
}
|
||||
|
||||
/**
|
||||
* Set a validation as running
|
||||
*/
|
||||
export function setValidationRunning(
|
||||
projectPath: string,
|
||||
issueNumber: number,
|
||||
abortController: AbortController
|
||||
): void {
|
||||
const key = getValidationKey(projectPath, issueNumber);
|
||||
validationStatusMap.set(key, {
|
||||
isRunning: true,
|
||||
abortController,
|
||||
startedAt: new Date(),
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Atomically try to set a validation as running (check-and-set)
|
||||
* Prevents TOCTOU race conditions when starting validations
|
||||
*
|
||||
* @returns true if successfully claimed, false if already running
|
||||
*/
|
||||
export function trySetValidationRunning(
|
||||
projectPath: string,
|
||||
issueNumber: number,
|
||||
abortController: AbortController
|
||||
): boolean {
|
||||
const key = getValidationKey(projectPath, issueNumber);
|
||||
if (validationStatusMap.has(key)) {
|
||||
return false; // Already running
|
||||
}
|
||||
validationStatusMap.set(key, {
|
||||
isRunning: true,
|
||||
abortController,
|
||||
startedAt: new Date(),
|
||||
});
|
||||
return true; // Successfully claimed
|
||||
}
|
||||
|
||||
/**
|
||||
* Cleanup stale validation entries (e.g., from crashed validations)
|
||||
* Should be called periodically to prevent memory leaks
|
||||
*/
|
||||
export function cleanupStaleValidations(): number {
|
||||
const now = Date.now();
|
||||
let cleanedCount = 0;
|
||||
for (const [key, status] of validationStatusMap.entries()) {
|
||||
if (now - status.startedAt.getTime() > MAX_VALIDATION_AGE_MS) {
|
||||
status.abortController.abort();
|
||||
validationStatusMap.delete(key);
|
||||
cleanedCount++;
|
||||
}
|
||||
}
|
||||
if (cleanedCount > 0) {
|
||||
logger.info(`Cleaned up ${cleanedCount} stale validation entries`);
|
||||
}
|
||||
return cleanedCount;
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear validation status (call when validation completes or errors)
|
||||
*/
|
||||
export function clearValidationStatus(projectPath: string, issueNumber: number): void {
|
||||
const key = getValidationKey(projectPath, issueNumber);
|
||||
validationStatusMap.delete(key);
|
||||
}
|
||||
|
||||
/**
|
||||
* Abort a running validation
|
||||
*
|
||||
* @returns true if validation was aborted, false if not running
|
||||
*/
|
||||
export function abortValidation(projectPath: string, issueNumber: number): boolean {
|
||||
const key = getValidationKey(projectPath, issueNumber);
|
||||
const status = validationStatusMap.get(key);
|
||||
|
||||
if (!status || !status.isRunning) {
|
||||
return false;
|
||||
}
|
||||
|
||||
status.abortController.abort();
|
||||
validationStatusMap.delete(key);
|
||||
return true;
|
||||
}
|
||||
|
||||
// Re-export shared utilities
|
||||
export { getErrorMessageShared as getErrorMessage };
|
||||
export const logError = createLogError(logger);
|
||||
export { logger };
|
||||
@@ -1,236 +0,0 @@
|
||||
/**
|
||||
* Additional validation endpoints for status, stop, and retrieving stored validations
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { EventEmitter } from '../../../lib/events.js';
|
||||
import type { IssueValidationEvent } from '@automaker/types';
|
||||
import {
|
||||
isValidationRunning,
|
||||
getValidationStatus,
|
||||
getRunningValidations,
|
||||
abortValidation,
|
||||
getErrorMessage,
|
||||
logError,
|
||||
logger,
|
||||
} from './validation-common.js';
|
||||
import {
|
||||
readValidation,
|
||||
getAllValidations,
|
||||
getValidationWithFreshness,
|
||||
deleteValidation,
|
||||
markValidationViewed,
|
||||
} from '../../../lib/validation-storage.js';
|
||||
|
||||
/**
|
||||
* POST /validation-status - Check if validation is running for an issue
|
||||
*/
|
||||
export function createValidationStatusHandler() {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, issueNumber } = req.body as {
|
||||
projectPath: string;
|
||||
issueNumber?: number;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
// If issueNumber provided, check specific issue
|
||||
if (issueNumber !== undefined) {
|
||||
const status = getValidationStatus(projectPath, issueNumber);
|
||||
res.json({
|
||||
success: true,
|
||||
isRunning: status?.isRunning ?? false,
|
||||
startedAt: status?.startedAt?.toISOString(),
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Otherwise, return all running validations for the project
|
||||
const runningIssues = getRunningValidations(projectPath);
|
||||
res.json({
|
||||
success: true,
|
||||
runningIssues,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Validation status check failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* POST /validation-stop - Cancel a running validation
|
||||
*/
|
||||
export function createValidationStopHandler() {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, issueNumber } = req.body as {
|
||||
projectPath: string;
|
||||
issueNumber: number;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!issueNumber || typeof issueNumber !== 'number') {
|
||||
res
|
||||
.status(400)
|
||||
.json({ success: false, error: 'issueNumber is required and must be a number' });
|
||||
return;
|
||||
}
|
||||
|
||||
const wasAborted = abortValidation(projectPath, issueNumber);
|
||||
|
||||
if (wasAborted) {
|
||||
logger.info(`Validation for issue #${issueNumber} was stopped`);
|
||||
res.json({
|
||||
success: true,
|
||||
message: `Validation for issue #${issueNumber} has been stopped`,
|
||||
});
|
||||
} else {
|
||||
res.json({
|
||||
success: false,
|
||||
error: `No validation is running for issue #${issueNumber}`,
|
||||
});
|
||||
}
|
||||
} catch (error) {
|
||||
logError(error, 'Validation stop failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* POST /validations - Get stored validations for a project
|
||||
*/
|
||||
export function createGetValidationsHandler() {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, issueNumber } = req.body as {
|
||||
projectPath: string;
|
||||
issueNumber?: number;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
// If issueNumber provided, get specific validation with freshness info
|
||||
if (issueNumber !== undefined) {
|
||||
const result = await getValidationWithFreshness(projectPath, issueNumber);
|
||||
|
||||
if (!result) {
|
||||
res.json({
|
||||
success: true,
|
||||
validation: null,
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
validation: result.validation,
|
||||
isStale: result.isStale,
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Otherwise, get all validations for the project
|
||||
const validations = await getAllValidations(projectPath);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
validations,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Get validations failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* POST /validation-delete - Delete a stored validation
|
||||
*/
|
||||
export function createDeleteValidationHandler() {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, issueNumber } = req.body as {
|
||||
projectPath: string;
|
||||
issueNumber: number;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!issueNumber || typeof issueNumber !== 'number') {
|
||||
res
|
||||
.status(400)
|
||||
.json({ success: false, error: 'issueNumber is required and must be a number' });
|
||||
return;
|
||||
}
|
||||
|
||||
const deleted = await deleteValidation(projectPath, issueNumber);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
deleted,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Delete validation failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* POST /validation-mark-viewed - Mark a validation as viewed by the user
|
||||
*/
|
||||
export function createMarkViewedHandler(events: EventEmitter) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, issueNumber } = req.body as {
|
||||
projectPath: string;
|
||||
issueNumber: number;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!issueNumber || typeof issueNumber !== 'number') {
|
||||
res
|
||||
.status(400)
|
||||
.json({ success: false, error: 'issueNumber is required and must be a number' });
|
||||
return;
|
||||
}
|
||||
|
||||
const success = await markValidationViewed(projectPath, issueNumber);
|
||||
|
||||
if (success) {
|
||||
// Emit event so UI can update the unviewed count
|
||||
const viewedEvent: IssueValidationEvent = {
|
||||
type: 'issue_validation_viewed',
|
||||
issueNumber,
|
||||
projectPath,
|
||||
};
|
||||
events.emit('issue-validation:event', viewedEvent);
|
||||
}
|
||||
|
||||
res.json({ success });
|
||||
} catch (error) {
|
||||
logError(error, 'Mark validation viewed failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,138 +0,0 @@
|
||||
/**
|
||||
* Issue Validation Schema and System Prompt
|
||||
*
|
||||
* Defines the JSON schema for Claude's structured output and
|
||||
* the system prompt that guides the validation process.
|
||||
*/
|
||||
|
||||
/**
|
||||
* JSON Schema for issue validation structured output.
|
||||
* Used with Claude SDK's outputFormat option to ensure reliable parsing.
|
||||
*/
|
||||
export const issueValidationSchema = {
|
||||
type: 'object',
|
||||
properties: {
|
||||
verdict: {
|
||||
type: 'string',
|
||||
enum: ['valid', 'invalid', 'needs_clarification'],
|
||||
description: 'The validation verdict for the issue',
|
||||
},
|
||||
confidence: {
|
||||
type: 'string',
|
||||
enum: ['high', 'medium', 'low'],
|
||||
description: 'How confident the AI is in its assessment',
|
||||
},
|
||||
reasoning: {
|
||||
type: 'string',
|
||||
description: 'Detailed explanation of the verdict',
|
||||
},
|
||||
bugConfirmed: {
|
||||
type: 'boolean',
|
||||
description: 'For bug reports: whether the bug was confirmed in the codebase',
|
||||
},
|
||||
relatedFiles: {
|
||||
type: 'array',
|
||||
items: { type: 'string' },
|
||||
description: 'Files related to the issue found during analysis',
|
||||
},
|
||||
suggestedFix: {
|
||||
type: 'string',
|
||||
description: 'Suggested approach to fix or implement the issue',
|
||||
},
|
||||
missingInfo: {
|
||||
type: 'array',
|
||||
items: { type: 'string' },
|
||||
description: 'Information needed when verdict is needs_clarification',
|
||||
},
|
||||
estimatedComplexity: {
|
||||
type: 'string',
|
||||
enum: ['trivial', 'simple', 'moderate', 'complex', 'very_complex'],
|
||||
description: 'Estimated effort to address the issue',
|
||||
},
|
||||
},
|
||||
required: ['verdict', 'confidence', 'reasoning'],
|
||||
additionalProperties: false,
|
||||
} as const;
|
||||
|
||||
/**
|
||||
* System prompt that guides Claude in validating GitHub issues.
|
||||
* Instructs the model to use read-only tools to analyze the codebase.
|
||||
*/
|
||||
export const ISSUE_VALIDATION_SYSTEM_PROMPT = `You are an expert code analyst validating GitHub issues against a codebase.
|
||||
|
||||
Your task is to analyze a GitHub issue and determine if it's valid by scanning the codebase.
|
||||
|
||||
## Validation Process
|
||||
|
||||
1. **Read the issue carefully** - Understand what is being reported or requested
|
||||
2. **Search the codebase** - Use Glob to find relevant files by pattern, Grep to search for keywords
|
||||
3. **Examine the code** - Use Read to look at the actual implementation in relevant files
|
||||
4. **Form your verdict** - Based on your analysis, determine if the issue is valid
|
||||
|
||||
## Verdicts
|
||||
|
||||
- **valid**: The issue describes a real problem that exists in the codebase, or a clear feature request that can be implemented. The referenced files/components exist and the issue is actionable.
|
||||
|
||||
- **invalid**: The issue describes behavior that doesn't exist, references non-existent files or components, is based on a misunderstanding of the code, or the described "bug" is actually expected behavior.
|
||||
|
||||
- **needs_clarification**: The issue lacks sufficient detail to verify. Specify what additional information is needed in the missingInfo field.
|
||||
|
||||
## For Bug Reports, Check:
|
||||
- Do the referenced files/components exist?
|
||||
- Does the code match what the issue describes?
|
||||
- Is the described behavior actually a bug or expected?
|
||||
- Can you locate the code that would cause the reported issue?
|
||||
|
||||
## For Feature Requests, Check:
|
||||
- Does the feature already exist?
|
||||
- Is the implementation location clear?
|
||||
- Is the request technically feasible given the codebase structure?
|
||||
|
||||
## Response Guidelines
|
||||
|
||||
- **Always include relatedFiles** when you find relevant code
|
||||
- **Set bugConfirmed to true** only if you can definitively confirm a bug exists in the code
|
||||
- **Provide a suggestedFix** when you have a clear idea of how to address the issue
|
||||
- **Use missingInfo** when the verdict is needs_clarification to list what's needed
|
||||
- **Set estimatedComplexity** to help prioritize:
|
||||
- trivial: Simple text changes, one-line fixes
|
||||
- simple: Small changes to one file
|
||||
- moderate: Changes to multiple files or moderate logic changes
|
||||
- complex: Significant refactoring or new feature implementation
|
||||
- very_complex: Major architectural changes or cross-cutting concerns
|
||||
|
||||
Be thorough in your analysis but focus on files that are directly relevant to the issue.`;
|
||||
|
||||
/**
|
||||
* Build the user prompt for issue validation.
|
||||
*
|
||||
* Creates a structured prompt that includes the issue details for Claude
|
||||
* to analyze against the codebase.
|
||||
*
|
||||
* @param issueNumber - The GitHub issue number
|
||||
* @param issueTitle - The issue title
|
||||
* @param issueBody - The issue body/description
|
||||
* @param issueLabels - Optional array of label names
|
||||
* @returns Formatted prompt string for the validation request
|
||||
*/
|
||||
export function buildValidationPrompt(
|
||||
issueNumber: number,
|
||||
issueTitle: string,
|
||||
issueBody: string,
|
||||
issueLabels?: string[]
|
||||
): string {
|
||||
const labelsSection = issueLabels?.length ? `\n\n**Labels:** ${issueLabels.join(', ')}` : '';
|
||||
|
||||
return `Please validate the following GitHub issue by analyzing the codebase:
|
||||
|
||||
## Issue #${issueNumber}: ${issueTitle}
|
||||
${labelsSection}
|
||||
|
||||
### Description
|
||||
|
||||
${issueBody || '(No description provided)'}
|
||||
|
||||
---
|
||||
|
||||
Scan the codebase to verify this issue. Look for the files, components, or functionality mentioned. Determine if this issue is valid, invalid, or needs clarification.`;
|
||||
}
|
||||
@@ -1,20 +0,0 @@
/**
 * Common utilities for MCP routes
 */

/**
 * Extract error message from unknown error
 */
export function getErrorMessage(error: unknown): string {
  if (error instanceof Error) {
    return error.message;
  }
  return String(error);
}

/**
 * Log error with prefix
 */
export function logError(error: unknown, message: string): void {
  console.error(`[MCP] ${message}:`, error);
}

@@ -1,36 +0,0 @@
|
||||
/**
|
||||
* MCP routes - HTTP API for testing MCP servers
|
||||
*
|
||||
* Provides endpoints for:
|
||||
* - Testing MCP server connections
|
||||
* - Listing available tools from MCP servers
|
||||
*
|
||||
* Mounted at /api/mcp in the main server.
|
||||
*/
|
||||
|
||||
import { Router } from 'express';
|
||||
import type { MCPTestService } from '../../services/mcp-test-service.js';
|
||||
import { createTestServerHandler } from './routes/test-server.js';
|
||||
import { createListToolsHandler } from './routes/list-tools.js';
|
||||
|
||||
/**
|
||||
* Create MCP router with all endpoints
|
||||
*
|
||||
* Endpoints:
|
||||
* - POST /test - Test MCP server connection
|
||||
* - POST /tools - List tools from MCP server
|
||||
*
|
||||
* @param mcpTestService - Instance of MCPTestService for testing connections
|
||||
* @returns Express Router configured with all MCP endpoints
|
||||
*/
|
||||
export function createMCPRoutes(mcpTestService: MCPTestService): Router {
|
||||
const router = Router();
|
||||
|
||||
// Test MCP server connection
|
||||
router.post('/test', createTestServerHandler(mcpTestService));
|
||||
|
||||
// List tools from MCP server
|
||||
router.post('/tools', createListToolsHandler(mcpTestService));
|
||||
|
||||
return router;
|
||||
}
|
||||
@@ -1,57 +0,0 @@
|
||||
/**
|
||||
* POST /api/mcp/tools - List tools for an MCP server
|
||||
*
|
||||
* Lists available tools for an MCP server.
|
||||
* Similar to test but focused on tool discovery.
|
||||
*
|
||||
* SECURITY: Only accepts serverId to look up saved configs. Does NOT accept
|
||||
* arbitrary serverConfig to prevent drive-by command execution attacks.
|
||||
* Users must explicitly save a server config through the UI before testing.
|
||||
*
|
||||
* Request body:
|
||||
* { serverId: string } - Get tools by server ID from settings
|
||||
*
|
||||
* Response: { success: boolean, tools?: MCPToolInfo[], error?: string }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { MCPTestService } from '../../../services/mcp-test-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
interface ListToolsRequest {
|
||||
serverId: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create handler factory for POST /api/mcp/tools
|
||||
*/
|
||||
export function createListToolsHandler(mcpTestService: MCPTestService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const body = req.body as ListToolsRequest;
|
||||
|
||||
if (!body.serverId || typeof body.serverId !== 'string') {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'serverId is required',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await mcpTestService.testServerById(body.serverId);
|
||||
|
||||
// Return only tool-related information
|
||||
res.json({
|
||||
success: result.success,
|
||||
tools: result.tools,
|
||||
error: result.error,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'List tools failed');
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: getErrorMessage(error),
|
||||
});
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,50 +0,0 @@
|
||||
/**
|
||||
* POST /api/mcp/test - Test MCP server connection and list tools
|
||||
*
|
||||
* Tests connection to an MCP server and returns available tools.
|
||||
*
|
||||
* SECURITY: Only accepts serverId to look up saved configs. Does NOT accept
|
||||
* arbitrary serverConfig to prevent drive-by command execution attacks.
|
||||
* Users must explicitly save a server config through the UI before testing.
|
||||
*
|
||||
* Request body:
|
||||
* { serverId: string } - Test server by ID from settings
|
||||
*
|
||||
* Response: { success: boolean, tools?: MCPToolInfo[], error?: string, connectionTime?: number }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { MCPTestService } from '../../../services/mcp-test-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
interface TestServerRequest {
|
||||
serverId: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Create handler factory for POST /api/mcp/test
|
||||
*/
|
||||
export function createTestServerHandler(mcpTestService: MCPTestService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const body = req.body as TestServerRequest;
|
||||
|
||||
if (!body.serverId || typeof body.serverId !== 'string') {
|
||||
res.status(400).json({
|
||||
success: false,
|
||||
error: 'serverId is required',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const result = await mcpTestService.testServerById(body.serverId);
|
||||
res.json(result);
|
||||
} catch (error) {
|
||||
logError(error, 'Test server failed');
|
||||
res.status(500).json({
|
||||
success: false,
|
||||
error: getErrorMessage(error),
|
||||
});
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,21 +0,0 @@
/**
 * Common utilities for pipeline routes
 *
 * Provides logger and error handling utilities shared across all pipeline endpoints.
 */

import { createLogger } from '@automaker/utils';
import { getErrorMessage as getErrorMessageShared, createLogError } from '../common.js';

/** Logger instance for pipeline-related operations */
export const logger = createLogger('Pipeline');

/**
 * Extract user-friendly error message from error objects
 */
export { getErrorMessageShared as getErrorMessage };

/**
 * Log error with automatic logger binding
 */
export const logError = createLogError(logger);

@@ -1,77 +0,0 @@
|
||||
/**
|
||||
* Pipeline routes - HTTP API for pipeline configuration management
|
||||
*
|
||||
* Provides endpoints for:
|
||||
* - Getting pipeline configuration
|
||||
* - Saving pipeline configuration
|
||||
* - Adding, updating, deleting, and reordering pipeline steps
|
||||
*
|
||||
* All endpoints use handler factories that receive the PipelineService instance.
|
||||
* Mounted at /api/pipeline in the main server.
|
||||
*/
|
||||
|
||||
import { Router } from 'express';
|
||||
import type { PipelineService } from '../../services/pipeline-service.js';
|
||||
import { validatePathParams } from '../../middleware/validate-paths.js';
|
||||
import { createGetConfigHandler } from './routes/get-config.js';
|
||||
import { createSaveConfigHandler } from './routes/save-config.js';
|
||||
import { createAddStepHandler } from './routes/add-step.js';
|
||||
import { createUpdateStepHandler } from './routes/update-step.js';
|
||||
import { createDeleteStepHandler } from './routes/delete-step.js';
|
||||
import { createReorderStepsHandler } from './routes/reorder-steps.js';
|
||||
|
||||
/**
|
||||
* Create pipeline router with all endpoints
|
||||
*
|
||||
* Endpoints:
|
||||
* - POST /config - Get pipeline configuration
|
||||
* - POST /config/save - Save entire pipeline configuration
|
||||
* - POST /steps/add - Add a new pipeline step
|
||||
* - POST /steps/update - Update an existing pipeline step
|
||||
* - POST /steps/delete - Delete a pipeline step
|
||||
* - POST /steps/reorder - Reorder pipeline steps
|
||||
*
|
||||
* @param pipelineService - Instance of PipelineService for file I/O
|
||||
* @returns Express Router configured with all pipeline endpoints
|
||||
*/
|
||||
export function createPipelineRoutes(pipelineService: PipelineService): Router {
|
||||
const router = Router();
|
||||
|
||||
// Get pipeline configuration
|
||||
router.post(
|
||||
'/config',
|
||||
validatePathParams('projectPath'),
|
||||
createGetConfigHandler(pipelineService)
|
||||
);
|
||||
|
||||
// Save entire pipeline configuration
|
||||
router.post(
|
||||
'/config/save',
|
||||
validatePathParams('projectPath'),
|
||||
createSaveConfigHandler(pipelineService)
|
||||
);
|
||||
|
||||
// Pipeline step operations
|
||||
router.post(
|
||||
'/steps/add',
|
||||
validatePathParams('projectPath'),
|
||||
createAddStepHandler(pipelineService)
|
||||
);
|
||||
router.post(
|
||||
'/steps/update',
|
||||
validatePathParams('projectPath'),
|
||||
createUpdateStepHandler(pipelineService)
|
||||
);
|
||||
router.post(
|
||||
'/steps/delete',
|
||||
validatePathParams('projectPath'),
|
||||
createDeleteStepHandler(pipelineService)
|
||||
);
|
||||
router.post(
|
||||
'/steps/reorder',
|
||||
validatePathParams('projectPath'),
|
||||
createReorderStepsHandler(pipelineService)
|
||||
);
|
||||
|
||||
return router;
|
||||
}
|
||||
@@ -1,54 +0,0 @@
|
||||
/**
|
||||
* POST /api/pipeline/steps/add - Add a new pipeline step
|
||||
*
|
||||
* Adds a new step to the pipeline configuration.
|
||||
*
|
||||
* Request body: { projectPath: string, step: { name, order, instructions, colorClass } }
|
||||
* Response: { success: true, step: PipelineStep }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { PipelineService } from '../../../services/pipeline-service.js';
|
||||
import type { PipelineStep } from '@automaker/types';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createAddStepHandler(pipelineService: PipelineService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, step } = req.body as {
|
||||
projectPath: string;
|
||||
step: Omit<PipelineStep, 'id' | 'createdAt' | 'updatedAt'>;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!step) {
|
||||
res.status(400).json({ success: false, error: 'step is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!step.name) {
|
||||
res.status(400).json({ success: false, error: 'step.name is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (step.instructions === undefined) {
|
||||
res.status(400).json({ success: false, error: 'step.instructions is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
const newStep = await pipelineService.addStep(projectPath, step);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
step: newStep,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Add pipeline step failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,42 +0,0 @@
|
||||
/**
|
||||
* POST /api/pipeline/steps/delete - Delete a pipeline step
|
||||
*
|
||||
* Removes a step from the pipeline configuration.
|
||||
*
|
||||
* Request body: { projectPath: string, stepId: string }
|
||||
* Response: { success: true }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { PipelineService } from '../../../services/pipeline-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createDeleteStepHandler(pipelineService: PipelineService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, stepId } = req.body as {
|
||||
projectPath: string;
|
||||
stepId: string;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!stepId) {
|
||||
res.status(400).json({ success: false, error: 'stepId is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
await pipelineService.deleteStep(projectPath, stepId);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Delete pipeline step failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,35 +0,0 @@
|
||||
/**
|
||||
* POST /api/pipeline/config - Get pipeline configuration
|
||||
*
|
||||
* Returns the pipeline configuration for a project.
|
||||
*
|
||||
* Request body: { projectPath: string }
|
||||
* Response: { success: true, config: PipelineConfig }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { PipelineService } from '../../../services/pipeline-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createGetConfigHandler(pipelineService: PipelineService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath } = req.body;
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
const config = await pipelineService.getPipelineConfig(projectPath);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
config,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Get pipeline config failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,42 +0,0 @@
|
||||
/**
|
||||
* POST /api/pipeline/steps/reorder - Reorder pipeline steps
|
||||
*
|
||||
* Reorders the steps in the pipeline configuration.
|
||||
*
|
||||
* Request body: { projectPath: string, stepIds: string[] }
|
||||
* Response: { success: true }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { PipelineService } from '../../../services/pipeline-service.js';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createReorderStepsHandler(pipelineService: PipelineService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, stepIds } = req.body as {
|
||||
projectPath: string;
|
||||
stepIds: string[];
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!stepIds || !Array.isArray(stepIds)) {
|
||||
res.status(400).json({ success: false, error: 'stepIds array is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
await pipelineService.reorderSteps(projectPath, stepIds);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Reorder pipeline steps failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,43 +0,0 @@
|
||||
/**
|
||||
* POST /api/pipeline/config/save - Save entire pipeline configuration
|
||||
*
|
||||
* Saves the complete pipeline configuration for a project.
|
||||
*
|
||||
* Request body: { projectPath: string, config: PipelineConfig }
|
||||
* Response: { success: true }
|
||||
*/
|
||||
|
||||
import type { Request, Response } from 'express';
|
||||
import type { PipelineService } from '../../../services/pipeline-service.js';
|
||||
import type { PipelineConfig } from '@automaker/types';
|
||||
import { getErrorMessage, logError } from '../common.js';
|
||||
|
||||
export function createSaveConfigHandler(pipelineService: PipelineService) {
|
||||
return async (req: Request, res: Response): Promise<void> => {
|
||||
try {
|
||||
const { projectPath, config } = req.body as {
|
||||
projectPath: string;
|
||||
config: PipelineConfig;
|
||||
};
|
||||
|
||||
if (!projectPath) {
|
||||
res.status(400).json({ success: false, error: 'projectPath is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
if (!config) {
|
||||
res.status(400).json({ success: false, error: 'config is required' });
|
||||
return;
|
||||
}
|
||||
|
||||
await pipelineService.savePipelineConfig(projectPath, config);
|
||||
|
||||
res.json({
|
||||
success: true,
|
||||
});
|
||||
} catch (error) {
|
||||
logError(error, 'Save pipeline config failed');
|
||||
res.status(500).json({ success: false, error: getErrorMessage(error) });
|
||||
}
|
||||
};
|
||||
}
|
||||
@@ -1,50 +0,0 @@
/**
 * POST /api/pipeline/steps/update - Update an existing pipeline step
 *
 * Updates a step in the pipeline configuration.
 *
 * Request body: { projectPath: string, stepId: string, updates: Partial<PipelineStep> }
 * Response: { success: true, step: PipelineStep }
 */

import type { Request, Response } from 'express';
import type { PipelineService } from '../../../services/pipeline-service.js';
import type { PipelineStep } from '@automaker/types';
import { getErrorMessage, logError } from '../common.js';

export function createUpdateStepHandler(pipelineService: PipelineService) {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const { projectPath, stepId, updates } = req.body as {
        projectPath: string;
        stepId: string;
        updates: Partial<Omit<PipelineStep, 'id' | 'createdAt'>>;
      };

      if (!projectPath) {
        res.status(400).json({ success: false, error: 'projectPath is required' });
        return;
      }

      if (!stepId) {
        res.status(400).json({ success: false, error: 'stepId is required' });
        return;
      }

      if (!updates || Object.keys(updates).length === 0) {
        res.status(400).json({ success: false, error: 'updates is required' });
        return;
      }

      const updatedStep = await pipelineService.updateStep(projectPath, stepId, updates);

      res.json({
        success: true,
        step: updatedStep,
      });
    } catch (error) {
      logError(error, 'Update pipeline step failed');
      res.status(500).json({ success: false, error: getErrorMessage(error) });
    }
  };
}
@@ -6,94 +6,9 @@ import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { createSuggestionsOptions } from '../../lib/sdk-options.js';
import { FeatureLoader } from '../../services/feature-loader.js';
import { getAppSpecPath } from '@automaker/platform';
import * as secureFs from '../../lib/secure-fs.js';
import type { SettingsService } from '../../services/settings-service.js';
import { getAutoLoadClaudeMdSetting } from '../../lib/settings-helpers.js';

const logger = createLogger('Suggestions');

/**
 * Extract implemented features from app_spec.txt XML content
 *
 * Note: This uses regex-based parsing which is sufficient for our controlled
 * XML structure. If more complex XML parsing is needed in the future, consider
 * using a library like 'fast-xml-parser' or 'xml2js'.
 */
function extractImplementedFeatures(specContent: string): string[] {
  const features: string[] = [];

  // Match <implemented_features>...</implemented_features> section
  const implementedMatch = specContent.match(
    /<implemented_features>([\s\S]*?)<\/implemented_features>/
  );

  if (implementedMatch) {
    const implementedSection = implementedMatch[1];

    // Extract feature names from <name>...</name> tags using matchAll
    const nameRegex = /<name>(.*?)<\/name>/g;
    const matches = implementedSection.matchAll(nameRegex);

    for (const match of matches) {
      features.push(match[1].trim());
    }
  }

  return features;
}

/**
 * Load existing context (app spec and backlog features) to avoid duplicates
 */
async function loadExistingContext(projectPath: string): Promise<string> {
  let context = '';

  // 1. Read app_spec.txt for implemented features
  try {
    const appSpecPath = getAppSpecPath(projectPath);
    const specContent = (await secureFs.readFile(appSpecPath, 'utf-8')) as string;

    if (specContent && specContent.trim().length > 0) {
      const implementedFeatures = extractImplementedFeatures(specContent);

      if (implementedFeatures.length > 0) {
        context += '\n\n=== ALREADY IMPLEMENTED FEATURES ===\n';
        context += 'These features are already implemented in the codebase:\n';
        context += implementedFeatures.map((feature) => `- ${feature}`).join('\n') + '\n';
      }
    }
  } catch (error) {
    // app_spec.txt doesn't exist or can't be read - that's okay
    logger.debug('No app_spec.txt found or error reading it:', error);
  }

  // 2. Load existing features from backlog
  try {
    const featureLoader = new FeatureLoader();
    const features = await featureLoader.getAll(projectPath);

    if (features.length > 0) {
      context += '\n\n=== EXISTING FEATURES IN BACKLOG ===\n';
      context += 'These features are already planned or in progress:\n';
      context +=
        features
          .map((feature) => {
            const status = feature.status || 'pending';
            const title = feature.title || feature.description?.substring(0, 50) || 'Untitled';
            return `- ${title} (${status})`;
          })
          .join('\n') + '\n';
    }
  } catch (error) {
    // Features directory doesn't exist or can't be read - that's okay
    logger.debug('No features found or error loading them:', error);
  }

  return context;
}

/**
 * JSON Schema for suggestions output
 */
@@ -127,8 +42,7 @@ export async function generateSuggestions(
  projectPath: string,
  suggestionType: string,
  events: EventEmitter,
  abortController: AbortController,
  settingsService?: SettingsService
  abortController: AbortController
): Promise<void> {
  const typePrompts: Record<string, string> = {
    features: 'Analyze this project and suggest new features that would add value.',
@@ -137,13 +51,8 @@ export async function generateSuggestions(
    performance: 'Analyze this project for performance issues and suggest optimizations.',
  };

  // Load existing context to avoid duplicates
  const existingContext = await loadExistingContext(projectPath);

  const prompt = `${typePrompts[suggestionType] || typePrompts.features}
${existingContext}

${existingContext ? '\nIMPORTANT: Do NOT suggest features that are already implemented or already in the backlog above. Focus on NEW ideas that complement what already exists.\n' : ''}
Look at the codebase and provide 3-5 concrete suggestions.

For each suggestion, provide:
@@ -154,20 +63,14 @@ For each suggestion, provide:

The response will be automatically formatted as structured JSON.`;

  // Don't send initial message - let the agent output speak for itself
  // The first agent message will be captured as an info entry

  // Load autoLoadClaudeMd setting
  const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
    projectPath,
    settingsService,
    '[Suggestions]'
  );
  events.emit('suggestions:event', {
    type: 'suggestions_progress',
    content: `Starting ${suggestionType} analysis...\n`,
  });

  const options = createSuggestionsOptions({
    cwd: projectPath,
    abortController,
    autoLoadClaudeMd,
    outputFormat: {
      type: 'json_schema',
      schema: suggestionsSchema,
@@ -8,19 +8,11 @@ import { validatePathParams } from '../../middleware/validate-paths.js';
import { createGenerateHandler } from './routes/generate.js';
import { createStopHandler } from './routes/stop.js';
import { createStatusHandler } from './routes/status.js';
import type { SettingsService } from '../../services/settings-service.js';

export function createSuggestionsRoutes(
  events: EventEmitter,
  settingsService?: SettingsService
): Router {
export function createSuggestionsRoutes(events: EventEmitter): Router {
  const router = Router();

  router.post(
    '/generate',
    validatePathParams('projectPath'),
    createGenerateHandler(events, settingsService)
  );
  router.post('/generate', validatePathParams('projectPath'), createGenerateHandler(events));
  router.post('/stop', createStopHandler());
  router.get('/status', createStatusHandler());
@@ -7,11 +7,10 @@ import type { EventEmitter } from '../../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { getSuggestionsStatus, setRunningState, getErrorMessage, logError } from '../common.js';
import { generateSuggestions } from '../generate-suggestions.js';
import type { SettingsService } from '../../../services/settings-service.js';

const logger = createLogger('Suggestions');

export function createGenerateHandler(events: EventEmitter, settingsService?: SettingsService) {
export function createGenerateHandler(events: EventEmitter) {
  return async (req: Request, res: Response): Promise<void> => {
    try {
      const { projectPath, suggestionType = 'features' } = req.body as {
@@ -38,7 +37,7 @@ export function createGenerateHandler(events: EventEmitter, settingsService?: Se
      setRunningState(true, abortController);

      // Start generation in background
      generateSuggestions(projectPath, suggestionType, events, abortController, settingsService)
      generateSuggestions(projectPath, suggestionType, events, abortController)
        .catch((error) => {
          logError(error, 'Generate suggestions failed (background)');
          events.emit('suggestions:event', {
@@ -111,19 +111,6 @@ export async function isGitRepo(repoPath: string): Promise<boolean> {
  }
}

/**
 * Check if a git repository has at least one commit (i.e., HEAD exists)
 * Returns false for freshly initialized repos with no commits
 */
export async function hasCommits(repoPath: string): Promise<boolean> {
  try {
    await execAsync('git rev-parse --verify HEAD', { cwd: repoPath });
    return true;
  } catch {
    return false;
  }
}

/**
 * Check if an error is ENOENT (file/path not found or spawn failed)
 * These are expected in test environments with mock paths
@@ -4,7 +4,6 @@

import { Router } from 'express';
import { validatePathParams } from '../../middleware/validate-paths.js';
import { requireValidWorktree, requireValidProject, requireGitRepoOnly } from './middleware.js';
import { createInfoHandler } from './routes/info.js';
import { createStatusHandler } from './routes/status.js';
import { createListHandler } from './routes/list.js';
@@ -39,42 +38,17 @@ export function createWorktreeRoutes(): Router {
  router.post('/list', createListHandler());
  router.post('/diffs', validatePathParams('projectPath'), createDiffsHandler());
  router.post('/file-diff', validatePathParams('projectPath', 'filePath'), createFileDiffHandler());
  router.post(
    '/merge',
    validatePathParams('projectPath'),
    requireValidProject,
    createMergeHandler()
  );
  router.post('/merge', validatePathParams('projectPath'), createMergeHandler());
  router.post('/create', validatePathParams('projectPath'), createCreateHandler());
  router.post('/delete', validatePathParams('projectPath', 'worktreePath'), createDeleteHandler());
  router.post('/create-pr', createCreatePRHandler());
  router.post('/pr-info', createPRInfoHandler());
  router.post(
    '/commit',
    validatePathParams('worktreePath'),
    requireGitRepoOnly,
    createCommitHandler()
  );
  router.post(
    '/push',
    validatePathParams('worktreePath'),
    requireValidWorktree,
    createPushHandler()
  );
  router.post(
    '/pull',
    validatePathParams('worktreePath'),
    requireValidWorktree,
    createPullHandler()
  );
  router.post('/checkout-branch', requireValidWorktree, createCheckoutBranchHandler());
  router.post(
    '/list-branches',
    validatePathParams('worktreePath'),
    requireValidWorktree,
    createListBranchesHandler()
  );
  router.post('/switch-branch', requireValidWorktree, createSwitchBranchHandler());
  router.post('/commit', validatePathParams('worktreePath'), createCommitHandler());
  router.post('/push', validatePathParams('worktreePath'), createPushHandler());
  router.post('/pull', validatePathParams('worktreePath'), createPullHandler());
  router.post('/checkout-branch', createCheckoutBranchHandler());
  router.post('/list-branches', validatePathParams('worktreePath'), createListBranchesHandler());
  router.post('/switch-branch', createSwitchBranchHandler());
  router.post('/open-in-editor', validatePathParams('worktreePath'), createOpenInEditorHandler());
  router.get('/default-editor', createGetDefaultEditorHandler());
  router.post('/init-git', validatePathParams('projectPath'), createInitGitHandler());
@@ -1,74 +0,0 @@
/**
 * Middleware for worktree route validation
 */

import type { Request, Response, NextFunction } from 'express';
import { isGitRepo, hasCommits } from './common.js';

interface ValidationOptions {
  /** Check if the path is a git repository (default: true) */
  requireGitRepo?: boolean;
  /** Check if the repository has at least one commit (default: true) */
  requireCommits?: boolean;
  /** The name of the request body field containing the path (default: 'worktreePath') */
  pathField?: 'worktreePath' | 'projectPath';
}

/**
 * Middleware factory to validate that a path is a valid git repository with commits.
 * This reduces code duplication across route handlers.
 *
 * @param options - Validation options
 * @returns Express middleware function
 */
export function requireValidGitRepo(options: ValidationOptions = {}) {
  const { requireGitRepo = true, requireCommits = true, pathField = 'worktreePath' } = options;

  return async (req: Request, res: Response, next: NextFunction): Promise<void> => {
    const repoPath = req.body[pathField] as string | undefined;

    if (!repoPath) {
      // Let the route handler deal with missing path validation
      next();
      return;
    }

    if (requireGitRepo && !(await isGitRepo(repoPath))) {
      res.status(400).json({
        success: false,
        error: 'Not a git repository',
        code: 'NOT_GIT_REPO',
      });
      return;
    }

    if (requireCommits && !(await hasCommits(repoPath))) {
      res.status(400).json({
        success: false,
        error: 'Repository has no commits yet',
        code: 'NO_COMMITS',
      });
      return;
    }

    next();
  };
}

/**
 * Middleware to validate git repo for worktreePath field
 */
export const requireValidWorktree = requireValidGitRepo({ pathField: 'worktreePath' });

/**
 * Middleware to validate git repo for projectPath field
 */
export const requireValidProject = requireValidGitRepo({ pathField: 'projectPath' });

/**
 * Middleware to validate git repo without requiring commits (for commit route)
 */
export const requireGitRepoOnly = requireValidGitRepo({
  pathField: 'worktreePath',
  requireCommits: false,
});
@@ -1,8 +1,5 @@
/**
 * POST /checkout-branch endpoint - Create and checkout a new branch
 *
 * Note: Git repository validation (isGitRepo, hasCommits) is handled by
 * the requireValidWorktree middleware in index.ts
 */

import type { Request, Response } from 'express';

@@ -1,8 +1,5 @@
/**
 * POST /commit endpoint - Commit changes in a worktree
 *
 * Note: Git repository validation (isGitRepo) is handled by
 * the requireGitRepoOnly middleware in index.ts
 */

import type { Request, Response } from 'express';
@@ -56,56 +56,32 @@ export function createCreatePRHandler() {
      }

      // Check for uncommitted changes
      console.log(`[CreatePR] Checking for uncommitted changes in: ${worktreePath}`);
      const { stdout: status } = await execAsync('git status --porcelain', {
        cwd: worktreePath,
        env: execEnv,
      });
      const hasChanges = status.trim().length > 0;
      console.log(`[CreatePR] Has uncommitted changes: ${hasChanges}`);
      if (hasChanges) {
        console.log(`[CreatePR] Changed files:\n${status}`);
      }

      // If there are changes, commit them
      let commitHash: string | null = null;
      let commitError: string | null = null;
      if (hasChanges) {
        const message = commitMessage || `Changes from ${branchName}`;
        console.log(`[CreatePR] Committing changes with message: ${message}`);

        try {
          // Stage all changes
          console.log(`[CreatePR] Running: git add -A`);
          await execAsync('git add -A', { cwd: worktreePath, env: execEnv });
        // Stage all changes
        await execAsync('git add -A', { cwd: worktreePath, env: execEnv });

          // Create commit
          console.log(`[CreatePR] Running: git commit`);
          await execAsync(`git commit -m "${message.replace(/"/g, '\\"')}"`, {
            cwd: worktreePath,
            env: execEnv,
          });
        // Create commit
        await execAsync(`git commit -m "${message.replace(/"/g, '\\"')}"`, {
          cwd: worktreePath,
          env: execEnv,
        });

          // Get commit hash
          const { stdout: hashOutput } = await execAsync('git rev-parse HEAD', {
            cwd: worktreePath,
            env: execEnv,
          });
          commitHash = hashOutput.trim().substring(0, 8);
          console.log(`[CreatePR] Commit successful: ${commitHash}`);
        } catch (commitErr: unknown) {
          const err = commitErr as { stderr?: string; message?: string };
          commitError = err.stderr || err.message || 'Commit failed';
          console.error(`[CreatePR] Commit failed: ${commitError}`);

          // Return error immediately - don't proceed with push/PR if commit fails
          res.status(500).json({
            success: false,
            error: `Failed to commit changes: ${commitError}`,
            commitError,
          });
          return;
        }
        // Get commit hash
        const { stdout: hashOutput } = await execAsync('git rev-parse HEAD', {
          cwd: worktreePath,
          env: execEnv,
        });
        commitHash = hashOutput.trim().substring(0, 8);
      }

      // Push the branch to remote
@@ -384,9 +360,8 @@ export function createCreatePRHandler() {
        success: true,
        result: {
          branch: branchName,
          committed: hasChanges && !commitError,
          committed: hasChanges,
          commitHash,
          commitError: commitError || undefined,
          pushed: true,
          prUrl,
          prNumber,
@@ -1,8 +1,5 @@
/**
 * POST /list-branches endpoint - List all local branches
 *
 * Note: Git repository validation (isGitRepo, hasCommits) is handled by
 * the requireValidWorktree middleware in index.ts
 */

import type { Request, Response } from 'express';

@@ -1,8 +1,5 @@
/**
 * POST /merge endpoint - Merge feature (merge worktree branch into main)
 *
 * Note: Git repository validation (isGitRepo, hasCommits) is handled by
 * the requireValidProject middleware in index.ts
 */

import type { Request, Response } from 'express';

@@ -1,8 +1,5 @@
/**
 * POST /pull endpoint - Pull latest changes for a worktree/branch
 *
 * Note: Git repository validation (isGitRepo, hasCommits) is handled by
 * the requireValidWorktree middleware in index.ts
 */

import type { Request, Response } from 'express';

@@ -1,8 +1,5 @@
/**
 * POST /push endpoint - Push a worktree branch to remote
 *
 * Note: Git repository validation (isGitRepo, hasCommits) is handled by
 * the requireValidWorktree middleware in index.ts
 */

import type { Request, Response } from 'express';

@@ -4,9 +4,6 @@
 * Simple branch switching.
 * If there are uncommitted changes, the switch will fail and
 * the user should commit first.
 *
 * Note: Git repository validation (isGitRepo, hasCommits) is handled by
 * the requireValidWorktree middleware in index.ts
 */

import type { Request, Response } from 'express';
@@ -16,14 +16,6 @@ import {
|
||||
import { ProviderFactory } from '../providers/provider-factory.js';
|
||||
import { createChatOptions, validateWorkingDirectory } from '../lib/sdk-options.js';
|
||||
import { PathNotAllowedError } from '@automaker/platform';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
import {
|
||||
getAutoLoadClaudeMdSetting,
|
||||
getEnableSandboxModeSetting,
|
||||
filterClaudeMdFromContext,
|
||||
getMCPServersFromSettings,
|
||||
getMCPPermissionSettings,
|
||||
} from '../lib/settings-helpers.js';
|
||||
|
||||
interface Message {
|
||||
id: string;
|
||||
@@ -38,14 +30,6 @@ interface Message {
|
||||
isError?: boolean;
|
||||
}
|
||||
|
||||
interface QueuedPrompt {
|
||||
id: string;
|
||||
message: string;
|
||||
imagePaths?: string[];
|
||||
model?: string;
|
||||
addedAt: string;
|
||||
}
|
||||
|
||||
interface Session {
|
||||
messages: Message[];
|
||||
isRunning: boolean;
|
||||
@@ -53,7 +37,6 @@ interface Session {
|
||||
workingDirectory: string;
|
||||
model?: string;
|
||||
sdkSessionId?: string; // Claude SDK session ID for conversation continuity
|
||||
promptQueue: QueuedPrompt[]; // Queue of prompts to auto-run after current task
|
||||
}
|
||||
|
||||
interface SessionMetadata {
|
||||
@@ -74,13 +57,11 @@ export class AgentService {
|
||||
private stateDir: string;
|
||||
private metadataFile: string;
|
||||
private events: EventEmitter;
|
||||
private settingsService: SettingsService | null = null;
|
||||
|
||||
constructor(dataDir: string, events: EventEmitter, settingsService?: SettingsService) {
|
||||
constructor(dataDir: string, events: EventEmitter) {
|
||||
this.stateDir = path.join(dataDir, 'agent-sessions');
|
||||
this.metadataFile = path.join(dataDir, 'sessions-metadata.json');
|
||||
this.events = events;
|
||||
this.settingsService = settingsService ?? null;
|
||||
}
|
||||
|
||||
async initialize(): Promise<void> {
|
||||
@@ -109,16 +90,12 @@ export class AgentService {
|
||||
// Validate that the working directory is allowed using centralized validation
|
||||
validateWorkingDirectory(resolvedWorkingDirectory);
|
||||
|
||||
// Load persisted queue
|
||||
const promptQueue = await this.loadQueueState(sessionId);
|
||||
|
||||
this.sessions.set(sessionId, {
|
||||
messages,
|
||||
isRunning: false,
|
||||
abortController: null,
|
||||
workingDirectory: resolvedWorkingDirectory,
|
||||
sdkSessionId: sessionMetadata?.sdkSessionId, // Load persisted SDK session ID
|
||||
promptQueue,
|
||||
});
|
||||
}
|
||||
|
||||
@@ -148,12 +125,10 @@ export class AgentService {
|
||||
}) {
|
||||
const session = this.sessions.get(sessionId);
|
||||
if (!session) {
|
||||
console.error('[AgentService] ERROR: Session not found:', sessionId);
|
||||
throw new Error(`Session ${sessionId} not found`);
|
||||
}
|
||||
|
||||
if (session.isRunning) {
|
||||
console.error('[AgentService] ERROR: Agent already running for session:', sessionId);
|
||||
throw new Error('Agent is already processing a message');
|
||||
}
|
||||
|
||||
@@ -199,11 +174,6 @@ export class AgentService {
|
||||
session.isRunning = true;
|
||||
session.abortController = new AbortController();
|
||||
|
||||
// Emit started event so UI can show thinking indicator
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'started',
|
||||
});
|
||||
|
||||
// Emit user message event
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'message',
|
||||
@@ -216,35 +186,12 @@ export class AgentService {
|
||||
// Determine the effective working directory for context loading
|
||||
const effectiveWorkDir = workingDirectory || session.workingDirectory;
|
||||
|
||||
// Load autoLoadClaudeMd setting (project setting takes precedence over global)
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
effectiveWorkDir,
|
||||
this.settingsService,
|
||||
'[AgentService]'
|
||||
);
|
||||
|
||||
// Load enableSandboxMode setting (global setting only)
|
||||
const enableSandboxMode = await getEnableSandboxModeSetting(
|
||||
this.settingsService,
|
||||
'[AgentService]'
|
||||
);
|
||||
|
||||
// Load MCP servers from settings (global setting only)
|
||||
const mcpServers = await getMCPServersFromSettings(this.settingsService, '[AgentService]');
|
||||
|
||||
// Load MCP permission settings (global setting only)
|
||||
const mcpPermissions = await getMCPPermissionSettings(this.settingsService, '[AgentService]');
|
||||
|
||||
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.)
|
||||
const contextResult = await loadContextFiles({
|
||||
const { formattedPrompt: contextFilesPrompt } = await loadContextFiles({
|
||||
projectPath: effectiveWorkDir,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
});
|
||||
|
||||
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
|
||||
// (SDK handles CLAUDE.md via settingSources), but keep other context files like CODE_QUALITY.md
|
||||
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
|
||||
|
||||
// Build combined system prompt with base prompt and context files
|
||||
const baseSystemPrompt = this.getSystemPrompt();
|
||||
const combinedSystemPrompt = contextFilesPrompt
|
||||
@@ -258,11 +205,6 @@ export class AgentService {
|
||||
sessionModel: session.model,
|
||||
systemPrompt: combinedSystemPrompt,
|
||||
abortController: session.abortController!,
|
||||
autoLoadClaudeMd,
|
||||
enableSandboxMode,
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined,
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools,
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools,
|
||||
});
|
||||
|
||||
// Extract model, maxTurns, and allowedTools from SDK options
|
||||
@@ -273,22 +215,21 @@ export class AgentService {
|
||||
// Get provider for this model
|
||||
const provider = ProviderFactory.getProviderForModel(effectiveModel);
|
||||
|
||||
console.log(
|
||||
`[AgentService] Using provider "${provider.getName()}" for model "${effectiveModel}"`
|
||||
);
|
||||
|
||||
// Build options for provider
|
||||
const options: ExecuteOptions = {
|
||||
prompt: '', // Will be set below based on images
|
||||
model: effectiveModel,
|
||||
cwd: effectiveWorkDir,
|
||||
systemPrompt: sdkOptions.systemPrompt,
|
||||
systemPrompt: combinedSystemPrompt,
|
||||
maxTurns: maxTurns,
|
||||
allowedTools: allowedTools,
|
||||
abortController: session.abortController!,
|
||||
conversationHistory: conversationHistory.length > 0 ? conversationHistory : undefined,
|
||||
settingSources: sdkOptions.settingSources,
|
||||
sandbox: sdkOptions.sandbox, // Pass sandbox configuration
|
||||
sdkSessionId: session.sdkSessionId, // Pass SDK session ID for resuming
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined, // Pass MCP servers configuration
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools, // Pass MCP auto-approve setting
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools, // Pass MCP unrestricted tools setting
|
||||
};
|
||||
|
||||
// Build prompt content with images
|
||||
@@ -313,6 +254,7 @@ export class AgentService {
|
||||
// Capture SDK session ID from any message and persist it
|
||||
if (msg.session_id && !session.sdkSessionId) {
|
||||
session.sdkSessionId = msg.session_id;
|
||||
console.log(`[AgentService] Captured SDK session ID: ${msg.session_id}`);
|
||||
// Persist the SDK session ID to ensure conversation continuity across server restarts
|
||||
await this.updateSession(sessionId, { sdkSessionId: msg.session_id });
|
||||
}
|
||||
@@ -377,9 +319,6 @@ export class AgentService {
|
||||
session.isRunning = false;
|
||||
session.abortController = null;
|
||||
|
||||
// Process next item in queue after completion
|
||||
setImmediate(() => this.processNextInQueue(sessionId));
|
||||
|
||||
return {
|
||||
success: true,
|
||||
message: currentAssistantMessage,
|
||||
@@ -618,165 +557,6 @@ export class AgentService {
|
||||
return true;
|
||||
}
|
||||
|
||||
// Queue management methods
|
||||
|
||||
/**
|
||||
* Add a prompt to the queue for later execution
|
||||
*/
|
||||
async addToQueue(
|
||||
sessionId: string,
|
||||
prompt: { message: string; imagePaths?: string[]; model?: string }
|
||||
): Promise<{ success: boolean; queuedPrompt?: QueuedPrompt; error?: string }> {
|
||||
const session = this.sessions.get(sessionId);
|
||||
if (!session) {
|
||||
return { success: false, error: 'Session not found' };
|
||||
}
|
||||
|
||||
const queuedPrompt: QueuedPrompt = {
|
||||
id: this.generateId(),
|
||||
message: prompt.message,
|
||||
imagePaths: prompt.imagePaths,
|
||||
model: prompt.model,
|
||||
addedAt: new Date().toISOString(),
|
||||
};
|
||||
|
||||
session.promptQueue.push(queuedPrompt);
|
||||
await this.saveQueueState(sessionId, session.promptQueue);
|
||||
|
||||
// Emit queue update event
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'queue_updated',
|
||||
queue: session.promptQueue,
|
||||
});
|
||||
|
||||
return { success: true, queuedPrompt };
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the current queue for a session
|
||||
*/
|
||||
getQueue(sessionId: string): { success: boolean; queue?: QueuedPrompt[]; error?: string } {
|
||||
const session = this.sessions.get(sessionId);
|
||||
if (!session) {
|
||||
return { success: false, error: 'Session not found' };
|
||||
}
|
||||
return { success: true, queue: session.promptQueue };
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove a specific prompt from the queue
|
||||
*/
|
||||
async removeFromQueue(
|
||||
sessionId: string,
|
||||
promptId: string
|
||||
): Promise<{ success: boolean; error?: string }> {
|
||||
const session = this.sessions.get(sessionId);
|
||||
if (!session) {
|
||||
return { success: false, error: 'Session not found' };
|
||||
}
|
||||
|
||||
const index = session.promptQueue.findIndex((p) => p.id === promptId);
|
||||
if (index === -1) {
|
||||
return { success: false, error: 'Prompt not found in queue' };
|
||||
}
|
||||
|
||||
session.promptQueue.splice(index, 1);
|
||||
await this.saveQueueState(sessionId, session.promptQueue);
|
||||
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'queue_updated',
|
||||
queue: session.promptQueue,
|
||||
});
|
||||
|
||||
return { success: true };
|
||||
}
|
||||
|
||||
/**
|
||||
* Clear all prompts from the queue
|
||||
*/
|
||||
async clearQueue(sessionId: string): Promise<{ success: boolean; error?: string }> {
|
||||
const session = this.sessions.get(sessionId);
|
||||
if (!session) {
|
||||
return { success: false, error: 'Session not found' };
|
||||
}
|
||||
|
||||
session.promptQueue = [];
|
||||
await this.saveQueueState(sessionId, []);
|
||||
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'queue_updated',
|
||||
queue: [],
|
||||
});
|
||||
|
||||
return { success: true };
|
||||
}
|
||||
|
||||
/**
|
||||
* Save queue state to disk for persistence
|
||||
*/
|
||||
private async saveQueueState(sessionId: string, queue: QueuedPrompt[]): Promise<void> {
|
||||
const queueFile = path.join(this.stateDir, `${sessionId}-queue.json`);
|
||||
try {
|
||||
await secureFs.writeFile(queueFile, JSON.stringify(queue, null, 2), 'utf-8');
|
||||
} catch (error) {
|
||||
console.error('[AgentService] Failed to save queue state:', error);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Load queue state from disk
|
||||
*/
|
||||
private async loadQueueState(sessionId: string): Promise<QueuedPrompt[]> {
|
||||
const queueFile = path.join(this.stateDir, `${sessionId}-queue.json`);
|
||||
try {
|
||||
const data = (await secureFs.readFile(queueFile, 'utf-8')) as string;
|
||||
return JSON.parse(data);
|
||||
} catch {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Process the next item in the queue (called after task completion)
|
||||
*/
|
||||
private async processNextInQueue(sessionId: string): Promise<void> {
|
||||
const session = this.sessions.get(sessionId);
|
||||
if (!session || session.promptQueue.length === 0) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Don't process if already running
|
||||
if (session.isRunning) {
|
||||
return;
|
||||
}
|
||||
|
||||
const nextPrompt = session.promptQueue.shift();
|
||||
if (!nextPrompt) return;
|
||||
|
||||
await this.saveQueueState(sessionId, session.promptQueue);
|
||||
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'queue_updated',
|
||||
queue: session.promptQueue,
|
||||
});
|
||||
|
||||
try {
|
||||
await this.sendMessage({
|
||||
sessionId,
|
||||
message: nextPrompt.message,
|
||||
imagePaths: nextPrompt.imagePaths,
|
||||
model: nextPrompt.model,
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('[AgentService] Failed to process queued prompt:', error);
|
||||
this.emitAgentEvent(sessionId, {
|
||||
type: 'queue_error',
|
||||
error: (error as Error).message,
|
||||
promptId: nextPrompt.id,
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
private emitAgentEvent(sessionId: string, data: Record<string, unknown>): void {
|
||||
this.events.emit('agent:stream', { sessionId, ...data });
|
||||
}
|
||||
|
||||
@@ -10,7 +10,7 @@
|
||||
*/
|
||||
|
||||
import { ProviderFactory } from '../providers/provider-factory.js';
|
||||
import type { ExecuteOptions, Feature, PipelineConfig, PipelineStep } from '@automaker/types';
|
||||
import type { ExecuteOptions, Feature } from '@automaker/types';
|
||||
import {
|
||||
buildPromptWithImages,
|
||||
isAbortError,
|
||||
@@ -25,21 +25,8 @@ import { promisify } from 'util';
|
||||
import path from 'path';
|
||||
import * as secureFs from '../lib/secure-fs.js';
|
||||
import type { EventEmitter } from '../lib/events.js';
|
||||
import {
|
||||
createAutoModeOptions,
|
||||
createCustomOptions,
|
||||
validateWorkingDirectory,
|
||||
} from '../lib/sdk-options.js';
|
||||
import { createAutoModeOptions, validateWorkingDirectory } from '../lib/sdk-options.js';
|
||||
import { FeatureLoader } from './feature-loader.js';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
import { pipelineService, PipelineService } from './pipeline-service.js';
|
||||
import {
|
||||
getAutoLoadClaudeMdSetting,
|
||||
getEnableSandboxModeSetting,
|
||||
filterClaudeMdFromContext,
|
||||
getMCPServersFromSettings,
|
||||
getMCPPermissionSettings,
|
||||
} from '../lib/settings-helpers.js';
|
||||
|
||||
const execAsync = promisify(exec);
|
||||
|
||||
@@ -354,11 +341,9 @@ export class AutoModeService {
|
||||
private autoLoopAbortController: AbortController | null = null;
|
||||
private config: AutoModeConfig | null = null;
|
||||
private pendingApprovals = new Map<string, PendingApproval>();
|
||||
private settingsService: SettingsService | null = null;
|
||||
|
||||
constructor(events: EventEmitter, settingsService?: SettingsService) {
|
||||
constructor(events: EventEmitter) {
|
||||
this.events = events;
|
||||
this.settingsService = settingsService ?? null;
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -566,25 +551,14 @@ export class AutoModeService {
|
||||
// Update feature status to in_progress
|
||||
await this.updateFeatureStatus(projectPath, featureId, 'in_progress');
|
||||
|
||||
// Load autoLoadClaudeMd setting to determine context loading strategy
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
this.settingsService,
|
||||
'[AutoMode]'
|
||||
);
|
||||
|
||||
// Build the prompt - use continuation prompt if provided (for recovery after plan approval)
|
||||
let prompt: string;
|
||||
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) - passed as system prompt
|
||||
const contextResult = await loadContextFiles({
|
||||
const { formattedPrompt: contextFilesPrompt } = await loadContextFiles({
|
||||
projectPath,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
});
|
||||
|
||||
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
|
||||
// (SDK handles CLAUDE.md via settingSources), but keep other context files like CODE_QUALITY.md
|
||||
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
|
||||
|
||||
if (options?.continuationPrompt) {
|
||||
// Continuation prompt is used when recovering from a plan approval
|
||||
// The plan was already approved, so skip the planning phase
|
||||
@@ -630,27 +604,9 @@ export class AutoModeService {
|
||||
planningMode: feature.planningMode,
|
||||
requirePlanApproval: feature.requirePlanApproval,
|
||||
systemPrompt: contextFilesPrompt || undefined,
|
||||
autoLoadClaudeMd,
|
||||
}
|
||||
);
|
||||
|
||||
// Check for pipeline steps and execute them
|
||||
const pipelineConfig = await pipelineService.getPipelineConfig(projectPath);
|
||||
const sortedSteps = [...(pipelineConfig?.steps || [])].sort((a, b) => a.order - b.order);
|
||||
|
||||
if (sortedSteps.length > 0) {
|
||||
// Execute pipeline steps sequentially
|
||||
await this.executePipelineSteps(
|
||||
projectPath,
|
||||
featureId,
|
||||
feature,
|
||||
sortedSteps,
|
||||
workDir,
|
||||
abortController,
|
||||
autoLoadClaudeMd
|
||||
);
|
||||
}
|
||||
|
||||
// Determine final status based on testing mode:
|
||||
// - skipTests=false (automated testing): go directly to 'verified' (no manual verify needed)
|
||||
// - skipTests=true (manual verification): go to 'waiting_approval' for manual review
|
||||
@@ -694,143 +650,6 @@ export class AutoModeService {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Execute pipeline steps sequentially after initial feature implementation
|
||||
*/
|
||||
private async executePipelineSteps(
|
||||
projectPath: string,
|
||||
featureId: string,
|
||||
feature: Feature,
|
||||
steps: PipelineStep[],
|
||||
workDir: string,
|
||||
abortController: AbortController,
|
||||
autoLoadClaudeMd: boolean
|
||||
): Promise<void> {
|
||||
console.log(`[AutoMode] Executing ${steps.length} pipeline step(s) for feature ${featureId}`);
|
||||
|
||||
// Load context files once
|
||||
const contextResult = await loadContextFiles({
|
||||
projectPath,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
});
|
||||
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
|
||||
|
||||
// Load previous agent output for context continuity
|
||||
const featureDir = getFeatureDir(projectPath, featureId);
|
||||
const contextPath = path.join(featureDir, 'agent-output.md');
|
||||
let previousContext = '';
|
||||
try {
|
||||
previousContext = (await secureFs.readFile(contextPath, 'utf-8')) as string;
|
||||
} catch {
|
||||
// No previous context
|
||||
}
|
||||
|
||||
for (let i = 0; i < steps.length; i++) {
|
||||
const step = steps[i];
|
||||
const pipelineStatus = `pipeline_${step.id}`;
|
||||
|
||||
// Update feature status to current pipeline step
|
||||
await this.updateFeatureStatus(projectPath, featureId, pipelineStatus);
|
||||
|
||||
this.emitAutoModeEvent('auto_mode_progress', {
|
||||
featureId,
|
||||
content: `Starting pipeline step ${i + 1}/${steps.length}: ${step.name}`,
|
||||
projectPath,
|
||||
});
|
||||
|
||||
this.emitAutoModeEvent('pipeline_step_started', {
|
||||
featureId,
|
||||
stepId: step.id,
|
||||
stepName: step.name,
|
||||
stepIndex: i,
|
||||
totalSteps: steps.length,
|
||||
projectPath,
|
||||
});
|
||||
|
||||
// Build prompt for this pipeline step
|
||||
const prompt = this.buildPipelineStepPrompt(step, feature, previousContext);
|
||||
|
||||
// Get model from feature
|
||||
const model = resolveModelString(feature.model, DEFAULT_MODELS.claude);
|
||||
|
||||
// Run the agent for this pipeline step
|
||||
await this.runAgent(
|
||||
workDir,
|
||||
featureId,
|
||||
prompt,
|
||||
abortController,
|
||||
projectPath,
|
||||
undefined, // no images for pipeline steps
|
||||
model,
|
||||
{
|
||||
projectPath,
|
||||
planningMode: 'skip', // Pipeline steps don't need planning
|
||||
requirePlanApproval: false,
|
||||
previousContent: previousContext,
|
||||
systemPrompt: contextFilesPrompt || undefined,
|
||||
autoLoadClaudeMd,
|
||||
}
|
||||
);
|
||||
|
||||
// Load updated context for next step
|
||||
try {
|
||||
previousContext = (await secureFs.readFile(contextPath, 'utf-8')) as string;
|
||||
} catch {
|
||||
// No context update
|
||||
}
|
||||
|
||||
this.emitAutoModeEvent('pipeline_step_complete', {
|
||||
featureId,
|
||||
stepId: step.id,
|
||||
stepName: step.name,
|
||||
stepIndex: i,
|
||||
totalSteps: steps.length,
|
||||
projectPath,
|
||||
});
|
||||
|
||||
console.log(
|
||||
`[AutoMode] Pipeline step ${i + 1}/${steps.length} (${step.name}) completed for feature ${featureId}`
|
||||
);
|
||||
}
|
||||
|
||||
console.log(`[AutoMode] All pipeline steps completed for feature ${featureId}`);
|
||||
}
|
||||
|
||||
/**
|
||||
* Build the prompt for a pipeline step
|
||||
*/
|
||||
private buildPipelineStepPrompt(
|
||||
step: PipelineStep,
|
||||
feature: Feature,
|
||||
previousContext: string
|
||||
): string {
|
||||
let prompt = `## Pipeline Step: ${step.name}
|
||||
|
||||
This is an automated pipeline step following the initial feature implementation.
|
||||
|
||||
### Feature Context
|
||||
${this.buildFeaturePrompt(feature)}
|
||||
|
||||
`;
|
||||
|
||||
if (previousContext) {
|
||||
prompt += `### Previous Work
|
||||
The following is the output from the previous work on this feature:
|
||||
|
||||
${previousContext}
|
||||
|
||||
`;
|
||||
}
|
||||
|
||||
prompt += `### Pipeline Step Instructions
|
||||
${step.instructions}
|
||||
|
||||
### Task
|
||||
Complete the pipeline step instructions above. Review the previous work and apply the required changes or actions.`;
|
||||
|
||||
return prompt;
|
||||
}
|
||||
|
||||
/**
|
||||
* Stop a specific feature
|
||||
*/
|
||||
@@ -927,23 +746,12 @@ Complete the pipeline step instructions above. Review the previous work and appl
|
||||
// No previous context
|
||||
}
|
||||
|
||||
// Load autoLoadClaudeMd setting to determine context loading strategy
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
this.settingsService,
|
||||
'[AutoMode]'
|
||||
);
|
||||
|
||||
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) - passed as system prompt
|
||||
const contextResult = await loadContextFiles({
|
||||
const { formattedPrompt: contextFilesPrompt } = await loadContextFiles({
|
||||
projectPath,
|
||||
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
|
||||
});
|
||||
|
||||
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
|
||||
// (SDK handles CLAUDE.md via settingSources), but keep other context files like CODE_QUALITY.md
|
||||
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
|
||||
|
||||
// Build complete prompt with feature info, previous context, and follow-up instructions
|
||||
let fullPrompt = `## Follow-up on Feature Implementation
|
||||
|
||||
@@ -1071,7 +879,6 @@ Address the follow-up instructions above. Review the previous work and make the
|
||||
planningMode: 'skip', // Follow-ups don't require approval
|
||||
previousContent: previousContext || undefined,
|
||||
systemPrompt: contextFilesPrompt || undefined,
|
||||
autoLoadClaudeMd,
|
||||
}
|
||||
);
|
||||
|
||||
@@ -1258,6 +1065,11 @@ Address the follow-up instructions above. Review the previous work and make the
|
||||
* Analyze project to gather context
|
||||
*/
|
||||
async analyzeProject(projectPath: string): Promise<void> {
|
||||
// Validate project path before proceeding
|
||||
// This is called here because analyzeProject builds ExecuteOptions directly
|
||||
// without using a factory function from sdk-options.ts
|
||||
validateWorkingDirectory(projectPath);
|
||||
|
||||
const abortController = new AbortController();
|
||||
|
||||
const analysisFeatureId = `analysis-${Date.now()}`;
|
||||
@@ -1285,32 +1097,13 @@ Format your response as a structured markdown document.`;
|
||||
const analysisModel = resolveModelString(undefined, DEFAULT_MODELS.claude);
|
||||
const provider = ProviderFactory.getProviderForModel(analysisModel);
|
||||
|
||||
// Load autoLoadClaudeMd setting
|
||||
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
|
||||
projectPath,
|
||||
this.settingsService,
|
||||
'[AutoMode]'
|
||||
);
|
||||
|
||||
// Use createCustomOptions for centralized SDK configuration with CLAUDE.md support
|
||||
const sdkOptions = createCustomOptions({
|
||||
cwd: projectPath,
|
||||
model: analysisModel,
|
||||
maxTurns: 5,
|
||||
allowedTools: ['Read', 'Glob', 'Grep'],
|
||||
abortController,
|
||||
autoLoadClaudeMd,
|
||||
});
|
||||
|
||||
const options: ExecuteOptions = {
|
||||
prompt,
|
||||
model: sdkOptions.model ?? analysisModel,
|
||||
cwd: sdkOptions.cwd ?? projectPath,
|
||||
maxTurns: sdkOptions.maxTurns,
|
||||
allowedTools: sdkOptions.allowedTools as string[],
|
||||
model: analysisModel,
|
||||
maxTurns: 5,
|
||||
cwd: projectPath,
|
||||
allowedTools: ['Read', 'Glob', 'Grep'],
|
||||
abortController,
|
||||
settingSources: sdkOptions.settingSources,
|
||||
sandbox: sdkOptions.sandbox, // Pass sandbox configuration
|
||||
};
|
||||
|
||||
const stream = provider.executeQuery(options);
|
||||
@@ -1915,7 +1708,6 @@ This helps parse your summary correctly in the output logs.`;
|
||||
requirePlanApproval?: boolean;
|
||||
previousContent?: string;
|
||||
systemPrompt?: string;
|
||||
autoLoadClaudeMd?: boolean;
|
||||
}
|
||||
): Promise<void> {
|
||||
const finalProjectPath = options?.projectPath || projectPath;
|
||||
@@ -1988,32 +1780,11 @@ This mock response was generated because AUTOMAKER_MOCK_AGENT=true was set.
|
||||
return;
|
||||
}
|
||||
|
||||
// Load autoLoadClaudeMd setting (project setting takes precedence over global)
|
||||
// Use provided value if available, otherwise load from settings
|
||||
const autoLoadClaudeMd =
|
||||
options?.autoLoadClaudeMd !== undefined
|
||||
? options.autoLoadClaudeMd
|
||||
: await getAutoLoadClaudeMdSetting(finalProjectPath, this.settingsService, '[AutoMode]');
|
||||
|
||||
// Load enableSandboxMode setting (global setting only)
|
||||
const enableSandboxMode = await getEnableSandboxModeSetting(this.settingsService, '[AutoMode]');
|
||||
|
||||
// Load MCP servers from settings (global setting only)
|
||||
const mcpServers = await getMCPServersFromSettings(this.settingsService, '[AutoMode]');
|
||||
|
||||
// Load MCP permission settings (global setting only)
|
||||
const mcpPermissions = await getMCPPermissionSettings(this.settingsService, '[AutoMode]');
|
||||
|
||||
// Build SDK options using centralized configuration for feature implementation
|
||||
const sdkOptions = createAutoModeOptions({
|
||||
cwd: workDir,
|
||||
model: model,
|
||||
abortController,
|
||||
autoLoadClaudeMd,
|
||||
enableSandboxMode,
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined,
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools,
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools,
|
||||
});
|
||||
|
||||
// Extract model, maxTurns, and allowedTools from SDK options
|
||||
@@ -2052,12 +1823,7 @@ This mock response was generated because AUTOMAKER_MOCK_AGENT=true was set.
|
||||
cwd: workDir,
|
||||
allowedTools: allowedTools,
|
||||
abortController,
|
||||
systemPrompt: sdkOptions.systemPrompt,
|
||||
settingSources: sdkOptions.settingSources,
|
||||
sandbox: sdkOptions.sandbox, // Pass sandbox configuration
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined, // Pass MCP servers configuration
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools, // Pass MCP auto-approve setting
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools, // Pass MCP unrestricted tools setting
|
||||
systemPrompt: options?.systemPrompt,
|
||||
};
|
||||
|
||||
// Execute via provider
|
||||
@@ -2285,9 +2051,6 @@ After generating the revised spec, output:
|
||||
cwd: workDir,
|
||||
allowedTools: allowedTools,
|
||||
abortController,
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined,
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools,
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools,
|
||||
});
|
||||
|
||||
let revisionText = '';
|
||||
@@ -2425,9 +2188,6 @@ After generating the revised spec, output:
|
||||
cwd: workDir,
|
||||
allowedTools: allowedTools,
|
||||
abortController,
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined,
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools,
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools,
|
||||
});
|
||||
|
||||
let taskOutput = '';
|
||||
@@ -2517,9 +2277,6 @@ Implement all the changes described in the plan above.`;
|
||||
cwd: workDir,
|
||||
allowedTools: allowedTools,
|
||||
abortController,
|
||||
mcpServers: Object.keys(mcpServers).length > 0 ? mcpServers : undefined,
|
||||
mcpAutoApproveTools: mcpPermissions.mcpAutoApproveTools,
|
||||
mcpUnrestrictedTools: mcpPermissions.mcpUnrestrictedTools,
|
||||
});
|
||||
|
||||
for await (const msg of continuationStream) {
|
||||
|
||||
@@ -1,208 +0,0 @@
|
||||
/**
|
||||
* MCP Test Service
|
||||
*
|
||||
* Provides functionality to test MCP server connections and list available tools.
|
||||
* Supports stdio, SSE, and HTTP transport types.
|
||||
*/
|
||||
|
||||
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
|
||||
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
|
||||
import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';
|
||||
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';
|
||||
import type { MCPServerConfig, MCPToolInfo } from '@automaker/types';
|
||||
import type { SettingsService } from './settings-service.js';
|
||||
|
||||
const DEFAULT_TIMEOUT = 10000; // 10 seconds
|
||||
|
||||
export interface MCPTestResult {
|
||||
success: boolean;
|
||||
tools?: MCPToolInfo[];
|
||||
error?: string;
|
||||
connectionTime?: number;
|
||||
serverInfo?: {
|
||||
name?: string;
|
||||
version?: string;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* MCP Test Service for testing server connections and listing tools
|
||||
*/
|
||||
export class MCPTestService {
|
||||
private settingsService: SettingsService;
|
||||
|
||||
constructor(settingsService: SettingsService) {
|
||||
this.settingsService = settingsService;
|
||||
}
|
||||
|
||||
/**
|
||||
* Test connection to an MCP server and list its tools
|
||||
*/
|
||||
async testServer(serverConfig: MCPServerConfig): Promise<MCPTestResult> {
|
||||
const startTime = Date.now();
|
||||
let client: Client | null = null;
|
||||
|
||||
try {
|
||||
client = new Client({
|
||||
name: 'automaker-mcp-test',
|
||||
version: '1.0.0',
|
||||
});
|
||||
|
||||
// Create transport based on server type
|
||||
const transport = await this.createTransport(serverConfig);
|
||||
|
||||
// Connect with timeout
|
||||
await Promise.race([
|
||||
client.connect(transport),
|
||||
this.timeout(DEFAULT_TIMEOUT, 'Connection timeout'),
|
||||
]);
|
||||
|
||||
// List tools with timeout
|
||||
const toolsResult = await Promise.race([
|
||||
client.listTools(),
|
||||
this.timeout<{
|
||||
tools: Array<{
|
||||
name: string;
|
||||
description?: string;
|
||||
inputSchema?: Record<string, unknown>;
|
||||
}>;
|
||||
}>(DEFAULT_TIMEOUT, 'List tools timeout'),
|
||||
]);
|
||||
|
||||
const connectionTime = Date.now() - startTime;
|
||||
|
||||
// Convert tools to MCPToolInfo format
|
||||
const tools: MCPToolInfo[] = (toolsResult.tools || []).map(
|
||||
(tool: { name: string; description?: string; inputSchema?: Record<string, unknown> }) => ({
|
||||
name: tool.name,
|
||||
description: tool.description,
|
||||
inputSchema: tool.inputSchema,
|
||||
enabled: true,
|
||||
})
|
||||
);
|
||||
|
||||
return {
|
||||
success: true,
|
||||
tools,
|
||||
connectionTime,
|
||||
serverInfo: {
|
||||
name: serverConfig.name,
|
||||
version: undefined, // Could be extracted from server info if available
|
||||
},
|
||||
};
|
||||
} catch (error) {
|
||||
const connectionTime = Date.now() - startTime;
|
||||
return {
|
||||
success: false,
|
||||
error: this.getErrorMessage(error),
|
||||
connectionTime,
|
||||
};
|
||||
} finally {
|
||||
// Clean up client connection
|
||||
if (client) {
|
||||
try {
|
||||
await client.close();
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Test server by ID (looks up config from settings)
|
||||
*/
|
||||
async testServerById(serverId: string): Promise<MCPTestResult> {
|
||||
try {
|
||||
const globalSettings = await this.settingsService.getGlobalSettings();
|
||||
const serverConfig = globalSettings.mcpServers?.find((s) => s.id === serverId);
|
||||
|
||||
if (!serverConfig) {
|
||||
return {
|
||||
success: false,
|
||||
error: `Server with ID "${serverId}" not found`,
|
||||
};
|
||||
}
|
||||
|
||||
return this.testServer(serverConfig);
|
||||
} catch (error) {
|
||||
return {
|
||||
success: false,
|
||||
error: this.getErrorMessage(error),
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Create appropriate transport based on server type
|
||||
*/
|
||||
private async createTransport(
|
||||
config: MCPServerConfig
|
||||
): Promise<StdioClientTransport | SSEClientTransport | StreamableHTTPClientTransport> {
|
||||
if (config.type === 'sse') {
|
||||
if (!config.url) {
|
||||
throw new Error('URL is required for SSE transport');
|
||||
}
|
||||
// Use eventSourceInit workaround for SSE headers (SDK bug workaround)
|
||||
// See: https://github.com/modelcontextprotocol/typescript-sdk/issues/436
|
||||
const headers = config.headers;
|
||||
return new SSEClientTransport(new URL(config.url), {
|
||||
requestInit: headers ? { headers } : undefined,
|
||||
eventSourceInit: headers
|
||||
? {
|
||||
fetch: (url: string | URL | Request, init?: RequestInit) => {
|
||||
const fetchHeaders = new Headers(init?.headers || {});
|
||||
for (const [key, value] of Object.entries(headers)) {
|
||||
fetchHeaders.set(key, value);
|
||||
}
|
||||
return fetch(url, { ...init, headers: fetchHeaders });
|
||||
},
|
||||
}
|
||||
: undefined,
|
||||
});
|
||||
}
|
||||
|
||||
if (config.type === 'http') {
|
||||
if (!config.url) {
|
||||
throw new Error('URL is required for HTTP transport');
|
||||
}
|
||||
return new StreamableHTTPClientTransport(new URL(config.url), {
|
||||
requestInit: config.headers
|
||||
? {
|
||||
headers: config.headers,
|
||||
}
|
||||
: undefined,
|
||||
});
|
||||
}
|
||||
|
||||
// Default to stdio
|
||||
if (!config.command) {
|
||||
throw new Error('Command is required for stdio transport');
|
||||
}
|
||||
|
||||
return new StdioClientTransport({
|
||||
command: config.command,
|
||||
args: config.args,
|
||||
env: config.env,
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Create a timeout promise
|
||||
*/
|
||||
private timeout<T>(ms: number, message: string): Promise<T> {
|
||||
return new Promise((_, reject) => {
|
||||
setTimeout(() => reject(new Error(message)), ms);
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Extract error message from unknown error
|
||||
*/
|
||||
private getErrorMessage(error: unknown): string {
|
||||
if (error instanceof Error) {
|
||||
return error.message;
|
||||
}
|
||||
return String(error);
|
||||
}
|
||||
}
|
||||
@@ -1,320 +0,0 @@
/**
 * Pipeline Service - Handles reading/writing pipeline configuration
 *
 * Provides persistent storage for:
 * - Pipeline configuration ({projectPath}/.automaker/pipeline.json)
 */

import path from 'path';
import { createLogger } from '@automaker/utils';
import * as secureFs from '../lib/secure-fs.js';
import { ensureAutomakerDir } from '@automaker/platform';
import type { PipelineConfig, PipelineStep, FeatureStatusWithPipeline } from '@automaker/types';

const logger = createLogger('PipelineService');

// Default empty pipeline config
const DEFAULT_PIPELINE_CONFIG: PipelineConfig = {
  version: 1,
  steps: [],
};

/**
 * Atomic file write - write to temp file then rename
 */
async function atomicWriteJson(filePath: string, data: unknown): Promise<void> {
  const tempPath = `${filePath}.tmp.${Date.now()}`;
  const content = JSON.stringify(data, null, 2);

  try {
    await secureFs.writeFile(tempPath, content, 'utf-8');
    await secureFs.rename(tempPath, filePath);
  } catch (error) {
    // Clean up temp file if it exists
    try {
      await secureFs.unlink(tempPath);
    } catch {
      // Ignore cleanup errors
    }
    throw error;
  }
}

/**
 * Safely read JSON file with fallback to default
 */
async function readJsonFile<T>(filePath: string, defaultValue: T): Promise<T> {
  try {
    const content = (await secureFs.readFile(filePath, 'utf-8')) as string;
    return JSON.parse(content) as T;
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
      return defaultValue;
    }
    logger.error(`Error reading ${filePath}:`, error);
    return defaultValue;
  }
}

/**
 * Generate a unique ID for pipeline steps
 */
function generateStepId(): string {
  return `step_${Date.now().toString(36)}_${Math.random().toString(36).substring(2, 8)}`;
}

/**
 * Get the pipeline config file path for a project
 */
function getPipelineConfigPath(projectPath: string): string {
  return path.join(projectPath, '.automaker', 'pipeline.json');
}

/**
 * PipelineService - Manages pipeline configuration for workflow automation
 *
 * Handles reading and writing pipeline config to JSON files with atomic operations.
 * Pipeline steps define custom columns that appear between "in_progress" and
 * "waiting_approval/verified" columns in the kanban board.
 */
export class PipelineService {
  /**
   * Get pipeline configuration for a project
   *
   * @param projectPath - Absolute path to the project
   * @returns Promise resolving to PipelineConfig (empty steps array if no config exists)
   */
  async getPipelineConfig(projectPath: string): Promise<PipelineConfig> {
    const configPath = getPipelineConfigPath(projectPath);
    const config = await readJsonFile<PipelineConfig>(configPath, DEFAULT_PIPELINE_CONFIG);

    // Ensure version is set
    return {
      ...DEFAULT_PIPELINE_CONFIG,
      ...config,
    };
  }

  /**
   * Save entire pipeline configuration
   *
   * @param projectPath - Absolute path to the project
   * @param config - Complete PipelineConfig to save
   */
  async savePipelineConfig(projectPath: string, config: PipelineConfig): Promise<void> {
    await ensureAutomakerDir(projectPath);
    const configPath = getPipelineConfigPath(projectPath);
    await atomicWriteJson(configPath, config);
    logger.info(`Pipeline config saved for project: ${projectPath}`);
  }

  /**
   * Add a new pipeline step
   *
   * @param projectPath - Absolute path to the project
   * @param step - Step data (without id, createdAt, updatedAt)
   * @returns Promise resolving to the created PipelineStep
   */
  async addStep(
    projectPath: string,
    step: Omit<PipelineStep, 'id' | 'createdAt' | 'updatedAt'>
  ): Promise<PipelineStep> {
    const config = await this.getPipelineConfig(projectPath);
    const now = new Date().toISOString();

    const newStep: PipelineStep = {
      ...step,
      id: generateStepId(),
      createdAt: now,
      updatedAt: now,
    };

    config.steps.push(newStep);

    // Normalize order values
    config.steps.sort((a, b) => a.order - b.order);
    config.steps.forEach((s, index) => {
      s.order = index;
    });

    await this.savePipelineConfig(projectPath, config);
    logger.info(`Pipeline step added: ${newStep.name} (${newStep.id})`);

    return newStep;
  }

  /**
   * Update an existing pipeline step
   *
   * @param projectPath - Absolute path to the project
   * @param stepId - ID of the step to update
   * @param updates - Partial step data to merge
   */
  async updateStep(
    projectPath: string,
    stepId: string,
    updates: Partial<Omit<PipelineStep, 'id' | 'createdAt'>>
  ): Promise<PipelineStep> {
    const config = await this.getPipelineConfig(projectPath);
    const stepIndex = config.steps.findIndex((s) => s.id === stepId);

    if (stepIndex === -1) {
      throw new Error(`Pipeline step not found: ${stepId}`);
    }

    config.steps[stepIndex] = {
      ...config.steps[stepIndex],
      ...updates,
      updatedAt: new Date().toISOString(),
    };

    await this.savePipelineConfig(projectPath, config);
    logger.info(`Pipeline step updated: ${stepId}`);

    return config.steps[stepIndex];
  }

  /**
   * Delete a pipeline step
   *
   * @param projectPath - Absolute path to the project
   * @param stepId - ID of the step to delete
   */
  async deleteStep(projectPath: string, stepId: string): Promise<void> {
    const config = await this.getPipelineConfig(projectPath);
    const stepIndex = config.steps.findIndex((s) => s.id === stepId);

    if (stepIndex === -1) {
      throw new Error(`Pipeline step not found: ${stepId}`);
    }

    config.steps.splice(stepIndex, 1);

    // Normalize order values after deletion
    config.steps.forEach((s, index) => {
      s.order = index;
    });

    await this.savePipelineConfig(projectPath, config);
    logger.info(`Pipeline step deleted: ${stepId}`);
  }

  /**
   * Reorder pipeline steps
   *
   * @param projectPath - Absolute path to the project
   * @param stepIds - Array of step IDs in the desired order
   */
  async reorderSteps(projectPath: string, stepIds: string[]): Promise<void> {
    const config = await this.getPipelineConfig(projectPath);

    // Validate all step IDs exist
    const existingIds = new Set(config.steps.map((s) => s.id));
    for (const id of stepIds) {
      if (!existingIds.has(id)) {
        throw new Error(`Pipeline step not found: ${id}`);
      }
    }

    // Create a map for quick lookup
    const stepMap = new Map(config.steps.map((s) => [s.id, s]));

    // Reorder steps based on stepIds array
    config.steps = stepIds.map((id, index) => {
      const step = stepMap.get(id)!;
      return { ...step, order: index, updatedAt: new Date().toISOString() };
    });

    await this.savePipelineConfig(projectPath, config);
    logger.info(`Pipeline steps reordered`);
  }

  /**
   * Get the next status in the pipeline flow
   *
   * Determines what status a feature should transition to based on current status.
   * Flow: in_progress -> pipeline_step_0 -> pipeline_step_1 -> ... -> final status
   *
   * @param currentStatus - Current feature status
   * @param config - Pipeline configuration (or null if no pipeline)
   * @param skipTests - Whether to skip tests (affects final status)
   * @returns The next status in the pipeline flow
   */
  getNextStatus(
    currentStatus: FeatureStatusWithPipeline,
    config: PipelineConfig | null,
    skipTests: boolean
  ): FeatureStatusWithPipeline {
    const steps = config?.steps || [];

    // Sort steps by order
    const sortedSteps = [...steps].sort((a, b) => a.order - b.order);

    // If no pipeline steps, use original logic
    if (sortedSteps.length === 0) {
      if (currentStatus === 'in_progress') {
        return skipTests ? 'waiting_approval' : 'verified';
      }
      return currentStatus;
    }

    // Coming from in_progress -> go to first pipeline step
    if (currentStatus === 'in_progress') {
      return `pipeline_${sortedSteps[0].id}`;
    }

    // Coming from a pipeline step -> go to next step or final status
    if (currentStatus.startsWith('pipeline_')) {
      const currentStepId = currentStatus.replace('pipeline_', '');
      const currentIndex = sortedSteps.findIndex((s) => s.id === currentStepId);

      if (currentIndex === -1) {
        // Step not found, go to final status
        return skipTests ? 'waiting_approval' : 'verified';
      }

      if (currentIndex < sortedSteps.length - 1) {
        // Go to next step
        return `pipeline_${sortedSteps[currentIndex + 1].id}`;
      }

      // Last step completed, go to final status
      return skipTests ? 'waiting_approval' : 'verified';
    }

    // For other statuses, don't change
    return currentStatus;
  }

  /**
   * Get a specific pipeline step by ID
   *
   * @param projectPath - Absolute path to the project
   * @param stepId - ID of the step to retrieve
   * @returns The pipeline step or null if not found
   */
  async getStep(projectPath: string, stepId: string): Promise<PipelineStep | null> {
    const config = await this.getPipelineConfig(projectPath);
    return config.steps.find((s) => s.id === stepId) || null;
  }

  /**
   * Check if a status is a pipeline status
   */
  isPipelineStatus(status: FeatureStatusWithPipeline): boolean {
    return status.startsWith('pipeline_');
  }

  /**
   * Extract step ID from a pipeline status
   */
  getStepIdFromStatus(status: FeatureStatusWithPipeline): string | null {
    if (!this.isPipelineStatus(status)) {
      return null;
    }
    return status.replace('pipeline_', '');
  }
}

// Export singleton instance
export const pipelineService = new PipelineService();
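For orientation, here is a small sketch of how `getNextStatus()` walks a feature through two configured steps. The step objects are illustrative values shaped like the `PipelineStep` records above, not real project data, and the cast to `PipelineConfig` is only to keep the sketch compact.

```typescript
// Illustrative only: status flow through a two-step pipeline, mirroring getNextStatus() above.
const demoConfig = {
  version: 1,
  steps: [
    { id: 'review', name: 'Review', order: 0, instructions: '...', colorClass: 'blue', createdAt: '', updatedAt: '' },
    { id: 'qa', name: 'QA', order: 1, instructions: '...', colorClass: 'green', createdAt: '', updatedAt: '' },
  ],
} as PipelineConfig;

const svc = new PipelineService();
svc.getNextStatus('in_progress', demoConfig, false);     // 'pipeline_review'
svc.getNextStatus('pipeline_review', demoConfig, false); // 'pipeline_qa'
svc.getNextStatus('pipeline_qa', demoConfig, false);     // 'verified' (last step, tests not skipped)
svc.getNextStatus('pipeline_qa', demoConfig, true);      // 'waiting_approval' (last step, tests skipped)
```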
@@ -179,7 +179,7 @@ describe('sdk-options.ts', () => {
    it('should create options with chat settings', async () => {
      const { createChatOptions, TOOL_PRESETS, MAX_TURNS } = await import('@/lib/sdk-options.js');

      const options = createChatOptions({ cwd: '/test/path', enableSandboxMode: true });
      const options = createChatOptions({ cwd: '/test/path' });

      expect(options.cwd).toBe('/test/path');
      expect(options.maxTurns).toBe(MAX_TURNS.standard);
@@ -212,27 +212,6 @@ describe('sdk-options.ts', () => {

      expect(options.model).toBe('claude-sonnet-4-20250514');
    });

    it('should not set sandbox when enableSandboxMode is false', async () => {
      const { createChatOptions } = await import('@/lib/sdk-options.js');

      const options = createChatOptions({
        cwd: '/test/path',
        enableSandboxMode: false,
      });

      expect(options.sandbox).toBeUndefined();
    });

    it('should not set sandbox when enableSandboxMode is not provided', async () => {
      const { createChatOptions } = await import('@/lib/sdk-options.js');

      const options = createChatOptions({
        cwd: '/test/path',
      });

      expect(options.sandbox).toBeUndefined();
    });
  });

  describe('createAutoModeOptions', () => {
@@ -240,7 +219,7 @@ describe('sdk-options.ts', () => {
      const { createAutoModeOptions, TOOL_PRESETS, MAX_TURNS } =
        await import('@/lib/sdk-options.js');

      const options = createAutoModeOptions({ cwd: '/test/path', enableSandboxMode: true });
      const options = createAutoModeOptions({ cwd: '/test/path' });

      expect(options.cwd).toBe('/test/path');
      expect(options.maxTurns).toBe(MAX_TURNS.maximum);
@@ -273,27 +252,6 @@ describe('sdk-options.ts', () => {

      expect(options.abortController).toBe(abortController);
    });

    it('should not set sandbox when enableSandboxMode is false', async () => {
      const { createAutoModeOptions } = await import('@/lib/sdk-options.js');

      const options = createAutoModeOptions({
        cwd: '/test/path',
        enableSandboxMode: false,
      });

      expect(options.sandbox).toBeUndefined();
    });

    it('should not set sandbox when enableSandboxMode is not provided', async () => {
      const { createAutoModeOptions } = await import('@/lib/sdk-options.js');

      const options = createAutoModeOptions({
        cwd: '/test/path',
      });

      expect(options.sandbox).toBeUndefined();
    });
  });

  describe('createCustomOptions', () => {
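The deleted sandbox tests above hinge on an options factory that only sets a `sandbox` field when the caller opts in, which is why `expect(options.sandbox).toBeUndefined()` holds when the flag is false or absent. A minimal sketch of that conditional-spread shape, using hypothetical types rather than the real sdk-options implementation:

```typescript
// Sketch only: the field exists when the flag is true and is absent otherwise.
interface DemoOptions {
  cwd: string;
  sandbox?: { enabled: boolean };
}

function buildOptions(params: { cwd: string; enableSandboxMode?: boolean }): DemoOptions {
  return {
    cwd: params.cwd,
    // Conditional spread: adds the key only when the flag is explicitly true.
    ...(params.enableSandboxMode ? { sandbox: { enabled: true } } : {}),
  };
}

buildOptions({ cwd: '/test/path' }).sandbox;                           // undefined
buildOptions({ cwd: '/test/path', enableSandboxMode: true }).sandbox;  // { enabled: true }
```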
@@ -1,365 +0,0 @@
|
||||
import { describe, it, expect, vi, beforeEach } from 'vitest';
|
||||
import { getMCPServersFromSettings, getMCPPermissionSettings } from '@/lib/settings-helpers.js';
|
||||
import type { SettingsService } from '@/services/settings-service.js';
|
||||
|
||||
describe('settings-helpers.ts', () => {
|
||||
describe('getMCPServersFromSettings', () => {
|
||||
beforeEach(() => {
|
||||
vi.spyOn(console, 'log').mockImplementation(() => {});
|
||||
vi.spyOn(console, 'error').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
it('should return empty object when settingsService is null', async () => {
|
||||
const result = await getMCPServersFromSettings(null);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should return empty object when settingsService is undefined', async () => {
|
||||
const result = await getMCPServersFromSettings(undefined);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should return empty object when no MCP servers configured', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({ mcpServers: [] }),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should return empty object when mcpServers is undefined', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should convert enabled stdio server to SDK format', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'test-server',
|
||||
type: 'stdio',
|
||||
command: 'node',
|
||||
args: ['server.js'],
|
||||
env: { NODE_ENV: 'test' },
|
||||
enabled: true,
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({
|
||||
'test-server': {
|
||||
type: 'stdio',
|
||||
command: 'node',
|
||||
args: ['server.js'],
|
||||
env: { NODE_ENV: 'test' },
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should convert enabled SSE server to SDK format', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'sse-server',
|
||||
type: 'sse',
|
||||
url: 'http://localhost:3000/sse',
|
||||
headers: { Authorization: 'Bearer token' },
|
||||
enabled: true,
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({
|
||||
'sse-server': {
|
||||
type: 'sse',
|
||||
url: 'http://localhost:3000/sse',
|
||||
headers: { Authorization: 'Bearer token' },
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should convert enabled HTTP server to SDK format', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'http-server',
|
||||
type: 'http',
|
||||
url: 'http://localhost:3000/api',
|
||||
headers: { 'X-API-Key': 'secret' },
|
||||
enabled: true,
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({
|
||||
'http-server': {
|
||||
type: 'http',
|
||||
url: 'http://localhost:3000/api',
|
||||
headers: { 'X-API-Key': 'secret' },
|
||||
},
|
||||
});
|
||||
});
|
||||
|
||||
it('should filter out disabled servers', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'enabled-server',
|
||||
type: 'stdio',
|
||||
command: 'node',
|
||||
enabled: true,
|
||||
},
|
||||
{
|
||||
id: '2',
|
||||
name: 'disabled-server',
|
||||
type: 'stdio',
|
||||
command: 'python',
|
||||
enabled: false,
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(Object.keys(result)).toHaveLength(1);
|
||||
expect(result['enabled-server']).toBeDefined();
|
||||
expect(result['disabled-server']).toBeUndefined();
|
||||
});
|
||||
|
||||
it('should treat servers without enabled field as enabled', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'implicit-enabled',
|
||||
type: 'stdio',
|
||||
command: 'node',
|
||||
// enabled field not set
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result['implicit-enabled']).toBeDefined();
|
||||
});
|
||||
|
||||
it('should handle multiple enabled servers', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{ id: '1', name: 'server1', type: 'stdio', command: 'node', enabled: true },
|
||||
{ id: '2', name: 'server2', type: 'stdio', command: 'python', enabled: true },
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(Object.keys(result)).toHaveLength(2);
|
||||
expect(result['server1']).toBeDefined();
|
||||
expect(result['server2']).toBeDefined();
|
||||
});
|
||||
|
||||
it('should return empty object and log error on exception', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockRejectedValue(new Error('Settings error')),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService, '[Test]');
|
||||
expect(result).toEqual({});
|
||||
expect(console.error).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should throw error for SSE server without URL', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'bad-sse',
|
||||
type: 'sse',
|
||||
enabled: true,
|
||||
// url missing
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
// The error is caught and logged, returns empty
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should throw error for HTTP server without URL', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'bad-http',
|
||||
type: 'http',
|
||||
enabled: true,
|
||||
// url missing
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should throw error for stdio server without command', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'bad-stdio',
|
||||
type: 'stdio',
|
||||
enabled: true,
|
||||
// command missing
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result).toEqual({});
|
||||
});
|
||||
|
||||
it('should default to stdio type when type is not specified', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpServers: [
|
||||
{
|
||||
id: '1',
|
||||
name: 'no-type',
|
||||
command: 'node',
|
||||
enabled: true,
|
||||
// type not specified, should default to stdio
|
||||
},
|
||||
],
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPServersFromSettings(mockSettingsService);
|
||||
expect(result['no-type']).toEqual({
|
||||
type: 'stdio',
|
||||
command: 'node',
|
||||
args: undefined,
|
||||
env: undefined,
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('getMCPPermissionSettings', () => {
|
||||
beforeEach(() => {
|
||||
vi.spyOn(console, 'log').mockImplementation(() => {});
|
||||
vi.spyOn(console, 'error').mockImplementation(() => {});
|
||||
});
|
||||
|
||||
it('should return defaults when settingsService is null', async () => {
|
||||
const result = await getMCPPermissionSettings(null);
|
||||
expect(result).toEqual({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return defaults when settingsService is undefined', async () => {
|
||||
const result = await getMCPPermissionSettings(undefined);
|
||||
expect(result).toEqual({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return settings from service', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpAutoApproveTools: false,
|
||||
mcpUnrestrictedTools: false,
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPPermissionSettings(mockSettingsService);
|
||||
expect(result).toEqual({
|
||||
mcpAutoApproveTools: false,
|
||||
mcpUnrestrictedTools: false,
|
||||
});
|
||||
});
|
||||
|
||||
it('should default to true when settings are undefined', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPPermissionSettings(mockSettingsService);
|
||||
expect(result).toEqual({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle mixed settings', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: false,
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPPermissionSettings(mockSettingsService);
|
||||
expect(result).toEqual({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: false,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return defaults and log error on exception', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockRejectedValue(new Error('Settings error')),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
const result = await getMCPPermissionSettings(mockSettingsService, '[Test]');
|
||||
expect(result).toEqual({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: true,
|
||||
});
|
||||
expect(console.error).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should use custom log prefix', async () => {
|
||||
const mockSettingsService = {
|
||||
getGlobalSettings: vi.fn().mockResolvedValue({
|
||||
mcpAutoApproveTools: true,
|
||||
mcpUnrestrictedTools: true,
|
||||
}),
|
||||
} as unknown as SettingsService;
|
||||
|
||||
await getMCPPermissionSettings(mockSettingsService, '[CustomPrefix]');
|
||||
expect(console.log).toHaveBeenCalledWith(expect.stringContaining('[CustomPrefix]'));
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -1,307 +0,0 @@
|
||||
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';
|
||||
import {
|
||||
writeValidation,
|
||||
readValidation,
|
||||
getAllValidations,
|
||||
deleteValidation,
|
||||
isValidationStale,
|
||||
getValidationWithFreshness,
|
||||
markValidationViewed,
|
||||
getUnviewedValidationsCount,
|
||||
type StoredValidation,
|
||||
} from '@/lib/validation-storage.js';
|
||||
import fs from 'fs/promises';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
|
||||
describe('validation-storage.ts', () => {
|
||||
let testProjectPath: string;
|
||||
|
||||
beforeEach(async () => {
|
||||
testProjectPath = path.join(os.tmpdir(), `validation-storage-test-${Date.now()}`);
|
||||
await fs.mkdir(testProjectPath, { recursive: true });
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
try {
|
||||
await fs.rm(testProjectPath, { recursive: true, force: true });
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
});
|
||||
|
||||
const createMockValidation = (overrides: Partial<StoredValidation> = {}): StoredValidation => ({
|
||||
issueNumber: 123,
|
||||
issueTitle: 'Test Issue',
|
||||
validatedAt: new Date().toISOString(),
|
||||
model: 'haiku',
|
||||
result: {
|
||||
verdict: 'valid',
|
||||
confidence: 'high',
|
||||
reasoning: 'Test reasoning',
|
||||
},
|
||||
...overrides,
|
||||
});
|
||||
|
||||
describe('writeValidation', () => {
|
||||
it('should write validation to storage', async () => {
|
||||
const validation = createMockValidation();
|
||||
|
||||
await writeValidation(testProjectPath, 123, validation);
|
||||
|
||||
// Verify file was created
|
||||
const validationPath = path.join(
|
||||
testProjectPath,
|
||||
'.automaker',
|
||||
'validations',
|
||||
'123',
|
||||
'validation.json'
|
||||
);
|
||||
const content = await fs.readFile(validationPath, 'utf-8');
|
||||
expect(JSON.parse(content)).toEqual(validation);
|
||||
});
|
||||
|
||||
it('should create nested directories if they do not exist', async () => {
|
||||
const validation = createMockValidation({ issueNumber: 456 });
|
||||
|
||||
await writeValidation(testProjectPath, 456, validation);
|
||||
|
||||
const validationPath = path.join(
|
||||
testProjectPath,
|
||||
'.automaker',
|
||||
'validations',
|
||||
'456',
|
||||
'validation.json'
|
||||
);
|
||||
const content = await fs.readFile(validationPath, 'utf-8');
|
||||
expect(JSON.parse(content)).toEqual(validation);
|
||||
});
|
||||
});
|
||||
|
||||
describe('readValidation', () => {
|
||||
it('should read validation from storage', async () => {
|
||||
const validation = createMockValidation();
|
||||
await writeValidation(testProjectPath, 123, validation);
|
||||
|
||||
const result = await readValidation(testProjectPath, 123);
|
||||
|
||||
expect(result).toEqual(validation);
|
||||
});
|
||||
|
||||
it('should return null when validation does not exist', async () => {
|
||||
const result = await readValidation(testProjectPath, 999);
|
||||
|
||||
expect(result).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('getAllValidations', () => {
|
||||
it('should return all validations for a project', async () => {
|
||||
const validation1 = createMockValidation({ issueNumber: 1, issueTitle: 'Issue 1' });
|
||||
const validation2 = createMockValidation({ issueNumber: 2, issueTitle: 'Issue 2' });
|
||||
const validation3 = createMockValidation({ issueNumber: 3, issueTitle: 'Issue 3' });
|
||||
|
||||
await writeValidation(testProjectPath, 1, validation1);
|
||||
await writeValidation(testProjectPath, 2, validation2);
|
||||
await writeValidation(testProjectPath, 3, validation3);
|
||||
|
||||
const result = await getAllValidations(testProjectPath);
|
||||
|
||||
expect(result).toHaveLength(3);
|
||||
expect(result[0]).toEqual(validation1);
|
||||
expect(result[1]).toEqual(validation2);
|
||||
expect(result[2]).toEqual(validation3);
|
||||
});
|
||||
|
||||
it('should return empty array when no validations exist', async () => {
|
||||
const result = await getAllValidations(testProjectPath);
|
||||
|
||||
expect(result).toEqual([]);
|
||||
});
|
||||
|
||||
it('should skip non-numeric directories', async () => {
|
||||
const validation = createMockValidation({ issueNumber: 1 });
|
||||
await writeValidation(testProjectPath, 1, validation);
|
||||
|
||||
// Create a non-numeric directory
|
||||
const invalidDir = path.join(testProjectPath, '.automaker', 'validations', 'invalid');
|
||||
await fs.mkdir(invalidDir, { recursive: true });
|
||||
|
||||
const result = await getAllValidations(testProjectPath);
|
||||
|
||||
expect(result).toHaveLength(1);
|
||||
expect(result[0]).toEqual(validation);
|
||||
});
|
||||
});
|
||||
|
||||
describe('deleteValidation', () => {
|
||||
it('should delete validation from storage', async () => {
|
||||
const validation = createMockValidation();
|
||||
await writeValidation(testProjectPath, 123, validation);
|
||||
|
||||
const result = await deleteValidation(testProjectPath, 123);
|
||||
|
||||
expect(result).toBe(true);
|
||||
|
||||
const readResult = await readValidation(testProjectPath, 123);
|
||||
expect(readResult).toBeNull();
|
||||
});
|
||||
|
||||
it('should return true even when validation does not exist', async () => {
|
||||
const result = await deleteValidation(testProjectPath, 999);
|
||||
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
});
|
||||
|
||||
describe('isValidationStale', () => {
|
||||
it('should return false for recent validation', () => {
|
||||
const validation = createMockValidation({
|
||||
validatedAt: new Date().toISOString(),
|
||||
});
|
||||
|
||||
const result = isValidationStale(validation);
|
||||
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
|
||||
it('should return true for validation older than 24 hours', () => {
|
||||
const oldDate = new Date();
|
||||
oldDate.setHours(oldDate.getHours() - 25); // 25 hours ago
|
||||
|
||||
const validation = createMockValidation({
|
||||
validatedAt: oldDate.toISOString(),
|
||||
});
|
||||
|
||||
const result = isValidationStale(validation);
|
||||
|
||||
expect(result).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false for validation exactly at 24 hours', () => {
|
||||
const exactDate = new Date();
|
||||
exactDate.setHours(exactDate.getHours() - 24);
|
||||
|
||||
const validation = createMockValidation({
|
||||
validatedAt: exactDate.toISOString(),
|
||||
});
|
||||
|
||||
const result = isValidationStale(validation);
|
||||
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getValidationWithFreshness', () => {
|
||||
it('should return validation with isStale false for recent validation', async () => {
|
||||
const validation = createMockValidation({
|
||||
validatedAt: new Date().toISOString(),
|
||||
});
|
||||
await writeValidation(testProjectPath, 123, validation);
|
||||
|
||||
const result = await getValidationWithFreshness(testProjectPath, 123);
|
||||
|
||||
expect(result).not.toBeNull();
|
||||
expect(result!.validation).toEqual(validation);
|
||||
expect(result!.isStale).toBe(false);
|
||||
});
|
||||
|
||||
it('should return validation with isStale true for old validation', async () => {
|
||||
const oldDate = new Date();
|
||||
oldDate.setHours(oldDate.getHours() - 25);
|
||||
|
||||
const validation = createMockValidation({
|
||||
validatedAt: oldDate.toISOString(),
|
||||
});
|
||||
await writeValidation(testProjectPath, 123, validation);
|
||||
|
||||
const result = await getValidationWithFreshness(testProjectPath, 123);
|
||||
|
||||
expect(result).not.toBeNull();
|
||||
expect(result!.isStale).toBe(true);
|
||||
});
|
||||
|
||||
it('should return null when validation does not exist', async () => {
|
||||
const result = await getValidationWithFreshness(testProjectPath, 999);
|
||||
|
||||
expect(result).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('markValidationViewed', () => {
|
||||
it('should mark validation as viewed', async () => {
|
||||
const validation = createMockValidation();
|
||||
await writeValidation(testProjectPath, 123, validation);
|
||||
|
||||
const result = await markValidationViewed(testProjectPath, 123);
|
||||
|
||||
expect(result).toBe(true);
|
||||
|
||||
const updated = await readValidation(testProjectPath, 123);
|
||||
expect(updated).not.toBeNull();
|
||||
expect(updated!.viewedAt).toBeDefined();
|
||||
});
|
||||
|
||||
it('should return false when validation does not exist', async () => {
|
||||
const result = await markValidationViewed(testProjectPath, 999);
|
||||
|
||||
expect(result).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getUnviewedValidationsCount', () => {
|
||||
it('should return count of unviewed non-stale validations', async () => {
|
||||
const validation1 = createMockValidation({ issueNumber: 1 });
|
||||
const validation2 = createMockValidation({ issueNumber: 2 });
|
||||
const validation3 = createMockValidation({
|
||||
issueNumber: 3,
|
||||
viewedAt: new Date().toISOString(),
|
||||
});
|
||||
|
||||
await writeValidation(testProjectPath, 1, validation1);
|
||||
await writeValidation(testProjectPath, 2, validation2);
|
||||
await writeValidation(testProjectPath, 3, validation3);
|
||||
|
||||
const result = await getUnviewedValidationsCount(testProjectPath);
|
||||
|
||||
expect(result).toBe(2);
|
||||
});
|
||||
|
||||
it('should not count stale validations', async () => {
|
||||
const oldDate = new Date();
|
||||
oldDate.setHours(oldDate.getHours() - 25);
|
||||
|
||||
const validation1 = createMockValidation({ issueNumber: 1 });
|
||||
const validation2 = createMockValidation({
|
||||
issueNumber: 2,
|
||||
validatedAt: oldDate.toISOString(),
|
||||
});
|
||||
|
||||
await writeValidation(testProjectPath, 1, validation1);
|
||||
await writeValidation(testProjectPath, 2, validation2);
|
||||
|
||||
const result = await getUnviewedValidationsCount(testProjectPath);
|
||||
|
||||
expect(result).toBe(1);
|
||||
});
|
||||
|
||||
it('should return 0 when no validations exist', async () => {
|
||||
const result = await getUnviewedValidationsCount(testProjectPath);
|
||||
|
||||
expect(result).toBe(0);
|
||||
});
|
||||
|
||||
it('should return 0 when all validations are viewed', async () => {
|
||||
const validation = createMockValidation({
|
||||
issueNumber: 1,
|
||||
viewedAt: new Date().toISOString(),
|
||||
});
|
||||
|
||||
await writeValidation(testProjectPath, 1, validation);
|
||||
|
||||
const result = await getUnviewedValidationsCount(testProjectPath);
|
||||
|
||||
expect(result).toBe(0);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -73,7 +73,7 @@ describe('claude-provider.ts', () => {
          maxTurns: 10,
          cwd: '/test/dir',
          allowedTools: ['Read', 'Write'],
          permissionMode: 'default',
          permissionMode: 'acceptEdits',
        }),
      });
    });
@@ -100,7 +100,7 @@ describe('claude-provider.ts', () => {
      });
    });

    it('should pass sandbox configuration when provided', async () => {
    it('should enable sandbox by default', async () => {
      vi.mocked(sdk.query).mockReturnValue(
        (async function* () {
          yield { type: 'text', text: 'test' };
@@ -110,10 +110,6 @@ describe('claude-provider.ts', () => {
      const generator = provider.executeQuery({
        prompt: 'Test',
        cwd: '/test',
        sandbox: {
          enabled: true,
          autoAllowBashIfSandboxed: true,
        },
      });

      await collectAsyncGenerator(generator);
@@ -246,21 +242,11 @@ describe('claude-provider.ts', () => {
      });

      await expect(collectAsyncGenerator(generator)).rejects.toThrow('SDK execution failed');

      // Should log error message
      expect(consoleErrorSpy).toHaveBeenNthCalledWith(
        1,
        '[ClaudeProvider] ERROR: executeQuery() error during execution:',
      expect(consoleErrorSpy).toHaveBeenCalledWith(
        '[ClaudeProvider] executeQuery() error during execution:',
        testError
      );

      // Should log stack trace
      expect(consoleErrorSpy).toHaveBeenNthCalledWith(
        2,
        '[ClaudeProvider] ERROR stack:',
        testError.stack
      );

      consoleErrorSpy.mockRestore();
    });
  });
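The assertion rewrite in the hunk above swaps `toHaveBeenNthCalledWith` for `toHaveBeenCalledWith`; the practical difference is whether the call's position in the sequence matters. A small, self-contained Vitest illustration, independent of the provider code:

```typescript
import { expect, it, vi } from 'vitest';

it('nth-call vs any-call matchers', () => {
  const spy = vi.fn();
  spy('first');
  spy('second');

  // Passes only if the 2nd call had exactly these arguments.
  expect(spy).toHaveBeenNthCalledWith(2, 'second');
  // Passes if any call had these arguments, regardless of order or extra calls.
  expect(spy).toHaveBeenCalledWith('first');
});
```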
@@ -1,499 +0,0 @@
|
||||
import { describe, it, expect, vi, beforeEach } from 'vitest';
|
||||
import type { Request, Response } from 'express';
|
||||
import { createGetConfigHandler } from '@/routes/pipeline/routes/get-config.js';
|
||||
import { createSaveConfigHandler } from '@/routes/pipeline/routes/save-config.js';
|
||||
import { createAddStepHandler } from '@/routes/pipeline/routes/add-step.js';
|
||||
import { createUpdateStepHandler } from '@/routes/pipeline/routes/update-step.js';
|
||||
import { createDeleteStepHandler } from '@/routes/pipeline/routes/delete-step.js';
|
||||
import { createReorderStepsHandler } from '@/routes/pipeline/routes/reorder-steps.js';
|
||||
import type { PipelineService } from '@/services/pipeline-service.js';
|
||||
import type { PipelineConfig, PipelineStep } from '@automaker/types';
|
||||
import { createMockExpressContext } from '../../utils/mocks.js';
|
||||
|
||||
describe('pipeline routes', () => {
|
||||
let mockPipelineService: PipelineService;
|
||||
let req: Request;
|
||||
let res: Response;
|
||||
|
||||
beforeEach(() => {
|
||||
vi.clearAllMocks();
|
||||
|
||||
mockPipelineService = {
|
||||
getPipelineConfig: vi.fn(),
|
||||
savePipelineConfig: vi.fn(),
|
||||
addStep: vi.fn(),
|
||||
updateStep: vi.fn(),
|
||||
deleteStep: vi.fn(),
|
||||
reorderSteps: vi.fn(),
|
||||
} as any;
|
||||
|
||||
const context = createMockExpressContext();
|
||||
req = context.req;
|
||||
res = context.res;
|
||||
});
|
||||
|
||||
describe('get-config', () => {
|
||||
it('should return pipeline config successfully', async () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
vi.mocked(mockPipelineService.getPipelineConfig).mockResolvedValue(config);
|
||||
req.body = { projectPath: '/test/project' };
|
||||
|
||||
const handler = createGetConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(mockPipelineService.getPipelineConfig).toHaveBeenCalledWith('/test/project');
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: true,
|
||||
config,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if projectPath is missing', async () => {
|
||||
req.body = {};
|
||||
|
||||
const handler = createGetConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'projectPath is required',
|
||||
});
|
||||
expect(mockPipelineService.getPipelineConfig).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should handle errors gracefully', async () => {
|
||||
const error = new Error('Read failed');
|
||||
vi.mocked(mockPipelineService.getPipelineConfig).mockRejectedValue(error);
|
||||
req.body = { projectPath: '/test/project' };
|
||||
|
||||
const handler = createGetConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(500);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'Read failed',
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('save-config', () => {
|
||||
it('should save pipeline config successfully', async () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(mockPipelineService.savePipelineConfig).mockResolvedValue(undefined);
|
||||
req.body = { projectPath: '/test/project', config };
|
||||
|
||||
const handler = createSaveConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(mockPipelineService.savePipelineConfig).toHaveBeenCalledWith('/test/project', config);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if projectPath is missing', async () => {
|
||||
req.body = { config: { version: 1, steps: [] } };
|
||||
|
||||
const handler = createSaveConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'projectPath is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if config is missing', async () => {
|
||||
req.body = { projectPath: '/test/project' };
|
||||
|
||||
const handler = createSaveConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'config is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle errors gracefully', async () => {
|
||||
const error = new Error('Save failed');
|
||||
vi.mocked(mockPipelineService.savePipelineConfig).mockRejectedValue(error);
|
||||
req.body = {
|
||||
projectPath: '/test/project',
|
||||
config: { version: 1, steps: [] },
|
||||
};
|
||||
|
||||
const handler = createSaveConfigHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(500);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'Save failed',
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('add-step', () => {
|
||||
it('should add step successfully', async () => {
|
||||
const stepData = {
|
||||
name: 'New Step',
|
||||
order: 0,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
};
|
||||
|
||||
const newStep: PipelineStep = {
|
||||
...stepData,
|
||||
id: 'step1',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
};
|
||||
|
||||
vi.mocked(mockPipelineService.addStep).mockResolvedValue(newStep);
|
||||
req.body = { projectPath: '/test/project', step: stepData };
|
||||
|
||||
const handler = createAddStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(mockPipelineService.addStep).toHaveBeenCalledWith('/test/project', stepData);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: true,
|
||||
step: newStep,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if projectPath is missing', async () => {
|
||||
req.body = { step: { name: 'Step', order: 0, instructions: 'Do', colorClass: 'blue' } };
|
||||
|
||||
const handler = createAddStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'projectPath is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if step is missing', async () => {
|
||||
req.body = { projectPath: '/test/project' };
|
||||
|
||||
const handler = createAddStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'step is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if step.name is missing', async () => {
|
||||
req.body = {
|
||||
projectPath: '/test/project',
|
||||
step: { order: 0, instructions: 'Do', colorClass: 'blue' },
|
||||
};
|
||||
|
||||
const handler = createAddStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'step.name is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if step.instructions is missing', async () => {
|
||||
req.body = {
|
||||
projectPath: '/test/project',
|
||||
step: { name: 'Step', order: 0, colorClass: 'blue' },
|
||||
};
|
||||
|
||||
const handler = createAddStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'step.instructions is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle errors gracefully', async () => {
|
||||
const error = new Error('Add failed');
|
||||
vi.mocked(mockPipelineService.addStep).mockRejectedValue(error);
|
||||
req.body = {
|
||||
projectPath: '/test/project',
|
||||
step: { name: 'Step', order: 0, instructions: 'Do', colorClass: 'blue' },
|
||||
};
|
||||
|
||||
const handler = createAddStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(500);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'Add failed',
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('update-step', () => {
|
||||
it('should update step successfully', async () => {
|
||||
const updates = {
|
||||
name: 'Updated Name',
|
||||
instructions: 'Updated instructions',
|
||||
};
|
||||
|
||||
const updatedStep: PipelineStep = {
|
||||
id: 'step1',
|
||||
name: 'Updated Name',
|
||||
order: 0,
|
||||
instructions: 'Updated instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-02T00:00:00.000Z',
|
||||
};
|
||||
|
||||
vi.mocked(mockPipelineService.updateStep).mockResolvedValue(updatedStep);
|
||||
req.body = { projectPath: '/test/project', stepId: 'step1', updates };
|
||||
|
||||
const handler = createUpdateStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(mockPipelineService.updateStep).toHaveBeenCalledWith(
|
||||
'/test/project',
|
||||
'step1',
|
||||
updates
|
||||
);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: true,
|
||||
step: updatedStep,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if projectPath is missing', async () => {
|
||||
req.body = { stepId: 'step1', updates: { name: 'New' } };
|
||||
|
||||
const handler = createUpdateStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'projectPath is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if stepId is missing', async () => {
|
||||
req.body = { projectPath: '/test/project', updates: { name: 'New' } };
|
||||
|
||||
const handler = createUpdateStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'stepId is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if updates is missing', async () => {
|
||||
req.body = { projectPath: '/test/project', stepId: 'step1' };
|
||||
|
||||
const handler = createUpdateStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'updates is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if updates is empty object', async () => {
|
||||
req.body = { projectPath: '/test/project', stepId: 'step1', updates: {} };
|
||||
|
||||
const handler = createUpdateStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'updates is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle errors gracefully', async () => {
|
||||
const error = new Error('Update failed');
|
||||
vi.mocked(mockPipelineService.updateStep).mockRejectedValue(error);
|
||||
req.body = {
|
||||
projectPath: '/test/project',
|
||||
stepId: 'step1',
|
||||
updates: { name: 'New' },
|
||||
};
|
||||
|
||||
const handler = createUpdateStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(500);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'Update failed',
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('delete-step', () => {
|
||||
it('should delete step successfully', async () => {
|
||||
vi.mocked(mockPipelineService.deleteStep).mockResolvedValue(undefined);
|
||||
req.body = { projectPath: '/test/project', stepId: 'step1' };
|
||||
|
||||
const handler = createDeleteStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(mockPipelineService.deleteStep).toHaveBeenCalledWith('/test/project', 'step1');
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if projectPath is missing', async () => {
|
||||
req.body = { stepId: 'step1' };
|
||||
|
||||
const handler = createDeleteStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'projectPath is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if stepId is missing', async () => {
|
||||
req.body = { projectPath: '/test/project' };
|
||||
|
||||
const handler = createDeleteStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'stepId is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle errors gracefully', async () => {
|
||||
const error = new Error('Delete failed');
|
||||
vi.mocked(mockPipelineService.deleteStep).mockRejectedValue(error);
|
||||
req.body = { projectPath: '/test/project', stepId: 'step1' };
|
||||
|
||||
const handler = createDeleteStepHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(500);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'Delete failed',
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('reorder-steps', () => {
|
||||
it('should reorder steps successfully', async () => {
|
||||
vi.mocked(mockPipelineService.reorderSteps).mockResolvedValue(undefined);
|
||||
req.body = { projectPath: '/test/project', stepIds: ['step2', 'step1', 'step3'] };
|
||||
|
||||
const handler = createReorderStepsHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(mockPipelineService.reorderSteps).toHaveBeenCalledWith('/test/project', [
|
||||
'step2',
|
||||
'step1',
|
||||
'step3',
|
||||
]);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: true,
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if projectPath is missing', async () => {
|
||||
req.body = { stepIds: ['step1', 'step2'] };
|
||||
|
||||
const handler = createReorderStepsHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'projectPath is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if stepIds is missing', async () => {
|
||||
req.body = { projectPath: '/test/project' };
|
||||
|
||||
const handler = createReorderStepsHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'stepIds array is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should return 400 if stepIds is not an array', async () => {
|
||||
req.body = { projectPath: '/test/project', stepIds: 'not-an-array' };
|
||||
|
||||
const handler = createReorderStepsHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(400);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'stepIds array is required',
|
||||
});
|
||||
});
|
||||
|
||||
it('should handle errors gracefully', async () => {
|
||||
const error = new Error('Reorder failed');
|
||||
vi.mocked(mockPipelineService.reorderSteps).mockRejectedValue(error);
|
||||
req.body = { projectPath: '/test/project', stepIds: ['step1', 'step2'] };
|
||||
|
||||
const handler = createReorderStepsHandler(mockPipelineService);
|
||||
await handler(req, res);
|
||||
|
||||
expect(res.status).toHaveBeenCalledWith(500);
|
||||
expect(res.json).toHaveBeenCalledWith({
|
||||
success: false,
|
||||
error: 'Reorder failed',
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -106,9 +106,9 @@ describe('agent-service.ts', () => {
      });

      expect(result.success).toBe(true);
      // First call reads session file, metadata file, and queue state file (3 calls)
      // First call reads session file and metadata file (2 calls)
      // Second call should reuse in-memory session (no additional calls)
      expect(fs.readFile).toHaveBeenCalledTimes(3);
      expect(fs.readFile).toHaveBeenCalledTimes(2);
    });
  });
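The comment change above reflects the expectation that the first lookup hits disk and later lookups reuse an in-memory copy, so a spy on `fs.readFile` sees a fixed call count. A generic sketch of that read-through cache shape (illustrative only, not the actual agent service):

```typescript
import fs from 'fs/promises';

// Sketch only: first get() reads and parses from disk, later calls return the cached value.
class SessionCache {
  private sessions = new Map<string, unknown>();

  async get(sessionPath: string): Promise<unknown> {
    const cached = this.sessions.get(sessionPath);
    if (cached !== undefined) return cached;

    const parsed = JSON.parse(await fs.readFile(sessionPath, 'utf-8'));
    this.sessions.set(sessionPath, parsed);
    return parsed;
  }
}
```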
|
||||
|
||||
|
||||
@@ -1,860 +0,0 @@
|
||||
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
|
||||
import fs from 'fs/promises';
|
||||
import path from 'path';
|
||||
import os from 'os';
|
||||
import { PipelineService } from '@/services/pipeline-service.js';
|
||||
import type { PipelineConfig, PipelineStep } from '@automaker/types';
|
||||
|
||||
// Mock secure-fs
|
||||
vi.mock('@/lib/secure-fs.js', () => ({
|
||||
readFile: vi.fn(),
|
||||
writeFile: vi.fn(),
|
||||
rename: vi.fn(),
|
||||
unlink: vi.fn(),
|
||||
}));
|
||||
|
||||
// Mock ensureAutomakerDir
|
||||
vi.mock('@automaker/platform', () => ({
|
||||
ensureAutomakerDir: vi.fn(),
|
||||
}));
|
||||
|
||||
import * as secureFs from '@/lib/secure-fs.js';
|
||||
import { ensureAutomakerDir } from '@automaker/platform';
|
||||
|
||||
describe('pipeline-service.ts', () => {
|
||||
let testProjectDir: string;
|
||||
let pipelineService: PipelineService;
|
||||
|
||||
beforeEach(async () => {
|
||||
testProjectDir = path.join(os.tmpdir(), `pipeline-test-${Date.now()}`);
|
||||
await fs.mkdir(testProjectDir, { recursive: true });
|
||||
pipelineService = new PipelineService();
|
||||
vi.clearAllMocks();
|
||||
});
|
||||
|
||||
afterEach(async () => {
|
||||
try {
|
||||
await fs.rm(testProjectDir, { recursive: true, force: true });
|
||||
} catch {
|
||||
// Ignore cleanup errors
|
||||
}
|
||||
});
|
||||
|
||||
describe('getPipelineConfig', () => {
|
||||
it('should return default config when file does not exist', async () => {
|
||||
const error = new Error('File not found') as NodeJS.ErrnoException;
|
||||
error.code = 'ENOENT';
|
||||
vi.mocked(secureFs.readFile).mockRejectedValue(error);
|
||||
|
||||
const config = await pipelineService.getPipelineConfig(testProjectDir);
|
||||
|
||||
expect(config).toEqual({
|
||||
version: 1,
|
||||
steps: [],
|
||||
});
|
||||
});
|
||||
|
||||
it('should read and return existing config', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Test Step',
|
||||
order: 0,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const configPath = path.join(testProjectDir, '.automaker', 'pipeline.json');
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
|
||||
const config = await pipelineService.getPipelineConfig(testProjectDir);
|
||||
|
||||
expect(secureFs.readFile).toHaveBeenCalledWith(configPath, 'utf-8');
|
||||
expect(config).toEqual(existingConfig);
|
||||
});
|
||||
|
||||
it('should merge with defaults for missing properties', async () => {
|
||||
const partialConfig = {
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Test Step',
|
||||
order: 0,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const configPath = path.join(testProjectDir, '.automaker', 'pipeline.json');
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(partialConfig) as any);
|
||||
|
||||
const config = await pipelineService.getPipelineConfig(testProjectDir);
|
||||
|
||||
expect(config.version).toBe(1);
|
||||
expect(config.steps).toHaveLength(1);
|
||||
});
|
||||
|
||||
it('should handle read errors gracefully', async () => {
|
||||
const error = new Error('Read error');
|
||||
vi.mocked(secureFs.readFile).mockRejectedValue(error);
|
||||
|
||||
const config = await pipelineService.getPipelineConfig(testProjectDir);
|
||||
|
||||
// Should return default config on error
|
||||
expect(config).toEqual({
|
||||
version: 1,
|
||||
steps: [],
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
describe('savePipelineConfig', () => {
|
||||
it('should save config to file', async () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Test Step',
|
||||
order: 0,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.savePipelineConfig(testProjectDir, config);
|
||||
|
||||
expect(ensureAutomakerDir).toHaveBeenCalledWith(testProjectDir);
|
||||
expect(secureFs.writeFile).toHaveBeenCalled();
|
||||
expect(secureFs.rename).toHaveBeenCalled();
|
||||
});
|
||||
|
||||
it('should use atomic write pattern', async () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.savePipelineConfig(testProjectDir, config);
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const tempPath = writeCall[0] as string;
|
||||
expect(tempPath).toContain('.tmp.');
|
||||
expect(tempPath).toContain('pipeline.json');
|
||||
});
|
||||
|
||||
it('should clean up temp file on write error', async () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockRejectedValue(new Error('Write failed'));
|
||||
vi.mocked(secureFs.unlink).mockResolvedValue(undefined);
|
||||
|
||||
await expect(pipelineService.savePipelineConfig(testProjectDir, config)).rejects.toThrow(
|
||||
'Write failed'
|
||||
);
|
||||
|
||||
expect(secureFs.unlink).toHaveBeenCalled();
|
||||
});
|
||||
});
|
||||
|
||||
describe('addStep', () => {
|
||||
it('should add a new step to config', async () => {
|
||||
const error = new Error('File not found') as NodeJS.ErrnoException;
|
||||
error.code = 'ENOENT';
|
||||
vi.mocked(secureFs.readFile).mockRejectedValue(error);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
const stepData = {
|
||||
name: 'New Step',
|
||||
order: 0,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
};
|
||||
|
||||
const newStep = await pipelineService.addStep(testProjectDir, stepData);
|
||||
|
||||
expect(newStep.name).toBe('New Step');
|
||||
expect(newStep.id).toMatch(/^step_/);
|
||||
expect(newStep.createdAt).toBeDefined();
|
||||
expect(newStep.updatedAt).toBeDefined();
|
||||
expect(newStep.createdAt).toBe(newStep.updatedAt);
|
||||
});
|
||||
|
||||
it('should normalize order values after adding step', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 5, // Out of order
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
const stepData = {
|
||||
name: 'New Step',
|
||||
order: 10, // Out of order
|
||||
instructions: 'Do something',
|
||||
colorClass: 'red',
|
||||
};
|
||||
|
||||
await pipelineService.addStep(testProjectDir, stepData);
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
expect(savedConfig.steps[0].order).toBe(0);
|
||||
expect(savedConfig.steps[1].order).toBe(1);
|
||||
});
|
||||
|
||||
it('should sort steps by order before normalizing', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 2,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 0,
|
||||
instructions: 'Do something else',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
const stepData = {
|
||||
name: 'New Step',
|
||||
order: 1,
|
||||
instructions: 'Do something',
|
||||
colorClass: 'red',
|
||||
};
|
||||
|
||||
await pipelineService.addStep(testProjectDir, stepData);
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
// Should be sorted: step2 (order 0), newStep (order 1), step1 (order 2)
|
||||
expect(savedConfig.steps[0].id).toBe('step2');
|
||||
expect(savedConfig.steps[0].order).toBe(0);
|
||||
expect(savedConfig.steps[1].order).toBe(1);
|
||||
expect(savedConfig.steps[2].id).toBe('step1');
|
||||
expect(savedConfig.steps[2].order).toBe(2);
|
||||
});
|
||||
});
|
||||
|
||||
describe('updateStep', () => {
|
||||
it('should update an existing step', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Old Name',
|
||||
order: 0,
|
||||
instructions: 'Old instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
const updates = {
|
||||
name: 'New Name',
|
||||
instructions: 'New instructions',
|
||||
};
|
||||
|
||||
const updatedStep = await pipelineService.updateStep(testProjectDir, 'step1', updates);
|
||||
|
||||
expect(updatedStep.name).toBe('New Name');
|
||||
expect(updatedStep.instructions).toBe('New instructions');
|
||||
expect(updatedStep.id).toBe('step1');
|
||||
expect(updatedStep.createdAt).toBe('2024-01-01T00:00:00.000Z');
|
||||
expect(updatedStep.updatedAt).not.toBe('2024-01-01T00:00:00.000Z');
|
||||
});
|
||||
|
||||
it('should throw error if step not found', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
|
||||
await expect(
|
||||
pipelineService.updateStep(testProjectDir, 'nonexistent', { name: 'New' })
|
||||
).rejects.toThrow('Pipeline step not found: nonexistent');
|
||||
});
|
||||
|
||||
it('should preserve createdAt when updating', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
const updatedStep = await pipelineService.updateStep(testProjectDir, 'step1', {
|
||||
name: 'Updated',
|
||||
});
|
||||
|
||||
expect(updatedStep.createdAt).toBe('2024-01-01T00:00:00.000Z');
|
||||
});
|
||||
});
|
||||
|
||||
describe('deleteStep', () => {
|
||||
it('should delete an existing step', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 1,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.deleteStep(testProjectDir, 'step1');
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
expect(savedConfig.steps).toHaveLength(1);
|
||||
expect(savedConfig.steps[0].id).toBe('step2');
|
||||
expect(savedConfig.steps[0].order).toBe(0); // Normalized
|
||||
});
|
||||
|
||||
it('should throw error if step not found', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
|
||||
await expect(pipelineService.deleteStep(testProjectDir, 'nonexistent')).rejects.toThrow(
|
||||
'Pipeline step not found: nonexistent'
|
||||
);
|
||||
});
|
||||
|
||||
it('should normalize order values after deletion', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 5, // Out of order
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step3',
|
||||
name: 'Step 3',
|
||||
order: 10, // Out of order
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'red',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.deleteStep(testProjectDir, 'step2');
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
expect(savedConfig.steps).toHaveLength(2);
|
||||
expect(savedConfig.steps[0].order).toBe(0);
|
||||
expect(savedConfig.steps[1].order).toBe(1);
|
||||
});
|
||||
});
|
||||
|
||||
describe('reorderSteps', () => {
|
||||
it('should reorder steps according to stepIds array', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 1,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step3',
|
||||
name: 'Step 3',
|
||||
order: 2,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'red',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.reorderSteps(testProjectDir, ['step3', 'step1', 'step2']);
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
expect(savedConfig.steps[0].id).toBe('step3');
|
||||
expect(savedConfig.steps[0].order).toBe(0);
|
||||
expect(savedConfig.steps[1].id).toBe('step1');
|
||||
expect(savedConfig.steps[1].order).toBe(1);
|
||||
expect(savedConfig.steps[2].id).toBe('step2');
|
||||
expect(savedConfig.steps[2].order).toBe(2);
|
||||
});
|
||||
|
||||
it('should update updatedAt timestamp for reordered steps', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 1,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.reorderSteps(testProjectDir, ['step2', 'step1']);
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
expect(savedConfig.steps[0].updatedAt).not.toBe('2024-01-01T00:00:00.000Z');
|
||||
expect(savedConfig.steps[1].updatedAt).not.toBe('2024-01-01T00:00:00.000Z');
|
||||
});
|
||||
|
||||
it('should throw error if step ID not found', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
|
||||
await expect(
|
||||
pipelineService.reorderSteps(testProjectDir, ['step1', 'nonexistent'])
|
||||
).rejects.toThrow('Pipeline step not found: nonexistent');
|
||||
});
|
||||
|
||||
it('should allow partial reordering (filtering steps)', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 1,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
vi.mocked(ensureAutomakerDir).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.writeFile).mockResolvedValue(undefined);
|
||||
vi.mocked(secureFs.rename).mockResolvedValue(undefined);
|
||||
|
||||
await pipelineService.reorderSteps(testProjectDir, ['step1']);
|
||||
|
||||
const writeCall = vi.mocked(secureFs.writeFile).mock.calls[0];
|
||||
const savedConfig = JSON.parse(writeCall[1] as string) as PipelineConfig;
|
||||
// Should only keep step1, effectively filtering out step2
|
||||
expect(savedConfig.steps).toHaveLength(1);
|
||||
expect(savedConfig.steps[0].id).toBe('step1');
|
||||
expect(savedConfig.steps[0].order).toBe(0);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getNextStatus', () => {
|
||||
it('should return waiting_approval when no pipeline and skipTests is true', () => {
|
||||
const nextStatus = pipelineService.getNextStatus('in_progress', null, true);
|
||||
expect(nextStatus).toBe('waiting_approval');
|
||||
});
|
||||
|
||||
it('should return verified when no pipeline and skipTests is false', () => {
|
||||
const nextStatus = pipelineService.getNextStatus('in_progress', null, false);
|
||||
expect(nextStatus).toBe('verified');
|
||||
});
|
||||
|
||||
it('should return first pipeline step when coming from in_progress', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const nextStatus = pipelineService.getNextStatus('in_progress', config, false);
|
||||
expect(nextStatus).toBe('pipeline_step1');
|
||||
});
|
||||
|
||||
it('should go to next pipeline step when in middle of pipeline', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 1,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const nextStatus = pipelineService.getNextStatus('pipeline_step1', config, false);
|
||||
expect(nextStatus).toBe('pipeline_step2');
|
||||
});
|
||||
|
||||
it('should go to final status when completing last pipeline step', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const nextStatus = pipelineService.getNextStatus('pipeline_step1', config, false);
|
||||
expect(nextStatus).toBe('verified');
|
||||
});
|
||||
|
||||
it('should go to waiting_approval when completing last step with skipTests', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const nextStatus = pipelineService.getNextStatus('pipeline_step1', config, true);
|
||||
expect(nextStatus).toBe('waiting_approval');
|
||||
});
|
||||
|
||||
it('should handle invalid pipeline step ID gracefully', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const nextStatus = pipelineService.getNextStatus('pipeline_nonexistent', config, false);
|
||||
expect(nextStatus).toBe('verified');
|
||||
});
|
||||
|
||||
it('should preserve other statuses unchanged', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
expect(pipelineService.getNextStatus('backlog', config, false)).toBe('backlog');
|
||||
expect(pipelineService.getNextStatus('waiting_approval', config, false)).toBe(
|
||||
'waiting_approval'
|
||||
);
|
||||
expect(pipelineService.getNextStatus('verified', config, false)).toBe('verified');
|
||||
expect(pipelineService.getNextStatus('completed', config, false)).toBe('completed');
|
||||
});
|
||||
|
||||
it('should sort steps by order when determining next status', () => {
|
||||
const config: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step2',
|
||||
name: 'Step 2',
|
||||
order: 1,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'green',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
const nextStatus = pipelineService.getNextStatus('in_progress', config, false);
|
||||
expect(nextStatus).toBe('pipeline_step1'); // Should use step1 (order 0), not step2
|
||||
});
|
||||
});
|
||||
|
||||
describe('getStep', () => {
|
||||
it('should return step by ID', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [
|
||||
{
|
||||
id: 'step1',
|
||||
name: 'Step 1',
|
||||
order: 0,
|
||||
instructions: 'Instructions',
|
||||
colorClass: 'blue',
|
||||
createdAt: '2024-01-01T00:00:00.000Z',
|
||||
updatedAt: '2024-01-01T00:00:00.000Z',
|
||||
},
|
||||
],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
|
||||
const step = await pipelineService.getStep(testProjectDir, 'step1');
|
||||
|
||||
expect(step).not.toBeNull();
|
||||
expect(step?.id).toBe('step1');
|
||||
expect(step?.name).toBe('Step 1');
|
||||
});
|
||||
|
||||
it('should return null if step not found', async () => {
|
||||
const existingConfig: PipelineConfig = {
|
||||
version: 1,
|
||||
steps: [],
|
||||
};
|
||||
|
||||
vi.mocked(secureFs.readFile).mockResolvedValue(JSON.stringify(existingConfig) as any);
|
||||
|
||||
const step = await pipelineService.getStep(testProjectDir, 'nonexistent');
|
||||
|
||||
expect(step).toBeNull();
|
||||
});
|
||||
});
|
||||
|
||||
describe('isPipelineStatus', () => {
|
||||
it('should return true for pipeline statuses', () => {
|
||||
expect(pipelineService.isPipelineStatus('pipeline_step1')).toBe(true);
|
||||
expect(pipelineService.isPipelineStatus('pipeline_abc123')).toBe(true);
|
||||
});
|
||||
|
||||
it('should return false for non-pipeline statuses', () => {
|
||||
expect(pipelineService.isPipelineStatus('in_progress')).toBe(false);
|
||||
expect(pipelineService.isPipelineStatus('waiting_approval')).toBe(false);
|
||||
expect(pipelineService.isPipelineStatus('verified')).toBe(false);
|
||||
expect(pipelineService.isPipelineStatus('backlog')).toBe(false);
|
||||
expect(pipelineService.isPipelineStatus('completed')).toBe(false);
|
||||
});
|
||||
});
|
||||
|
||||
describe('getStepIdFromStatus', () => {
|
||||
it('should extract step ID from pipeline status', () => {
|
||||
expect(pipelineService.getStepIdFromStatus('pipeline_step1')).toBe('step1');
|
||||
expect(pipelineService.getStepIdFromStatus('pipeline_abc123')).toBe('abc123');
|
||||
});
|
||||
|
||||
it('should return null for non-pipeline statuses', () => {
|
||||
expect(pipelineService.getStepIdFromStatus('in_progress')).toBeNull();
|
||||
expect(pipelineService.getStepIdFromStatus('waiting_approval')).toBeNull();
|
||||
expect(pipelineService.getStepIdFromStatus('verified')).toBeNull();
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -44,7 +44,6 @@
    "@dnd-kit/utilities": "^3.2.2",
    "@lezer/highlight": "^1.2.3",
    "@radix-ui/react-checkbox": "^1.3.3",
    "@radix-ui/react-collapsible": "^1.1.12",
    "@radix-ui/react-dialog": "^1.1.15",
    "@radix-ui/react-dropdown-menu": "^2.1.16",
    "@radix-ui/react-label": "^2.1.8",
@@ -70,6 +69,7 @@
    "cmdk": "^1.1.1",
    "dagre": "^0.8.5",
    "dotenv": "^17.2.3",
    "framer-motion": "^12.23.26",
    "geist": "^1.5.1",
    "lucide-react": "^0.562.0",
    "react": "19.2.3",
@@ -77,9 +77,9 @@
    "react-markdown": "^10.1.0",
    "react-resizable-panels": "^3.0.6",
    "rehype-raw": "^7.0.0",
    "rehype-sanitize": "^6.0.0",
    "sonner": "^2.0.7",
    "tailwind-merge": "^3.4.0",
    "usehooks-ts": "^3.1.1",
    "zustand": "^5.0.9"
  },
  "optionalDependencies": {

@@ -6,6 +6,8 @@ import { useSettingsMigration } from './hooks/use-settings-migration';
import './styles/global.css';
import './styles/theme-imports';

import { Shell } from './components/layout/shell';

export default function App() {
  const [showSplash, setShowSplash] = useState(() => {
    // Only show splash once per session
@@ -27,9 +29,9 @@ export default function App() {
  }, []);

  return (
    <>
    <Shell>
      <RouterProvider router={router} />
      {showSplash && <SplashScreen onComplete={handleSplashComplete} />}
    </>
    </Shell>
  );
}

@@ -1,5 +1,15 @@
import { useState, useEffect, useCallback } from 'react';
import { FolderOpen, Folder, ChevronRight, HardDrive, Clock, X } from 'lucide-react';
import { useState, useEffect, useRef, useCallback } from 'react';
import {
  FolderOpen,
  Folder,
  ChevronRight,
  Home,
  ArrowLeft,
  HardDrive,
  CornerDownLeft,
  Clock,
  X,
} from 'lucide-react';
import {
  Dialog,
  DialogContent,
@@ -9,11 +19,9 @@ import {
  DialogTitle,
} from '@/components/ui/dialog';
import { Button } from '@/components/ui/button';
import { PathInput } from '@/components/ui/path-input';
import { Kbd, KbdGroup } from '@/components/ui/kbd';
import { Input } from '@/components/ui/input';
import { getJSON, setJSON } from '@/lib/storage';
import { getDefaultWorkspaceDirectory, saveLastProjectDirectory } from '@/lib/workspace-config';
import { useOSDetection } from '@/hooks';

interface DirectoryEntry {
  name: string;
@@ -69,8 +77,8 @@ export function FileBrowserDialog({
  description = 'Navigate to your project folder or paste a path directly',
  initialPath,
}: FileBrowserDialogProps) {
  const { isMac } = useOSDetection();
  const [currentPath, setCurrentPath] = useState<string>('');
  const [pathInput, setPathInput] = useState<string>('');
  const [parentPath, setParentPath] = useState<string | null>(null);
  const [directories, setDirectories] = useState<DirectoryEntry[]>([]);
  const [drives, setDrives] = useState<string[]>([]);
@@ -78,6 +86,7 @@ export function FileBrowserDialog({
  const [error, setError] = useState('');
  const [warning, setWarning] = useState('');
  const [recentFolders, setRecentFolders] = useState<string[]>([]);
  const pathInputRef = useRef<HTMLInputElement>(null);

  // Load recent folders when dialog opens
  useEffect(() => {
@@ -111,6 +120,7 @@ export function FileBrowserDialog({

      if (result.success) {
        setCurrentPath(result.currentPath);
        setPathInput(result.currentPath);
        setParentPath(result.parentPath);
        setDirectories(result.directories);
        setDrives(result.drives || []);
@@ -132,10 +142,11 @@ export function FileBrowserDialog({
    [browseDirectory]
  );

  // Reset state when dialog closes
  // Reset current path when dialog closes
  useEffect(() => {
    if (!open) {
      setCurrentPath('');
      setPathInput('');
      setParentPath(null);
      setDirectories([]);
      setError('');
@@ -161,6 +172,9 @@ export function FileBrowserDialog({
        const pathToUse = initialPath || defaultDir;

        if (pathToUse) {
          // Pre-fill the path input immediately
          setPathInput(pathToUse);
          // Then browse to that directory
          browseDirectory(pathToUse);
        } else {
          // No default directory, browse home directory
@@ -169,6 +183,7 @@ export function FileBrowserDialog({
      } catch {
        // If config fetch fails, try initialPath or fall back to home directory
        if (initialPath) {
          setPathInput(initialPath);
          browseDirectory(initialPath);
        } else {
          browseDirectory();
@@ -184,21 +199,34 @@ export function FileBrowserDialog({
    browseDirectory(dir.path);
  };

  const handleGoHome = useCallback(() => {
    browseDirectory();
  }, [browseDirectory]);
  const handleGoToParent = () => {
    if (parentPath) {
      browseDirectory(parentPath);
    }
  };

  const handleNavigate = useCallback(
    (path: string) => {
      browseDirectory(path);
    },
    [browseDirectory]
  );
  const handleGoHome = () => {
    browseDirectory();
  };

  const handleSelectDrive = (drivePath: string) => {
    browseDirectory(drivePath);
  };

  const handleGoToPath = () => {
    const trimmedPath = pathInput.trim();
    if (trimmedPath) {
      browseDirectory(trimmedPath);
    }
  };

  const handlePathInputKeyDown = (e: React.KeyboardEvent<HTMLInputElement>) => {
    if (e.key === 'Enter') {
      e.preventDefault();
      handleGoToPath();
    }
  };

  const handleSelect = useCallback(() => {
    if (currentPath) {
      addRecentFolder(currentPath);
@@ -235,7 +263,7 @@ export function FileBrowserDialog({

  return (
    <Dialog open={open} onOpenChange={onOpenChange}>
      <DialogContent className="bg-popover border-border max-w-3xl max-h-[85vh] overflow-hidden flex flex-col p-4 focus:outline-none focus-visible:outline-none">
      <DialogContent className="bg-popover border-border max-w-3xl max-h-[85vh] overflow-hidden flex flex-col p-4">
        <DialogHeader className="pb-1">
          <DialogTitle className="flex items-center gap-2 text-base">
            <FolderOpen className="w-4 h-4 text-brand-500" />
@@ -247,21 +275,31 @@ export function FileBrowserDialog({
        </DialogHeader>

        <div className="flex flex-col gap-2 min-h-[350px] flex-1 overflow-hidden py-1">
          {/* Path navigation */}
          <PathInput
            currentPath={currentPath}
            parentPath={parentPath}
            loading={loading}
            error={!!error}
            onNavigate={handleNavigate}
            onHome={handleGoHome}
            entries={directories.map((dir) => ({ ...dir, isDirectory: true }))}
            onSelectEntry={(entry) => {
              if (entry.isDirectory) {
                handleSelectDirectory(entry);
              }
            }}
          />
          {/* Direct path input */}
          <div className="flex items-center gap-1.5">
            <Input
              ref={pathInputRef}
              type="text"
              placeholder="Paste or type a full path (e.g., /home/user/projects/myapp)"
              value={pathInput}
              onChange={(e) => setPathInput(e.target.value)}
              onKeyDown={handlePathInputKeyDown}
              className="flex-1 font-mono text-xs h-8"
              data-testid="path-input"
              disabled={loading}
            />
            <Button
              variant="secondary"
              size="sm"
              onClick={handleGoToPath}
              disabled={loading || !pathInput.trim()}
              data-testid="go-to-path-button"
              className="h-8 px-2"
            >
              <CornerDownLeft className="w-3.5 h-3.5 mr-1" />
              Go
            </Button>
          </div>

          {/* Recent folders */}
          {recentFolders.length > 0 && (
@@ -314,8 +352,35 @@ export function FileBrowserDialog({
            </div>
          )}

          {/* Current path breadcrumb */}
          <div className="flex items-center gap-1.5 p-2 rounded-md bg-sidebar-accent/10 border border-sidebar-border">
            <Button
              variant="ghost"
              size="sm"
              onClick={handleGoHome}
              className="h-6 px-1.5"
              disabled={loading}
            >
              <Home className="w-3.5 h-3.5" />
            </Button>
            {parentPath && (
              <Button
                variant="ghost"
                size="sm"
                onClick={handleGoToParent}
                className="h-6 px-1.5"
                disabled={loading}
              >
                <ArrowLeft className="w-3.5 h-3.5" />
              </Button>
            )}
            <div className="flex-1 font-mono text-xs truncate text-muted-foreground">
              {currentPath || 'Loading...'}
            </div>
          </div>

          {/* Directory list */}
          <div className="flex-1 overflow-y-auto border border-sidebar-border rounded-md scrollbar-styled">
          <div className="flex-1 overflow-y-auto border border-sidebar-border rounded-md">
            {loading && (
              <div className="flex items-center justify-center h-full p-4">
                <div className="text-xs text-muted-foreground">Loading directories...</div>
@@ -358,8 +423,8 @@ export function FileBrowserDialog({
          </div>

          <div className="text-[10px] text-muted-foreground">
            Paste a full path above, or click on folders to navigate. Press Enter or click → to jump
            to a path.
            Paste a full path above, or click on folders to navigate. Press Enter or click Go to
            jump to a path.
          </div>
        </div>

@@ -375,10 +440,12 @@ export function FileBrowserDialog({
          >
            <FolderOpen className="w-3.5 h-3.5 mr-1.5" />
            Select Current Folder
            <KbdGroup className="ml-1">
              <Kbd>{isMac ? '⌘' : 'Ctrl'}</Kbd>
              <Kbd>↵</Kbd>
            </KbdGroup>
            <kbd className="ml-2 px-1.5 py-0.5 text-[10px] bg-background/50 rounded border border-border">
              {typeof navigator !== 'undefined' && navigator.platform?.includes('Mac')
                ? '⌘'
                : 'Ctrl'}
              +↵
            </kbd>
          </Button>
        </DialogFooter>
      </DialogContent>

@@ -245,21 +245,18 @@ export function NewProjectModal({
        {/* Workspace Directory Display */}
        <div
          className={cn(
            'flex items-start gap-2 text-sm',
            'flex items-center gap-2 text-sm',
            errors.workspaceDir ? 'text-red-500' : 'text-muted-foreground'
          )}
        >
          <Folder className="w-4 h-4 shrink-0 mt-0.5" />
          <span className="flex-1 min-w-0 flex flex-col gap-1">
          <Folder className="w-4 h-4 shrink-0" />
          <span className="flex-1 min-w-0">
            {isLoadingWorkspace ? (
              'Loading workspace...'
            ) : workspaceDir ? (
              <>
                <span>Will be created at:</span>
                <code
                  className="text-xs bg-muted px-1.5 py-0.5 rounded truncate block max-w-full"
                  title={projectPath || workspaceDir}
                >
                Will be created at:{' '}
                <code className="text-xs bg-muted px-1.5 py-0.5 rounded truncate">
                  {projectPath || workspaceDir}
                </code>
              </>
118
apps/ui/src/components/layout/floating-dock.tsx
Normal file
118
apps/ui/src/components/layout/floating-dock.tsx
Normal file
@@ -0,0 +1,118 @@
|
||||
import { useRef } from 'react';
|
||||
import { motion, useMotionValue, useSpring, useTransform } from 'framer-motion';
|
||||
import { useNavigate, useLocation } from '@tanstack/react-router';
|
||||
import {
|
||||
LayoutDashboard,
|
||||
Bot,
|
||||
FileText,
|
||||
Database,
|
||||
Terminal,
|
||||
Settings,
|
||||
Users,
|
||||
type LucideIcon,
|
||||
} from 'lucide-react';
|
||||
import { cn } from '@/lib/utils';
|
||||
import { useAppStore } from '@/store/app-store';
|
||||
|
||||
export function FloatingDock() {
|
||||
const mouseX = useMotionValue(Infinity);
|
||||
const navigate = useNavigate();
|
||||
const location = useLocation();
|
||||
const { currentProject } = useAppStore();
|
||||
|
||||
const navItems = [
|
||||
{ id: 'board', icon: LayoutDashboard, label: 'Board', path: '/board' },
|
||||
{ id: 'agent', icon: Bot, label: 'Agent', path: '/agent' },
|
||||
{ id: 'spec', icon: FileText, label: 'Spec', path: '/spec' },
|
||||
{ id: 'context', icon: Database, label: 'Context', path: '/context' },
|
||||
{ id: 'profiles', icon: Users, label: 'Profiles', path: '/profiles' },
|
||||
{ id: 'terminal', icon: Terminal, label: 'Terminal', path: '/terminal' },
|
||||
{ id: 'settings', icon: Settings, label: 'Settings', path: '/settings' },
|
||||
];
|
||||
|
||||
if (!currentProject) return null;
|
||||
|
||||
return (
|
||||
<div className="fixed bottom-8 left-1/2 -translate-x-1/2 z-50">
|
||||
<motion.div
|
||||
onMouseMove={(e) => mouseX.set(e.pageX)}
|
||||
onMouseLeave={() => mouseX.set(Infinity)}
|
||||
className={cn(
|
||||
'flex h-16 items-end gap-4 rounded-2xl px-4 pb-3',
|
||||
'bg-white/5 backdrop-blur-2xl border border-white/10 shadow-2xl'
|
||||
)}
|
||||
>
|
||||
{navItems.map((item) => (
|
||||
<DockIcon
|
||||
key={item.id}
|
||||
mouseX={mouseX}
|
||||
icon={item.icon}
|
||||
path={item.path}
|
||||
label={item.label}
|
||||
isActive={location.pathname.startsWith(item.path)}
|
||||
onClick={() => navigate({ to: item.path })}
|
||||
/>
|
||||
))}
|
||||
</motion.div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
|
||||
function DockIcon({
|
||||
mouseX,
|
||||
icon: Icon,
|
||||
path,
|
||||
label,
|
||||
isActive,
|
||||
onClick,
|
||||
}: {
|
||||
mouseX: any;
|
||||
icon: LucideIcon;
|
||||
path: string;
|
||||
label: string;
|
||||
isActive: boolean;
|
||||
onClick: () => void;
|
||||
}) {
|
||||
const ref = useRef<HTMLDivElement>(null);
|
||||
|
||||
const distance = useTransform(mouseX, (val: number) => {
|
||||
const bounds = ref.current?.getBoundingClientRect() ?? { x: 0, width: 0 };
|
||||
return val - bounds.x - bounds.width / 2;
|
||||
});
|
||||
|
||||
const widthSync = useTransform(distance, [-150, 0, 150], [40, 80, 40]);
|
||||
const width = useSpring(widthSync, { mass: 0.1, stiffness: 150, damping: 12 });
|
||||
|
||||
return (
|
||||
<motion.div
|
||||
ref={ref}
|
||||
style={{ width }}
|
||||
className="aspect-square cursor-pointer group relative"
|
||||
onClick={onClick}
|
||||
>
|
||||
{/* Tooltip */}
|
||||
<div className="absolute -top-10 left-1/2 -translate-x-1/2 opacity-0 group-hover:opacity-100 transition-opacity text-xs font-mono bg-black/80 text-white px-2 py-1 rounded backdrop-blur-md border border-white/10 pointer-events-none whitespace-nowrap">
|
||||
{label}
|
||||
</div>
|
||||
|
||||
<div
|
||||
className={cn(
|
||||
'flex h-full w-full items-center justify-center rounded-full transition-colors',
|
||||
isActive
|
||||
? 'bg-primary text-primary-foreground shadow-[0_0_20px_rgba(34,211,238,0.3)]'
|
||||
: 'bg-white/5 text-muted-foreground hover:bg-white/10'
|
||||
)}
|
||||
>
|
||||
<Icon className="h-[40%] w-[40%]" />
|
||||
</div>
|
||||
|
||||
{/* Active Dot */}
|
||||
{isActive && (
|
||||
<motion.div
|
||||
layoutId="activeDockDot"
|
||||
className="absolute -bottom-2 left-1/2 w-1 h-1 bg-primary rounded-full -translate-x-1/2"
|
||||
/>
|
||||
)}
|
||||
</motion.div>
|
||||
);
|
||||
}
|
||||
70
apps/ui/src/components/layout/hud.tsx
Normal file
70
apps/ui/src/components/layout/hud.tsx
Normal file
@@ -0,0 +1,70 @@
|
||||
import { ChevronDown, Command, Folder } from 'lucide-react';
|
||||
import { cn } from '@/lib/utils';
|
||||
import { useAppStore } from '@/store/app-store';
|
||||
import { Button } from '@/components/ui/button';
|
||||
import {
|
||||
DropdownMenu,
|
||||
DropdownMenuContent,
|
||||
DropdownMenuItem,
|
||||
DropdownMenuLabel,
|
||||
DropdownMenuSeparator,
|
||||
DropdownMenuTrigger,
|
||||
} from '@/components/ui/dropdown-menu';
|
||||
|
||||
interface HudProps {
|
||||
onOpenProjectPicker: () => void;
|
||||
onOpenFolder: () => void;
|
||||
}
|
||||
|
||||
export function Hud({ onOpenProjectPicker, onOpenFolder }: HudProps) {
|
||||
const { currentProject, projects, setCurrentProject } = useAppStore();
|
||||
|
||||
if (!currentProject) return null;
|
||||
|
||||
return (
|
||||
<div className="fixed top-4 left-4 z-50 flex items-center gap-3">
|
||||
{/* Project Pill */}
|
||||
<DropdownMenu>
|
||||
<DropdownMenuTrigger asChild>
|
||||
<div
|
||||
className={cn(
|
||||
'group flex items-center gap-3 px-4 py-2 rounded-full cursor-pointer',
|
||||
'bg-white/5 backdrop-blur-md border border-white/10',
|
||||
'hover:bg-white/10 transition-colors'
|
||||
)}
|
||||
>
|
||||
<div className="w-2 h-2 rounded-full bg-emerald-500 shadow-[0_0_10px_rgba(16,185,129,0.4)] animate-pulse" />
|
||||
<span className="font-mono text-sm font-medium tracking-tight">
|
||||
{currentProject.name}
|
||||
</span>
|
||||
<ChevronDown className="w-3 h-3 text-muted-foreground group-hover:text-foreground transition-colors" />
|
||||
</div>
|
||||
</DropdownMenuTrigger>
|
||||
<DropdownMenuContent className="w-56 glass border-white/10" align="start">
|
||||
<DropdownMenuLabel>Switch Project</DropdownMenuLabel>
|
||||
<DropdownMenuSeparator />
|
||||
{projects.slice(0, 5).map((p) => (
|
||||
<DropdownMenuItem
|
||||
key={p.id}
|
||||
onClick={() => setCurrentProject(p)}
|
||||
className="font-mono text-xs"
|
||||
>
|
||||
{p.name}
|
||||
</DropdownMenuItem>
|
||||
))}
|
||||
<DropdownMenuSeparator />
|
||||
<DropdownMenuItem onClick={onOpenProjectPicker}>
|
||||
<Command className="mr-2 w-3 h-3" />
|
||||
All Projects...
|
||||
</DropdownMenuItem>
|
||||
<DropdownMenuItem onClick={onOpenFolder}>
|
||||
<Folder className="mr-2 w-3 h-3" />
|
||||
Open Local Folder...
|
||||
</DropdownMenuItem>
|
||||
</DropdownMenuContent>
|
||||
</DropdownMenu>
|
||||
|
||||
{/* Dynamic Status / Breadcrumbs could go here */}
|
||||
</div>
|
||||
);
|
||||
}
|
||||
17
apps/ui/src/components/layout/noise-overlay.tsx
Normal file
17
apps/ui/src/components/layout/noise-overlay.tsx
Normal file
@@ -0,0 +1,17 @@
|
||||
export function NoiseOverlay() {
|
||||
return (
|
||||
<div className="fixed inset-0 z-50 pointer-events-none opacity-[0.015] mix-blend-overlay">
|
||||
<svg className="w-full h-full">
|
||||
<filter id="noiseFilter">
|
||||
<feTurbulence
|
||||
type="fractalNoise"
|
||||
baseFrequency="0.80"
|
||||
numOctaves="3"
|
||||
stitchTiles="stitch"
|
||||
/>
|
||||
</filter>
|
||||
<rect width="100%" height="100%" filter="url(#noiseFilter)" />
|
||||
</svg>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
30
apps/ui/src/components/layout/page-shell.tsx
Normal file
30
apps/ui/src/components/layout/page-shell.tsx
Normal file
@@ -0,0 +1,30 @@
|
||||
import { ReactNode } from 'react';
|
||||
import { cn } from '@/lib/utils';
|
||||
import { motion } from 'framer-motion';
|
||||
|
||||
interface PageShellProps {
|
||||
children: ReactNode;
|
||||
className?: string;
|
||||
fullWidth?: boolean;
|
||||
}
|
||||
|
||||
export function PageShell({ children, className, fullWidth = false }: PageShellProps) {
|
||||
return (
|
||||
<div className="relative w-full h-full pt-16 pb-24 px-6 overflow-hidden">
|
||||
<motion.div
|
||||
initial={{ opacity: 0, scale: 0.98, y: 10 }}
|
||||
animate={{ opacity: 1, scale: 1, y: 0 }}
|
||||
transition={{ duration: 0.4, ease: [0.2, 0, 0, 1] }}
|
||||
className={cn(
|
||||
'w-full h-full rounded-3xl overflow-hidden',
|
||||
'bg-black/20 backdrop-blur-2xl border border-white/5 shadow-2xl',
|
||||
'flex flex-col',
|
||||
!fullWidth && 'max-w-7xl mx-auto',
|
||||
className
|
||||
)}
|
||||
>
|
||||
{children}
|
||||
</motion.div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
69
apps/ui/src/components/layout/prism-field.tsx
Normal file
69
apps/ui/src/components/layout/prism-field.tsx
Normal file
@@ -0,0 +1,69 @@
|
||||
import { motion } from 'framer-motion';
|
||||
import { useEffect, useState } from 'react';
|
||||
|
||||
export function PrismField() {
|
||||
const [mousePosition, setMousePosition] = useState({ x: 0, y: 0 });
|
||||
|
||||
useEffect(() => {
|
||||
const handleMouseMove = (e: MouseEvent) => {
|
||||
setMousePosition({
|
||||
x: e.clientX,
|
||||
y: e.clientY,
|
||||
});
|
||||
};
|
||||
|
||||
window.addEventListener('mousemove', handleMouseMove);
|
||||
return () => window.removeEventListener('mousemove', handleMouseMove);
|
||||
}, []);
|
||||
|
||||
return (
|
||||
<div className="fixed inset-0 z-0 overflow-hidden pointer-events-none bg-[#0b101a]">
|
||||
{/* Deep Space Base */}
|
||||
<div className="absolute inset-0 bg-[radial-gradient(circle_at_50%_50%,rgba(17,24,39,1)_0%,rgba(11,16,26,1)_100%)]" />
|
||||
|
||||
{/* Animated Orbs */}
|
||||
<motion.div
|
||||
animate={{
|
||||
x: mousePosition.x * 0.02,
|
||||
y: mousePosition.y * 0.02,
|
||||
}}
|
||||
transition={{ type: 'spring', damping: 50, stiffness: 400 }}
|
||||
className="absolute top-[-20%] left-[-10%] w-[70vw] h-[70vw] rounded-full bg-cyan-500/5 blur-[120px] mix-blend-screen"
|
||||
/>
|
||||
|
||||
<motion.div
|
||||
animate={{
|
||||
x: mousePosition.x * -0.03,
|
||||
y: mousePosition.y * -0.03,
|
||||
}}
|
||||
transition={{ type: 'spring', damping: 50, stiffness: 400 }}
|
||||
className="absolute bottom-[-20%] right-[-10%] w-[60vw] h-[60vw] rounded-full bg-violet-600/5 blur-[120px] mix-blend-screen"
|
||||
/>
|
||||
|
||||
<motion.div
|
||||
animate={{
|
||||
scale: [1, 1.1, 1],
|
||||
opacity: [0.3, 0.5, 0.3],
|
||||
}}
|
||||
transition={{
|
||||
duration: 8,
|
||||
repeat: Infinity,
|
||||
ease: 'easeInOut',
|
||||
}}
|
||||
className="absolute top-[30%] left-[50%] transform -translate-x-1/2 -translate-y-1/2 w-[40vw] h-[40vw] rounded-full bg-blue-500/5 blur-[100px] mix-blend-screen"
|
||||
/>
|
||||
|
||||
{/* Grid Overlay */}
|
||||
<div
|
||||
className="absolute inset-0 z-10 opacity-[0.03]"
|
||||
style={{
|
||||
backgroundImage: `linear-gradient(#fff 1px, transparent 1px), linear-gradient(90deg, #fff 1px, transparent 1px)`,
|
||||
backgroundSize: '50px 50px',
|
||||
}}
|
||||
/>
|
||||
|
||||
{/* Vignette */}
|
||||
<div className="absolute inset-0 z-20 bg-[radial-gradient(circle_at_center,transparent_0%,rgba(11,16,26,0.8)_100%)]" />
|
||||
</div>
|
||||
);
|
||||
}
|
||||
32
apps/ui/src/components/layout/shell.tsx
Normal file
32
apps/ui/src/components/layout/shell.tsx
Normal file
@@ -0,0 +1,32 @@
|
||||
import { ReactNode } from 'react';
|
||||
import { cn } from '../../lib/utils';
|
||||
import { PrismField } from './prism-field';
|
||||
import { NoiseOverlay } from './noise-overlay';
|
||||
|
||||
interface ShellProps {
|
||||
children: ReactNode;
|
||||
className?: string;
|
||||
showBackgroundElements?: boolean;
|
||||
}
|
||||
|
||||
export function Shell({ children, className, showBackgroundElements = true }: ShellProps) {
|
||||
return (
|
||||
<div
|
||||
className={cn(
|
||||
'relative min-h-screen w-full overflow-hidden bg-background text-foreground transition-colors duration-500',
|
||||
className
|
||||
)}
|
||||
>
|
||||
{/* Animated Background Layers */}
|
||||
{showBackgroundElements && (
|
||||
<>
|
||||
<PrismField />
|
||||
<NoiseOverlay />
|
||||
</>
|
||||
)}
|
||||
|
||||
{/* Content wrapper */}
|
||||
<div className="relative z-10 flex h-screen flex-col">{children}</div>
|
||||
</div>
|
||||
);
|
||||
}
|
||||
@@ -17,8 +17,9 @@ import {
|
||||
ProjectActions,
|
||||
SidebarNavigation,
|
||||
ProjectSelectorWithOptions,
|
||||
SidebarFooter,
|
||||
} from './sidebar/components';
|
||||
import { Hud } from './hud';
|
||||
import { FloatingDock } from './floating-dock';
|
||||
import { TrashDialog, OnboardingDialog } from './sidebar/dialogs';
|
||||
import { SIDEBAR_FEATURE_FLAGS } from './sidebar/constants';
|
||||
import {
|
||||
@@ -28,9 +29,8 @@ import {
|
||||
useNavigation,
|
||||
useProjectCreation,
|
||||
useSetupDialog,
|
||||
useTrashOperations,
|
||||
useTrashDialog,
|
||||
useProjectTheme,
|
||||
useUnviewedValidations,
|
||||
} from './sidebar/hooks';
|
||||
|
||||
export function Sidebar() {
|
||||
@@ -68,9 +68,6 @@ export function Sidebar() {
|
||||
// State for delete project confirmation dialog
|
||||
const [showDeleteProjectDialog, setShowDeleteProjectDialog] = useState(false);
|
||||
|
||||
// State for trash dialog
|
||||
const [showTrashDialog, setShowTrashDialog] = useState(false);
|
||||
|
||||
// Project theme management (must come before useProjectCreation which uses globalTheme)
|
||||
const { globalTheme } = useProjectTheme();
|
||||
|
||||
@@ -131,20 +128,20 @@ export function Sidebar() {
|
||||
// Running agents count
|
||||
const { runningAgentsCount } = useRunningAgents();
|
||||
|
||||
// Unviewed validations count
|
||||
const { count: unviewedValidationsCount } = useUnviewedValidations(currentProject);
|
||||
|
||||
// Trash operations
|
||||
// Trash dialog and operations
|
||||
const {
|
||||
showTrashDialog,
|
||||
setShowTrashDialog,
|
||||
activeTrashId,
|
||||
isEmptyingTrash,
|
||||
handleRestoreProject,
|
||||
handleDeleteProjectFromDisk,
|
||||
handleEmptyTrash,
|
||||
} = useTrashOperations({
|
||||
} = useTrashDialog({
|
||||
restoreTrashedProject,
|
||||
deleteTrashedProject,
|
||||
emptyTrash,
|
||||
trashedProjects,
|
||||
});
|
||||
|
||||
// Spec regeneration events
|
||||
@@ -239,7 +236,6 @@ export function Sidebar() {
|
||||
setIsProjectPickerOpen,
|
||||
cyclePrevProject,
|
||||
cycleNextProject,
|
||||
unviewedValidationsCount,
|
||||
});
|
||||
|
||||
// Register keyboard shortcuts
|
||||
@@ -252,64 +248,27 @@ export function Sidebar() {
|
||||
};
|
||||
|
||||
return (
|
||||
<aside
|
||||
className={cn(
|
||||
'flex-shrink-0 flex flex-col z-30 relative',
|
||||
// Glass morphism background with gradient
|
||||
'bg-gradient-to-b from-sidebar/95 via-sidebar/85 to-sidebar/90 backdrop-blur-2xl',
|
||||
// Premium border with subtle glow
|
||||
'border-r border-border/60 shadow-[1px_0_20px_-5px_rgba(0,0,0,0.1)]',
|
||||
// Smooth width transition
|
||||
'transition-all duration-300 ease-[cubic-bezier(0.4,0,0.2,1)]',
|
||||
sidebarOpen ? 'w-16 lg:w-72' : 'w-16'
|
||||
)}
|
||||
data-testid="sidebar"
|
||||
>
|
||||
<CollapseToggleButton
|
||||
sidebarOpen={sidebarOpen}
|
||||
toggleSidebar={toggleSidebar}
|
||||
shortcut={shortcuts.toggleSidebar}
|
||||
<>
|
||||
{/* Heads-Up Display (Top Bar) */}
|
||||
<Hud
|
||||
onOpenProjectPicker={() => setIsProjectPickerOpen(true)}
|
||||
onOpenFolder={handleOpenFolder}
|
||||
/>
|
||||
|
||||
<div className="flex-1 flex flex-col overflow-hidden">
|
||||
<SidebarHeader sidebarOpen={sidebarOpen} navigate={navigate} />
|
||||
|
||||
{/* Project Actions - Moved above project selector */}
|
||||
{sidebarOpen && (
|
||||
<ProjectActions
|
||||
setShowNewProjectModal={setShowNewProjectModal}
|
||||
handleOpenFolder={handleOpenFolder}
|
||||
setShowTrashDialog={setShowTrashDialog}
|
||||
trashedProjects={trashedProjects}
|
||||
shortcuts={{ openProject: shortcuts.openProject }}
|
||||
/>
|
||||
)}
|
||||
{/* Floating Navigation Dock */}
|
||||
<FloatingDock />
|
||||
|
||||
{/* Project Selector Dialog (Hidden logic, controlled by state) */}
|
||||
<div className="hidden">
|
||||
<ProjectSelectorWithOptions
|
||||
sidebarOpen={sidebarOpen}
|
||||
sidebarOpen={true}
|
||||
isProjectPickerOpen={isProjectPickerOpen}
|
||||
setIsProjectPickerOpen={setIsProjectPickerOpen}
|
||||
setShowDeleteProjectDialog={setShowDeleteProjectDialog}
|
||||
/>
|
||||
|
||||
<SidebarNavigation
|
||||
currentProject={currentProject}
|
||||
sidebarOpen={sidebarOpen}
|
||||
navSections={navSections}
|
||||
isActiveRoute={isActiveRoute}
|
||||
navigate={navigate}
|
||||
/>
|
||||
</div>
|
||||
|
||||
<SidebarFooter
|
||||
sidebarOpen={sidebarOpen}
|
||||
isActiveRoute={isActiveRoute}
|
||||
navigate={navigate}
|
||||
hideWiki={hideWiki}
|
||||
hideRunningAgents={hideRunningAgents}
|
||||
runningAgentsCount={runningAgentsCount}
|
||||
shortcuts={{ settings: shortcuts.settings }}
|
||||
/>
|
||||
{/* Dialogs & Modals - Preservation of Logic */}
|
||||
<TrashDialog
|
||||
open={showTrashDialog}
|
||||
onOpenChange={setShowTrashDialog}
|
||||
@@ -322,7 +281,6 @@ export function Sidebar() {
|
||||
isEmptyingTrash={isEmptyingTrash}
|
||||
/>
|
||||
|
||||
{/* New Project Setup Dialog */}
|
||||
<CreateSpecDialog
|
||||
open={showSetupDialog}
|
||||
onOpenChange={setShowSetupDialog}
|
||||
@@ -350,7 +308,6 @@ export function Sidebar() {
|
||||
onGenerateSpec={handleOnboardingGenerateSpec}
|
||||
/>
|
||||
|
||||
{/* Delete Project Confirmation Dialog */}
|
||||
<DeleteProjectDialog
|
||||
open={showDeleteProjectDialog}
|
||||
onOpenChange={setShowDeleteProjectDialog}
|
||||
@@ -358,7 +315,6 @@ export function Sidebar() {
|
||||
onConfirm={moveProjectToTrash}
|
||||
/>
|
||||
|
||||
{/* New Project Modal */}
|
||||
<NewProjectModal
|
||||
open={showNewProjectModal}
|
||||
onOpenChange={setShowNewProjectModal}
|
||||
@@ -367,6 +323,6 @@ export function Sidebar() {
|
||||
onCreateFromCustomUrl={handleCreateFromCustomUrl}
|
||||
isCreating={isCreatingProject}
|
||||
/>
|
||||
</aside>
|
||||
</>
|
||||
);
|
||||
}
|
||||
|
||||
@@ -47,6 +47,7 @@ export function ProjectSelectorWithOptions({
|
||||
setIsProjectPickerOpen,
|
||||
setShowDeleteProjectDialog,
|
||||
}: ProjectSelectorWithOptionsProps) {
|
||||
// Get data from store
|
||||
const {
|
||||
projects,
|
||||
currentProject,
|
||||
@@ -58,24 +59,25 @@ export function ProjectSelectorWithOptions({
|
||||
clearProjectHistory,
|
||||
} = useAppStore();
|
||||
|
||||
// Get keyboard shortcuts
|
||||
const shortcuts = useKeyboardShortcutsConfig();
|
||||
const {
|
||||
projectSearchQuery,
|
||||
setProjectSearchQuery,
|
||||
selectedProjectIndex,
|
||||
projectSearchInputRef,
|
||||
scrollContainerRef,
|
||||
filteredProjects,
|
||||
} = useProjectPicker({
|
||||
projects,
|
||||
currentProject,
|
||||
isProjectPickerOpen,
|
||||
setIsProjectPickerOpen,
|
||||
setCurrentProject,
|
||||
});
|
||||
|
||||
// Drag-and-drop handlers
|
||||
const { sensors, handleDragEnd } = useDragAndDrop({ projects, reorderProjects });
|
||||
|
||||
// Theme management
|
||||
const {
|
||||
globalTheme,
|
||||
setTheme,
|
||||
@@ -104,6 +106,7 @@ export function ProjectSelectorWithOptions({
'shadow-sm shadow-black/5',
'text-foreground titlebar-no-drag min-w-0',
'transition-all duration-200 ease-out',
'hover:scale-[1.01] active:scale-[0.99]',
isProjectPickerOpen &&
'from-brand-500/10 to-brand-600/5 border-brand-500/30 ring-2 ring-brand-500/20 shadow-lg shadow-brand-500/5'
)}
@@ -136,7 +139,7 @@ export function ProjectSelectorWithOptions({
align="start"
data-testid="project-picker-dropdown"
>
{/* Search input */}
{/* Search input for type-ahead filtering */}
<div className="px-1 pb-2">
<div className="relative">
<Search className="absolute left-2.5 top-1/2 -translate-y-1/2 h-3.5 w-3.5 text-muted-foreground" />
@@ -147,10 +150,10 @@ export function ProjectSelectorWithOptions({
value={projectSearchQuery}
onChange={(e) => setProjectSearchQuery(e.target.value)}
className={cn(
'w-full h-8 pl-8 pr-3 text-sm rounded-lg',
'w-full h-9 pl-8 pr-3 text-sm rounded-lg',
'border border-border bg-background/50',
'text-foreground placeholder:text-muted-foreground',
'focus:outline-none focus:ring-1 focus:ring-brand-500/30 focus:border-brand-500/50',
'focus:outline-none focus:ring-2 focus:ring-brand-500/30 focus:border-brand-500/50',
'transition-all duration-200'
)}
data-testid="project-search-input"
@@ -172,10 +175,7 @@ export function ProjectSelectorWithOptions({
items={filteredProjects.map((p) => p.id)}
strategy={verticalListSortingStrategy}
>
<div
ref={scrollContainerRef}
className="space-y-0.5 max-h-64 overflow-y-auto overflow-x-hidden scroll-smooth scrollbar-styled"
>
<div className="space-y-0.5 max-h-64 overflow-y-auto">
{filteredProjects.map((project, index) => (
<SortableProjectItem
key={project.id}
@@ -196,9 +196,9 @@ export function ProjectSelectorWithOptions({
{/* Keyboard hint */}
<div className="px-2 pt-2 mt-1.5 border-t border-border/50">
<p className="text-[10px] text-muted-foreground text-center tracking-wide">
<span className="text-foreground/60">↑↓</span> navigate{' '}
<span className="text-foreground/60">arrow</span> navigate{' '}
<span className="mx-1 text-foreground/30">|</span>{' '}
<span className="text-foreground/60">↵</span> select{' '}
<span className="text-foreground/60">enter</span> select{' '}
<span className="mx-1 text-foreground/30">|</span>{' '}
<span className="text-foreground/60">esc</span> close
</p>
@@ -206,7 +206,7 @@ export function ProjectSelectorWithOptions({
</DropdownMenuContent>
</DropdownMenu>

{/* Project Options Menu */}
{/* Project Options Menu - theme and history */}
{currentProject && (
<DropdownMenu
onOpenChange={(open) => {
@@ -223,7 +223,8 @@ export function ProjectSelectorWithOptions({
'text-muted-foreground hover:text-foreground',
'bg-transparent hover:bg-accent/60',
'border border-border/50 hover:border-border',
'transition-all duration-200 ease-out titlebar-no-drag'
'transition-all duration-200 ease-out titlebar-no-drag',
'hover:scale-[1.02] active:scale-[0.98]'
)}
title="Project options"
data-testid="project-options-menu"
@@ -251,6 +252,7 @@ export function ProjectSelectorWithOptions({
setPreviewTheme(null);
}}
>
{/* Use Global Option */}
<DropdownMenuRadioGroup
value={currentProject.theme || ''}
onValueChange={(value) => {
@@ -326,7 +328,7 @@ export function ProjectSelectorWithOptions({
</DropdownMenuSubContent>
</DropdownMenuSub>

{/* Project History Section */}
{/* Project History Section - only show when there's history */}
{projectHistory.length > 1 && (
<>
<DropdownMenuSeparator />

@@ -78,29 +78,14 @@ export function SidebarNavigation({
title={!sidebarOpen ? item.label : undefined}
data-testid={`nav-${item.id}`}
>
<div className="relative">
<Icon
className={cn(
'w-[18px] h-[18px] shrink-0 transition-all duration-200',
isActive
? 'text-brand-500 drop-shadow-sm'
: 'group-hover:text-brand-400 group-hover:scale-110'
)}
/>
{/* Count badge for collapsed state */}
{!sidebarOpen && item.count !== undefined && item.count > 0 && (
<span
className={cn(
'absolute -top-1.5 -right-1.5 flex items-center justify-center',
'min-w-4 h-4 px-1 text-[9px] font-bold rounded-full',
'bg-primary text-primary-foreground shadow-sm',
'animate-in fade-in zoom-in duration-200'
)}
>
{item.count > 99 ? '99' : item.count}
</span>
<Icon
className={cn(
'w-[18px] h-[18px] shrink-0 transition-all duration-200',
isActive
? 'text-brand-500 drop-shadow-sm'
: 'group-hover:text-brand-400 group-hover:scale-110'
)}
</div>
/>
<span
className={cn(
'ml-3 font-medium text-sm flex-1 text-left',
@@ -109,21 +94,7 @@ export function SidebarNavigation({
>
{item.label}
</span>
{/* Count badge */}
{item.count !== undefined && item.count > 0 && sidebarOpen && (
<span
className={cn(
'hidden lg:flex items-center justify-center',
'min-w-5 h-5 px-1.5 text-[10px] font-bold rounded-full',
'bg-primary text-primary-foreground shadow-sm',
'animate-in fade-in zoom-in duration-200'
)}
data-testid={`count-${item.id}`}
>
{item.count > 99 ? '99+' : item.count}
</span>
)}
{item.shortcut && sidebarOpen && !item.count && (
{item.shortcut && sidebarOpen && (
<span
className={cn(
'hidden lg:flex items-center justify-center min-w-5 h-5 px-1.5 text-[10px] font-mono rounded-md transition-all duration-200',

@@ -31,7 +31,6 @@ export function SortableProjectItem({
isHighlighted && 'bg-brand-500/10 text-foreground ring-1 ring-brand-500/20'
)}
data-testid={`project-option-${project.id}`}
onClick={() => onSelect(project)}
>
{/* Drag Handle */}
<button
@@ -44,14 +43,9 @@ export function SortableProjectItem({
<GripVertical className="h-3.5 w-3.5 text-muted-foreground/60" />
</button>

{/* Project content */}
<div className="flex items-center gap-2.5 flex-1 min-w-0">
<Folder
className={cn(
'h-4 w-4 shrink-0',
currentProjectId === project.id ? 'text-brand-500' : 'text-muted-foreground'
)}
/>
{/* Project content - clickable area */}
<div className="flex items-center gap-2.5 flex-1 min-w-0" onClick={() => onSelect(project)}>
<Folder className="h-4 w-4 shrink-0 text-muted-foreground" />
<span className="flex-1 truncate text-sm font-medium">{project.name}</span>
{currentProjectId === project.id && <Check className="h-4 w-4 text-brand-500 shrink-0" />}
</div>

@@ -37,11 +37,11 @@ export function OnboardingDialog({
<DialogContent className="max-w-2xl bg-popover/95 backdrop-blur-xl">
<DialogHeader>
<div className="flex items-center gap-3 mb-2">
<div className="flex items-center justify-center w-12 h-12 rounded-full bg-brand-500/10 border border-brand-500/20 shrink-0">
<div className="flex items-center justify-center w-12 h-12 rounded-full bg-brand-500/10 border border-brand-500/20">
<Rocket className="w-6 h-6 text-brand-500" />
</div>
<div className="min-w-0 flex-1">
<DialogTitle className="text-2xl truncate">Welcome to {newProjectName}!</DialogTitle>
<div>
<DialogTitle className="text-2xl">Welcome to {newProjectName}!</DialogTitle>
<DialogDescription className="text-muted-foreground mt-1">
Your new project is ready. Let's get you started.
</DialogDescription>

@@ -1,4 +1,3 @@
import { useState } from 'react';
import { X, Trash2, Undo2 } from 'lucide-react';
import {
Dialog,
@@ -9,8 +8,6 @@ import {
DialogTitle,
} from '@/components/ui/dialog';
import { Button } from '@/components/ui/button';
import { DeleteConfirmDialog } from '@/components/ui/delete-confirm-dialog';
import { ConfirmDialog } from '@/components/ui/confirm-dialog';
import type { TrashedProject } from '@/lib/electron';

interface TrashDialogProps {
@@ -36,146 +33,84 @@ export function TrashDialog({
handleEmptyTrash,
isEmptyingTrash,
}: TrashDialogProps) {
// Confirmation dialog state (managed internally to avoid prop drilling)
const [deleteFromDiskProject, setDeleteFromDiskProject] = useState<TrashedProject | null>(null);
const [showEmptyTrashConfirm, setShowEmptyTrashConfirm] = useState(false);

// Reset confirmation dialog state when main dialog closes
const handleOpenChange = (isOpen: boolean) => {
if (!isOpen) {
setDeleteFromDiskProject(null);
setShowEmptyTrashConfirm(false);
}
onOpenChange(isOpen);
};

const onDeleteFromDiskClick = (project: TrashedProject) => {
setDeleteFromDiskProject(project);
};

const onConfirmDeleteFromDisk = () => {
if (deleteFromDiskProject) {
handleDeleteProjectFromDisk(deleteFromDiskProject);
setDeleteFromDiskProject(null);
}
};

const onEmptyTrashClick = () => {
setShowEmptyTrashConfirm(true);
};

const onConfirmEmptyTrash = () => {
handleEmptyTrash();
setShowEmptyTrashConfirm(false);
};

return (
<>
<Dialog open={open} onOpenChange={handleOpenChange}>
<DialogContent className="bg-popover/95 backdrop-blur-xl border-border max-w-2xl">
<DialogHeader>
<DialogTitle>Recycle Bin</DialogTitle>
<DialogDescription className="text-muted-foreground">
Restore projects to the sidebar or delete their folders using your system Trash.
</DialogDescription>
</DialogHeader>
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className="bg-popover/95 backdrop-blur-xl border-border max-w-2xl">
<DialogHeader>
<DialogTitle>Recycle Bin</DialogTitle>
<DialogDescription className="text-muted-foreground">
Restore projects to the sidebar or delete their folders using your system Trash.
</DialogDescription>
</DialogHeader>

{trashedProjects.length === 0 ? (
<p className="text-sm text-muted-foreground">Recycle bin is empty.</p>
) : (
<div className="space-y-3 max-h-[360px] overflow-y-auto pr-1">
{trashedProjects.map((project) => (
<div
key={project.id}
className="flex items-start justify-between gap-3 rounded-lg border border-border bg-card/50 p-4"
>
<div className="space-y-1 min-w-0">
<p className="text-sm font-medium text-foreground truncate">{project.name}</p>
<p className="text-xs text-muted-foreground break-all">{project.path}</p>
<p className="text-[11px] text-muted-foreground/80">
Trashed {new Date(project.trashedAt).toLocaleString()}
</p>
</div>
<div className="flex flex-col gap-2 shrink-0">
<Button
size="sm"
variant="secondary"
onClick={() => handleRestoreProject(project.id)}
data-testid={`restore-project-${project.id}`}
>
<Undo2 className="h-3.5 w-3.5 mr-1.5" />
Restore
</Button>
<Button
size="sm"
variant="destructive"
onClick={() => onDeleteFromDiskClick(project)}
disabled={activeTrashId === project.id}
data-testid={`delete-project-disk-${project.id}`}
>
<Trash2 className="h-3.5 w-3.5 mr-1.5" />
{activeTrashId === project.id ? 'Deleting...' : 'Delete from disk'}
</Button>
<Button
size="sm"
variant="ghost"
className="text-muted-foreground hover:text-foreground"
onClick={() => deleteTrashedProject(project.id)}
data-testid={`remove-project-${project.id}`}
>
<X className="h-3.5 w-3.5 mr-1.5" />
Remove from list
</Button>
</div>
</div>
))}
</div>
)}

<DialogFooter className="flex justify-between">
<Button variant="ghost" onClick={() => onOpenChange(false)}>
Close
</Button>
{trashedProjects.length > 0 && (
<Button
variant="outline"
onClick={onEmptyTrashClick}
disabled={isEmptyingTrash}
data-testid="empty-trash"
{trashedProjects.length === 0 ? (
<p className="text-sm text-muted-foreground">Recycle bin is empty.</p>
) : (
<div className="space-y-3 max-h-[360px] overflow-y-auto pr-1">
{trashedProjects.map((project) => (
<div
key={project.id}
className="flex items-start justify-between gap-3 rounded-lg border border-border bg-card/50 p-4"
>
{isEmptyingTrash ? 'Clearing...' : 'Empty Recycle Bin'}
</Button>
)}
</DialogFooter>
</DialogContent>
</Dialog>
<div className="space-y-1 min-w-0">
<p className="text-sm font-medium text-foreground truncate">{project.name}</p>
<p className="text-xs text-muted-foreground break-all">{project.path}</p>
<p className="text-[11px] text-muted-foreground/80">
Trashed {new Date(project.trashedAt).toLocaleString()}
</p>
</div>
<div className="flex flex-col gap-2 shrink-0">
<Button
size="sm"
variant="secondary"
onClick={() => handleRestoreProject(project.id)}
data-testid={`restore-project-${project.id}`}
>
<Undo2 className="h-3.5 w-3.5 mr-1.5" />
Restore
</Button>
<Button
size="sm"
variant="destructive"
onClick={() => handleDeleteProjectFromDisk(project)}
disabled={activeTrashId === project.id}
data-testid={`delete-project-disk-${project.id}`}
>
<Trash2 className="h-3.5 w-3.5 mr-1.5" />
{activeTrashId === project.id ? 'Deleting...' : 'Delete from disk'}
</Button>
<Button
size="sm"
variant="ghost"
className="text-muted-foreground hover:text-foreground"
onClick={() => deleteTrashedProject(project.id)}
data-testid={`remove-project-${project.id}`}
>
<X className="h-3.5 w-3.5 mr-1.5" />
Remove from list
</Button>
</div>
</div>
))}
</div>
)}

{/* Delete from disk confirmation dialog */}
{deleteFromDiskProject && (
<DeleteConfirmDialog
open
onOpenChange={(isOpen) => !isOpen && setDeleteFromDiskProject(null)}
onConfirm={onConfirmDeleteFromDisk}
title={`Delete "${deleteFromDiskProject.name}" from disk?`}
description="This sends the folder to your system Trash."
confirmText="Delete from disk"
testId="delete-from-disk-confirm-dialog"
confirmTestId="confirm-delete-from-disk-button"
/>
)}

{/* Empty trash confirmation dialog */}
<ConfirmDialog
open={showEmptyTrashConfirm}
onOpenChange={setShowEmptyTrashConfirm}
onConfirm={onConfirmEmptyTrash}
title="Empty Recycle Bin"
description="Clear all projects from recycle bin? This does not delete folders from disk."
confirmText="Empty"
confirmVariant="destructive"
icon={Trash2}
iconClassName="text-destructive"
/>
</>
<DialogFooter className="flex justify-between">
<Button variant="ghost" onClick={() => onOpenChange(false)}>
Close
</Button>
{trashedProjects.length > 0 && (
<Button
variant="outline"
onClick={handleEmptyTrash}
disabled={isEmptyingTrash}
data-testid="empty-trash"
>
{isEmptyingTrash ? 'Clearing...' : 'Empty Recycle Bin'}
</Button>
)}
</DialogFooter>
</DialogContent>
</Dialog>
);
}

@@ -8,5 +8,5 @@ export { useSpecRegeneration } from './use-spec-regeneration';
export { useNavigation } from './use-navigation';
export { useProjectCreation } from './use-project-creation';
export { useSetupDialog } from './use-setup-dialog';
export { useTrashDialog } from './use-trash-dialog';
export { useProjectTheme } from './use-project-theme';
export { useUnviewedValidations } from './use-unviewed-validations';

@@ -44,8 +44,6 @@ interface UseNavigationProps {
setIsProjectPickerOpen: (value: boolean | ((prev: boolean) => boolean)) => void;
cyclePrevProject: () => void;
cycleNextProject: () => void;
/** Count of unviewed validations to show on GitHub Issues nav item */
unviewedValidationsCount?: number;
}

export function useNavigation({
@@ -63,7 +61,6 @@ export function useNavigation({
setIsProjectPickerOpen,
cyclePrevProject,
cycleNextProject,
unviewedValidationsCount,
}: UseNavigationProps) {
// Track if current project has a GitHub remote
const [hasGitHubRemote, setHasGitHubRemote] = useState(false);
@@ -172,7 +169,6 @@ export function useNavigation({
id: 'github-issues',
label: 'Issues',
icon: CircleDot,
count: unviewedValidationsCount,
},
{
id: 'github-prs',
@@ -184,15 +180,7 @@ export function useNavigation({
}

return sections;
}, [
shortcuts,
hideSpecEditor,
hideContext,
hideTerminal,
hideAiProfiles,
hasGitHubRemote,
unviewedValidationsCount,
]);
}, [shortcuts, hideSpecEditor, hideContext, hideTerminal, hideAiProfiles, hasGitHubRemote]);

// Build keyboard shortcuts for navigation
const navigationShortcuts: KeyboardShortcut[] = useMemo(() => {

@@ -3,7 +3,6 @@ import type { Project } from '@/lib/electron';

interface UseProjectPickerProps {
projects: Project[];
currentProject: Project | null;
isProjectPickerOpen: boolean;
setIsProjectPickerOpen: (value: boolean | ((prev: boolean) => boolean)) => void;
setCurrentProject: (project: Project) => void;
@@ -11,7 +10,6 @@ interface UseProjectPickerProps {

export function useProjectPicker({
projects,
currentProject,
isProjectPickerOpen,
setIsProjectPickerOpen,
setCurrentProject,
@@ -19,7 +17,6 @@ export function useProjectPicker({
const [projectSearchQuery, setProjectSearchQuery] = useState('');
const [selectedProjectIndex, setSelectedProjectIndex] = useState(0);
const projectSearchInputRef = useRef<HTMLInputElement>(null);
const scrollContainerRef = useRef<HTMLDivElement>(null);

// Filtered projects based on search query
const filteredProjects = useMemo(() => {
@@ -30,66 +27,28 @@ export function useProjectPicker({
return projects.filter((project) => project.name.toLowerCase().includes(query));
}, [projects, projectSearchQuery]);

// Helper function to scroll to a specific project
const scrollToProject = useCallback((projectId: string) => {
if (!scrollContainerRef.current) return;

const element = scrollContainerRef.current.querySelector(
`[data-testid="project-option-${projectId}"]`
);

if (element) {
element.scrollIntoView({
behavior: 'smooth',
block: 'nearest',
});
}
}, []);

// On open/close, handle search query reset and focus
// Reset selection when filtered results change
useEffect(() => {
if (isProjectPickerOpen) {
// Focus search input after DOM renders
requestAnimationFrame(() => {
projectSearchInputRef.current?.focus();
});
} else {
// Reset search when closing
setSelectedProjectIndex(0);
}, [filteredProjects.length, projectSearchQuery]);

// Reset search query when dropdown closes
useEffect(() => {
if (!isProjectPickerOpen) {
setProjectSearchQuery('');
setSelectedProjectIndex(0);
}
}, [isProjectPickerOpen]);

// Update selection when search query changes (while picker is open)
// Focus the search input when dropdown opens
useEffect(() => {
if (!isProjectPickerOpen) {
setSelectedProjectIndex(0);
return;
if (isProjectPickerOpen) {
// Small delay to ensure the dropdown is rendered
setTimeout(() => {
projectSearchInputRef.current?.focus();
}, 0);
}

if (projectSearchQuery.trim()) {
// When searching, reset to first result
setSelectedProjectIndex(0);
} else {
// When not searching (e.g., on open or search cleared), find and select the current project
const currentIndex = currentProject
? filteredProjects.findIndex((p) => p.id === currentProject.id)
: -1;
setSelectedProjectIndex(currentIndex !== -1 ? currentIndex : 0);
}
}, [isProjectPickerOpen, projectSearchQuery, filteredProjects, currentProject]);

// Scroll to highlighted item when selection changes
useEffect(() => {
if (!isProjectPickerOpen) return;

const targetProject = filteredProjects[selectedProjectIndex];
if (targetProject) {
// Use requestAnimationFrame to ensure DOM is rendered before scrolling
requestAnimationFrame(() => {
scrollToProject(targetProject.id);
});
}
}, [selectedProjectIndex, isProjectPickerOpen, filteredProjects, scrollToProject]);
}, [isProjectPickerOpen]);

// Handle selecting the currently highlighted project
const selectHighlightedProject = useCallback(() => {
@@ -140,7 +99,6 @@ export function useProjectPicker({
selectedProjectIndex,
setSelectedProjectIndex,
projectSearchInputRef,
scrollContainerRef,
filteredProjects,
selectHighlightedProject,
};

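The hunks above rework `useProjectPicker` so that search reset, input focus, and scroll-into-view behaviour live in separate effects. For orientation, here is a minimal sketch of a consumer, assuming only the parameters and return values visible in this diff; the component name, its markup, and the relative import path are hypothetical:

```tsx
import { useState } from 'react';
import { useProjectPicker } from './use-project-picker'; // path assumed from the hook's filename
import type { Project } from '@/lib/electron';

// Hypothetical consumer wiring the hook's refs and handlers into a simple picker UI.
function ProjectPickerExample(props: {
  projects: Project[];
  currentProject: Project | null;
  setCurrentProject: (project: Project) => void;
}) {
  const [isProjectPickerOpen, setIsProjectPickerOpen] = useState(false);

  const {
    projectSearchQuery,
    setProjectSearchQuery,
    projectSearchInputRef,
    scrollContainerRef,
    filteredProjects,
    selectHighlightedProject,
  } = useProjectPicker({ ...props, isProjectPickerOpen, setIsProjectPickerOpen });

  return (
    <div>
      {/* The hook focuses this input when the picker opens */}
      <input
        ref={projectSearchInputRef}
        value={projectSearchQuery}
        onChange={(e) => setProjectSearchQuery(e.target.value)}
        onKeyDown={(e) => e.key === 'Enter' && selectHighlightedProject()}
      />
      {/* The hook scrolls the highlighted project into view inside this container */}
      <div ref={scrollContainerRef}>
        {filteredProjects.map((p) => (
          <div key={p.id} data-testid={`project-option-${p.id}`}>
            {p.name}
          </div>
        ))}
      </div>
    </div>
  );
}
```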
@@ -0,0 +1,40 @@
import { useState } from 'react';
import { useTrashOperations } from './use-trash-operations';
import type { TrashedProject } from '@/lib/electron';

interface UseTrashDialogProps {
restoreTrashedProject: (projectId: string) => void;
deleteTrashedProject: (projectId: string) => void;
emptyTrash: () => void;
trashedProjects: TrashedProject[];
}

/**
 * Hook that combines trash operations with dialog state management
 */
export function useTrashDialog({
restoreTrashedProject,
deleteTrashedProject,
emptyTrash,
trashedProjects,
}: UseTrashDialogProps) {
// Dialog state
const [showTrashDialog, setShowTrashDialog] = useState(false);

// Reuse existing trash operations logic
const trashOperations = useTrashOperations({
restoreTrashedProject,
deleteTrashedProject,
emptyTrash,
trashedProjects,
});

return {
// Dialog state
showTrashDialog,
setShowTrashDialog,

// Trash operations (spread from existing hook)
...trashOperations,
};
}
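Because `use-trash-dialog.ts` is added as a new file in this diff, a short usage sketch may help. It assumes only the props interface and return shape shown above; the consuming component and its markup are hypothetical:

```tsx
import { useTrashDialog } from './use-trash-dialog'; // path assumed from the new file's name
import type { TrashedProject } from '@/lib/electron';

// Hypothetical consumer: wires store actions into the combined hook and opens the dialog.
function TrashMenuButtonExample(props: {
  restoreTrashedProject: (projectId: string) => void;
  deleteTrashedProject: (projectId: string) => void;
  emptyTrash: () => void;
  trashedProjects: TrashedProject[];
}) {
  // showTrashDialog/setShowTrashDialog come from the hook; handleRestoreProject is one of
  // the operations spread through from useTrashOperations.
  const { showTrashDialog, setShowTrashDialog, handleRestoreProject } = useTrashDialog(props);

  return (
    <>
      <button onClick={() => setShowTrashDialog(true)}>Recycle Bin</button>
      {/* The Sidebar diff above passes showTrashDialog, setShowTrashDialog, and the spread
          trash operations straight into <TrashDialog />; a plain list stands in for it here. */}
      {showTrashDialog && (
        <ul>
          {props.trashedProjects.map((p) => (
            <li key={p.id}>
              {p.name} <button onClick={() => handleRestoreProject(p.id)}>Restore</button>
            </li>
          ))}
        </ul>
      )}
    </>
  );
}
```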
@@ -6,35 +6,35 @@ interface UseTrashOperationsProps {
restoreTrashedProject: (projectId: string) => void;
deleteTrashedProject: (projectId: string) => void;
emptyTrash: () => void;
trashedProjects: TrashedProject[];
}

export function useTrashOperations({
restoreTrashedProject,
deleteTrashedProject,
emptyTrash,
trashedProjects,
}: UseTrashOperationsProps) {
const [activeTrashId, setActiveTrashId] = useState<string | null>(null);
const [isEmptyingTrash, setIsEmptyingTrash] = useState(false);

const handleRestoreProject = useCallback(
(projectId: string) => {
try {
restoreTrashedProject(projectId);
toast.success('Project restored', {
description: 'Added back to your project list.',
});
} catch (error) {
console.error('[Sidebar] Failed to restore project:', error);
toast.error('Failed to restore project', {
description: error instanceof Error ? error.message : 'Unknown error',
});
}
restoreTrashedProject(projectId);
toast.success('Project restored', {
description: 'Added back to your project list.',
});
},
[restoreTrashedProject]
);

const handleDeleteProjectFromDisk = useCallback(
async (trashedProject: TrashedProject) => {
const confirmed = window.confirm(
`Delete "${trashedProject.name}" from disk?\nThis sends the folder to your system Trash.`
);
if (!confirmed) return;

setActiveTrashId(trashedProject.id);
try {
const api = getElectronAPI();
@@ -64,19 +64,23 @@ export function useTrashOperations({
);

const handleEmptyTrash = useCallback(() => {
if (trashedProjects.length === 0) {
return;
}

const confirmed = window.confirm(
'Clear all projects from recycle bin? This does not delete folders from disk.'
);
if (!confirmed) return;

setIsEmptyingTrash(true);
try {
emptyTrash();
toast.success('Recycle bin cleared');
} catch (error) {
console.error('[Sidebar] Failed to empty trash:', error);
toast.error('Failed to clear recycle bin', {
description: error instanceof Error ? error.message : 'Unknown error',
});
} finally {
setIsEmptyingTrash(false);
}
}, [emptyTrash]);
}, [emptyTrash, trashedProjects.length]);

return {
activeTrashId,

@@ -1,82 +0,0 @@
import { useState, useEffect, useCallback, useRef } from 'react';
import { getElectronAPI } from '@/lib/electron';
import type { Project, StoredValidation } from '@/lib/electron';

/**
 * Hook to track the count of unviewed (fresh) issue validations for a project.
 * Also provides a function to decrement the count when a validation is viewed.
 */
export function useUnviewedValidations(currentProject: Project | null) {
const [count, setCount] = useState(0);
const projectPathRef = useRef<string | null>(null);

// Keep project path in ref for use in async functions
useEffect(() => {
projectPathRef.current = currentProject?.path ?? null;
}, [currentProject?.path]);

// Fetch and update count from server
const fetchUnviewedCount = useCallback(async () => {
const projectPath = projectPathRef.current;
if (!projectPath) return;

try {
const api = getElectronAPI();
if (api.github?.getValidations) {
const result = await api.github.getValidations(projectPath);
if (result.success && result.validations) {
const unviewed = result.validations.filter((v: StoredValidation) => {
if (v.viewedAt) return false;
// Check if not stale (< 24 hours)
const hoursSince = (Date.now() - new Date(v.validatedAt).getTime()) / (1000 * 60 * 60);
return hoursSince <= 24;
});
// Only update count if we're still on the same project (guard against race condition)
if (projectPathRef.current === projectPath) {
setCount(unviewed.length);
}
}
}
} catch (err) {
console.error('[useUnviewedValidations] Failed to load count:', err);
}
}, []);

// Load initial count and subscribe to events
useEffect(() => {
if (!currentProject?.path) {
setCount(0);
return;
}

// Load initial count
fetchUnviewedCount();

// Subscribe to validation events to update count
const api = getElectronAPI();
if (api.github?.onValidationEvent) {
const unsubscribe = api.github.onValidationEvent((event) => {
if (event.projectPath === currentProject.path) {
if (event.type === 'issue_validation_complete') {
// New validation completed - refresh count from server for consistency
fetchUnviewedCount();
} else if (event.type === 'issue_validation_viewed') {
// Validation was viewed - refresh count from server for consistency
fetchUnviewedCount();
}
}
});
return () => unsubscribe();
}
}, [currentProject?.path, fetchUnviewedCount]);

// Function to decrement count when a validation is viewed
const decrementCount = useCallback(() => {
setCount((prev) => Math.max(0, prev - 1));
}, []);

// Expose refreshCount as an alias to fetchUnviewedCount for external use
const refreshCount = fetchUnviewedCount;

return { count, decrementCount, refreshCount };
}
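The removed hook's core rule is visible in the filter above: a validation counts as unviewed only when it has no `viewedAt` timestamp and is less than 24 hours old. A standalone restatement of that rule, with the type narrowed to just the fields the filter touches (the real `StoredValidation` type has more):

```ts
// Minimal restatement of the unviewed-validation filter from the removed hook.
// Assumes only the fields used above; field names are taken from the diff.
interface ValidationLike {
  viewedAt?: string;
  validatedAt: string;
}

function countUnviewed(validations: ValidationLike[], now: number = Date.now()): number {
  return validations.filter((v) => {
    if (v.viewedAt) return false; // already viewed
    const hoursSince = (now - new Date(v.validatedAt).getTime()) / (1000 * 60 * 60);
    return hoursSince <= 24; // still fresh: less than 24 hours old
  }).length;
}
```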
@@ -11,8 +11,6 @@ export interface NavItem {
label: string;
icon: React.ComponentType<{ className?: string }>;
shortcut?: string;
/** Optional count badge to display next to the nav item */
count?: number;
}

export interface SortableProjectItemProps {
