feat: add project-scoped agent memory system (#351)

* memory

* feat: add smart memory selection with task context

- Add taskContext parameter to loadContextFiles for intelligent file selection
- Memory files are scored based on tag matching with task keywords
- Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
- Usage statistics influence scoring (files that helped before rank higher)
- Limit to top 5 files + always include gotchas.md
- Auto-mode passes feature title/description as context
- Chat sessions pass user message as context

This prevents loading 40+ memory files at once and blowing past the model's context window.
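The selection heuristic described in the bullets above can be sketched roughly as follows. Type and function names here are illustrative, not the loader's actual API; the weights mirror the commit's description (category-name match at 4x, tag matches, usage statistics as a tiebreaker, top 5 plus gotchas.md):

```typescript
interface MemoryFileMeta {
  name: string;        // e.g. "terminals.md"
  tags: string[];      // keywords recorded for this file
  usefulCount: number; // times this file helped a past task
}

// Score one memory file against keywords drawn from the task title/description.
function scoreMemoryFile(file: MemoryFileMeta, keywords: string[]): number {
  let score = 0;
  const category = file.name.replace(/\.md$/, '').toLowerCase();
  for (const kw of keywords) {
    const k = kw.toLowerCase();
    if (category === k) score += 4; // category-name match carries 4x weight
    if (file.tags.some((t) => t.toLowerCase() === k)) score += 1; // tag match
  }
  return score + file.usefulCount * 0.5; // past usefulness nudges the ranking
}

// Keep the top 5 scoring files, always including gotchas.md if it exists.
function selectMemoryFiles(files: MemoryFileMeta[], keywords: string[]): string[] {
  const ranked = files
    .map((f) => ({ f, s: scoreMemoryFile(f, keywords) }))
    .sort((a, b) => b.s - a.s)
    .slice(0, 5)
    .map((x) => x.f.name);
  if (!ranked.includes('gotchas.md') && files.some((f) => f.name === 'gotchas.md')) {
    ranked.push('gotchas.md');
  }
  return ranked;
}
```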

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: enhance auto-mode service and context loader

- Improved context loading by adding task context for better memory selection.
- Updated JSON parsing logic to handle various formats and ensure robust error handling.
- Introduced file locking mechanisms to prevent race conditions during memory file updates.
- Enhanced metadata handling in memory files, including validation and sanitization.
- Refactored scoring logic for context files to improve selection accuracy based on task relevance.

Together, these changes make memory file updates safe under concurrent access and keep auto-mode context focused on files relevant to the current task.
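The file-locking bullet above can be illustrated with a minimal advisory-lock sketch: exclusively create a `.lock` file next to the target and retry while another writer holds it. This is a hedged example of the general technique; the commit's actual locking mechanism may differ:

```typescript
import { promises as fs } from 'fs';

// Run fn while holding an advisory lock on `path`. The lock is a sibling
// ".lock" file created with the "wx" flag, which fails if it already exists.
async function withFileLock<T>(path: string, fn: () => Promise<T>): Promise<T> {
  const lockPath = `${path}.lock`;
  for (let attempt = 0; attempt < 50; attempt++) {
    try {
      await fs.writeFile(lockPath, String(process.pid), { flag: 'wx' });
      try {
        return await fn();
      } finally {
        await fs.unlink(lockPath).catch(() => {}); // always release the lock
      }
    } catch (err: any) {
      if (err?.code !== 'EEXIST') throw err;        // real error, not contention
      await new Promise((r) => setTimeout(r, 100)); // lock held - wait and retry
    }
  }
  throw new Error(`Timed out waiting for lock on ${path}`);
}
```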

* refactor: enhance learning extraction and formatting in auto-mode service

- Improved the learning extraction process by refining the user prompt to focus on meaningful insights and structured JSON output.
- Updated the LearningEntry interface to include additional context fields for better documentation of decisions and patterns.
- Enhanced the formatLearning function to adopt an Architecture Decision Record (ADR) style, providing richer context for recorded learnings.
- Added detailed logging for better traceability during the learning extraction and appending processes.

These changes improve the quality and clarity of the learnings captured while auto mode runs.
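The interface fields below mirror what the diff later passes to `appendLearning` (category, type, content, context, why, rejected, tradeoffs, breaking); the ADR-style markdown rendering is a sketch of what `formatLearning` might produce, not the exact output:

```typescript
interface LearningEntry {
  category: string; // e.g. "architecture", "gotchas"
  type: 'decision' | 'learning' | 'pattern' | 'gotcha';
  content: string;    // what was done/learned
  context?: string;   // problem being solved or situation faced
  why?: string;       // reasoning - why this approach
  rejected?: string;  // alternative considered and why rejected
  tradeoffs?: string; // what became easier/harder
  breaking?: string;  // what breaks if this is changed/removed
}

// Render one entry in an ADR-like shape: heading plus labeled context lines.
function formatLearning(entry: LearningEntry): string {
  const lines = [`### [${entry.type}] ${entry.content}`];
  if (entry.context) lines.push(`**Context:** ${entry.context}`);
  if (entry.why) lines.push(`**Why:** ${entry.why}`);
  if (entry.rejected) lines.push(`**Rejected:** ${entry.rejected}`);
  if (entry.tradeoffs) lines.push(`**Trade-offs:** ${entry.tradeoffs}`);
  if (entry.breaking) lines.push(`**Breaking:** ${entry.breaking}`);
  return lines.join('\n');
}
```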

* feat: integrate stripProviderPrefix utility for model ID handling

- Added stripProviderPrefix utility to various routes to ensure providers receive bare model IDs.
- Updated model references in executeQuery calls across multiple files, enhancing consistency in model ID handling.
- Introduced memoryExtractionModel in settings for improved learning extraction tasks.

These changes ensure every provider call receives a bare model ID, keeping provider interactions consistent.
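A minimal sketch of what stripping a provider prefix involves, assuming IDs of the form `provider/model` (e.g. `cursor/gpt-4o`); the real `stripProviderPrefix` in `@automaker/types` may handle more cases:

```typescript
// Drop a leading "provider/" segment so providers receive a bare model ID.
// IDs without a slash pass through unchanged.
function stripProviderPrefix(model: string): string {
  const idx = model.indexOf('/');
  return idx === -1 ? model : model.slice(idx + 1);
}
```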

* feat: enhance error handling and server offline management in board actions

- handleRunFeature and handleStartImplementation now throw on failure so callers can handle errors (e.g., roll back optimistic updates).
- Integrated connection error detection and server offline handling, redirecting users to the login page when the server is unreachable.
- Updated follow-up feature logic to include rollback mechanisms and improved user feedback for error scenarios.

These changes make board actions more robust by surfacing errors to callers and degrading gracefully when the server is unreachable.
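The rollback flow described above can be sketched as an optimistic-update pattern (all names illustrative; the diff's handleStartImplementation follows this shape):

```typescript
// Optimistically mark a feature in progress, persist, then run the agent.
// If persist or run fails (e.g. server offline), roll the card back to backlog.
async function startWithRollback(
  feature: { id: string },
  update: (id: string, patch: object) => void,
  persist: (id: string, patch: object) => Promise<void>,
  run: () => Promise<void>
): Promise<boolean> {
  update(feature.id, { status: 'in_progress' }); // optimistic UI update
  try {
    await persist(feature.id, { status: 'in_progress' }); // must land before the agent starts
    await run();
    return true;
  } catch {
    // Roll back so the board reflects reality instead of a phantom run
    update(feature.id, { status: 'backlog', startedAt: undefined });
    return false;
  }
}
```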

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
SuperComboGamer
2026-01-09 15:11:59 -05:00
committed by GitHub
parent 7e768b6290
commit b2cf17b53b
20 changed files with 1535 additions and 160 deletions

View File

@@ -9,7 +9,7 @@ import { query } from '@anthropic-ai/claude-agent-sdk';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createFeatureGenerationOptions } from '../../lib/sdk-options.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
@@ -124,6 +124,8 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
logger.info('[FeatureGeneration] Using Cursor provider');
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Add explicit instructions for Cursor to return JSON in response
const cursorPrompt = `${prompt}
@@ -135,7 +137,7 @@ CRITICAL INSTRUCTIONS:
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],

View File

@@ -16,7 +16,7 @@ import {
type SpecOutput,
} from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSpecGenerationOptions } from '../../lib/sdk-options.js';
import { extractJson } from '../../lib/json-extractor.js';
@@ -118,6 +118,8 @@ ${getStructuredSpecPromptInstruction()}`;
logger.info('[SpecGeneration] Using Cursor provider');
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the JSON schema in the prompt with clear instructions
// to return JSON in the response (not write to a file)
@@ -134,7 +136,7 @@ Your entire response should be valid JSON starting with { and ending with }. No
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],

View File

@@ -7,7 +7,12 @@
import type { EventEmitter } from '../../lib/events.js';
import type { Feature, BacklogPlanResult, BacklogChange, DependencyUpdate } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, type ThinkingLevel } from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isCursorModel,
stripProviderPrefix,
type ThinkingLevel,
} from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { FeatureLoader } from '../../services/feature-loader.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
@@ -120,6 +125,8 @@ export async function generateBacklogPlan(
logger.info('[BacklogPlan] Using model:', effectiveModel);
const provider = ProviderFactory.getProviderForModel(effectiveModel);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(effectiveModel);
// Get autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
@@ -151,7 +158,7 @@ ${userPrompt}`;
// Execute the query
const stream = provider.executeQuery({
prompt: finalPrompt,
model: effectiveModel,
model: bareModel,
cwd: projectPath,
systemPrompt: finalSystemPrompt,
maxTurns: 1,

View File

@@ -13,7 +13,7 @@
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { PathNotAllowedError } from '@automaker/platform';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
@@ -198,6 +198,8 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
logger.info(`Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build a simple text prompt for Cursor (no multi-part content blocks)
const cursorPrompt = `${instructionText}\n\n--- FILE CONTENT ---\n${contentToAnalyze}`;
@@ -205,7 +207,7 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
let responseText = '';
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: [],

View File

@@ -14,7 +14,7 @@
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger, readImageAsBase64 } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import { ProviderFactory } from '../../../providers/provider-factory.js';
@@ -357,6 +357,8 @@ export function createDescribeImageHandler(
logger.info(`[${requestId}] Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build prompt with image reference for Cursor
// Note: Cursor CLI may not support base64 image blocks directly,
@@ -367,7 +369,7 @@ export function createDescribeImageHandler(
const queryStart = Date.now();
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: ['Read'], // Allow Read tool so Cursor can read the image if needed

View File

@@ -12,6 +12,7 @@ import { resolveModelString } from '@automaker/model-resolver';
import {
CLAUDE_MODEL_MAP,
isCursorModel,
stripProviderPrefix,
ThinkingLevel,
getThinkingTokenBudget,
} from '@automaker/types';
@@ -98,12 +99,14 @@ async function extractTextFromStream(
*/
async function executeWithCursor(prompt: string, model: string): Promise<string> {
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
let responseText = '';
for await (const msg of provider.executeQuery({
prompt,
model,
model: bareModel,
cwd: process.cwd(), // Enhancement doesn't need a specific working directory
readOnly: true, // Prompt enhancement only generates text, doesn't write files
})) {

View File

@@ -18,7 +18,7 @@ import type {
LinkedPRInfo,
ThinkingLevel,
} from '@automaker/types';
import { isCursorModel, DEFAULT_PHASE_MODELS } from '@automaker/types';
import { isCursorModel, DEFAULT_PHASE_MODELS, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSuggestionsOptions } from '../../../lib/sdk-options.js';
import { extractJson } from '../../../lib/json-extractor.js';
@@ -120,6 +120,8 @@ async function runValidation(
logger.info(`Using Cursor provider for validation with model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the system prompt and schema in the user prompt
const cursorPrompt = `${ISSUE_VALIDATION_SYSTEM_PROMPT}
@@ -137,7 +139,7 @@ ${prompt}`;
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
readOnly: true, // Issue validation only reads code, doesn't write
})) {

View File

@@ -8,7 +8,12 @@
import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, type ThinkingLevel } from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isCursorModel,
stripProviderPrefix,
type ThinkingLevel,
} from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSuggestionsOptions } from '../../lib/sdk-options.js';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
@@ -207,6 +212,8 @@ The response will be automatically formatted as structured JSON.`;
logger.info('[Suggestions] Using Cursor provider');
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the JSON schema in the prompt with clear instructions
const cursorPrompt = `${prompt}
@@ -222,7 +229,7 @@ Your entire response should be valid JSON starting with { and ending with }. No
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],

View File

@@ -274,10 +274,15 @@ export class AgentService {
? await getCustomSubagents(this.settingsService, effectiveWorkDir)
: undefined;
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.)
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) and memory files
// Use the user's message as task context for smart memory selection
const contextResult = await loadContextFiles({
projectPath: effectiveWorkDir,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: message.substring(0, 200), // Use first 200 chars as title
description: message,
},
});
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication

View File

@@ -14,17 +14,17 @@ import type {
ExecuteOptions,
Feature,
ModelProvider,
PipelineConfig,
PipelineStep,
ThinkingLevel,
PlanningMode,
} from '@automaker/types';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, stripProviderPrefix } from '@automaker/types';
import {
buildPromptWithImages,
isAbortError,
classifyError,
loadContextFiles,
appendLearning,
recordMemoryUsage,
createLogger,
} from '@automaker/utils';
@@ -322,6 +322,8 @@ export class AutoModeService {
projectPath,
});
// Note: Memory folder initialization is now handled by loadContextFiles
// Run the loop in the background
this.runAutoLoop().catch((error) => {
logger.error('Loop error:', error);
@@ -513,15 +515,21 @@ export class AutoModeService {
// Build the prompt - use continuation prompt if provided (for recovery after plan approval)
let prompt: string;
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) - passed as system prompt
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) and memory files
// Context loader uses task context to select relevant memory files
const contextResult = await loadContextFiles({
projectPath,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: feature.title ?? '',
description: feature.description ?? '',
},
});
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
// (SDK handles CLAUDE.md via settingSources), but keep other context files like CODE_QUALITY.md
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
// Note: contextResult.formattedPrompt now includes both context AND memory
const combinedSystemPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
if (options?.continuationPrompt) {
// Continuation prompt is used when recovering from a plan approval
@@ -574,7 +582,7 @@ export class AutoModeService {
projectPath,
planningMode: feature.planningMode,
requirePlanApproval: feature.requirePlanApproval,
systemPrompt: contextFilesPrompt || undefined,
systemPrompt: combinedSystemPrompt || undefined,
autoLoadClaudeMd,
thinkingLevel: feature.thinkingLevel,
}
@@ -606,6 +614,36 @@ export class AutoModeService {
// Record success to reset consecutive failure tracking
this.recordSuccess();
// Record learnings and memory usage after successful feature completion
try {
const featureDir = getFeatureDir(projectPath, featureId);
const outputPath = path.join(featureDir, 'agent-output.md');
let agentOutput = '';
try {
const outputContent = await secureFs.readFile(outputPath, 'utf-8');
agentOutput =
typeof outputContent === 'string' ? outputContent : outputContent.toString();
} catch {
// Agent output might not exist yet
}
// Record memory usage if we loaded any memory files
if (contextResult.memoryFiles.length > 0 && agentOutput) {
await recordMemoryUsage(
projectPath,
contextResult.memoryFiles,
agentOutput,
true, // success
secureFs as Parameters<typeof recordMemoryUsage>[4]
);
}
// Extract and record learnings from the agent output
await this.recordLearningsFromFeature(projectPath, feature, agentOutput);
} catch (learningError) {
console.warn('[AutoMode] Failed to record learnings:', learningError);
}
this.emitAutoModeEvent('auto_mode_feature_complete', {
featureId,
passes: true,
@@ -674,10 +712,14 @@ export class AutoModeService {
): Promise<void> {
logger.info(`Executing ${steps.length} pipeline step(s) for feature ${featureId}`);
// Load context files once
// Load context files once with feature context for smart memory selection
const contextResult = await loadContextFiles({
projectPath,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: feature.title ?? '',
description: feature.description ?? '',
},
});
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
@@ -910,6 +952,10 @@ Complete the pipeline step instructions above. Review the previous work and appl
const contextResult = await loadContextFiles({
projectPath,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: feature?.title ?? prompt.substring(0, 200),
description: feature?.description ?? prompt,
},
});
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
@@ -2103,7 +2149,12 @@ This mock response was generated because AUTOMAKER_MOCK_AGENT=true was set.
// Get provider for this model
const provider = ProviderFactory.getProviderForModel(finalModel);
logger.info(`Using provider "${provider.getName()}" for model "${finalModel}"`);
// Strip provider prefix - providers should receive bare model IDs
const bareModel = stripProviderPrefix(finalModel);
logger.info(
`Using provider "${provider.getName()}" for model "${finalModel}" (bare: ${bareModel})`
);
// Build prompt content with images using utility
const { content: promptContent } = await buildPromptWithImages(
@@ -2122,7 +2173,7 @@ This mock response was generated because AUTOMAKER_MOCK_AGENT=true was set.
const executeOptions: ExecuteOptions = {
prompt: promptContent,
model: finalModel,
model: bareModel,
maxTurns: maxTurns,
cwd: workDir,
allowedTools: allowedTools,
@@ -2427,7 +2478,7 @@ After generating the revised spec, output:
// Make revision call
const revisionStream = provider.executeQuery({
prompt: revisionPrompt,
model: finalModel,
model: bareModel,
maxTurns: maxTurns || 100,
cwd: workDir,
allowedTools: allowedTools,
@@ -2565,7 +2616,7 @@ After generating the revised spec, output:
// Execute task with dedicated agent
const taskStream = provider.executeQuery({
prompt: taskPrompt,
model: finalModel,
model: bareModel,
maxTurns: Math.min(maxTurns || 100, 50), // Limit turns per task
cwd: workDir,
allowedTools: allowedTools,
@@ -2653,7 +2704,7 @@ Implement all the changes described in the plan above.`;
const continuationStream = provider.executeQuery({
prompt: continuationPrompt,
model: finalModel,
model: bareModel,
maxTurns: maxTurns,
cwd: workDir,
allowedTools: allowedTools,
@@ -2898,4 +2949,207 @@ Begin implementing task ${task.id} now.`;
}
});
}
/**
* Extract and record learnings from a completed feature
* Uses a quick Claude call to identify important decisions and patterns
*/
private async recordLearningsFromFeature(
projectPath: string,
feature: Feature,
agentOutput: string
): Promise<void> {
if (!agentOutput || agentOutput.length < 100) {
// Not enough output to extract learnings from
console.log(
`[AutoMode] Skipping learning extraction - output too short (${agentOutput?.length || 0} chars)`
);
return;
}
console.log(
`[AutoMode] Extracting learnings from feature "${feature.title}" (${agentOutput.length} chars)`
);
// Limit output to avoid token limits
const truncatedOutput = agentOutput.length > 10000 ? agentOutput.slice(-10000) : agentOutput;
const userPrompt = `You are an Architecture Decision Record (ADR) extractor. Analyze this implementation and return ONLY JSON with learnings. No explanations.
Feature: "${feature.title}"
Implementation log:
${truncatedOutput}
Extract MEANINGFUL learnings - not obvious things. For each, capture:
- DECISIONS: Why this approach vs alternatives? What would break if changed?
- GOTCHAS: What was unexpected? What's the root cause? How to avoid?
- PATTERNS: Why this pattern? What problem does it solve? Trade-offs?
JSON format ONLY (no markdown, no text):
{"learnings": [{
"category": "architecture|api|ui|database|auth|testing|performance|security|gotchas",
"type": "decision|gotcha|pattern",
"content": "What was done/learned",
"context": "Problem being solved or situation faced",
"why": "Reasoning - why this approach",
"rejected": "Alternative considered and why rejected",
"tradeoffs": "What became easier/harder",
"breaking": "What breaks if this is changed/removed"
}]}
IMPORTANT: Only include NON-OBVIOUS learnings with real reasoning. Skip trivial patterns.
If nothing notable: {"learnings": []}`;
try {
// Import query dynamically to avoid circular dependencies
const { query } = await import('@anthropic-ai/claude-agent-sdk');
// Get model from phase settings
const settings = await this.settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.memoryExtractionModel || DEFAULT_PHASE_MODELS.memoryExtractionModel;
const { model } = resolvePhaseModel(phaseModelEntry);
const stream = query({
prompt: userPrompt,
options: {
model,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
systemPrompt:
'You are a JSON extraction assistant. You MUST respond with ONLY valid JSON, no explanations, no markdown, no other text. Extract learnings from the provided implementation context and return them as JSON.',
},
});
// Extract text from stream
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
console.log(`[AutoMode] Learning extraction response: ${responseText.length} chars`);
console.log(`[AutoMode] Response preview: ${responseText.substring(0, 300)}`);
// Parse the response - handle JSON in markdown code blocks or raw
let jsonStr: string | null = null;
// First try to find JSON in markdown code blocks
const codeBlockMatch = responseText.match(/```(?:json)?\s*(\{[\s\S]*?\})\s*```/);
if (codeBlockMatch) {
console.log('[AutoMode] Found JSON in code block');
jsonStr = codeBlockMatch[1];
} else {
// Fall back to finding balanced braces containing "learnings"
// Use a more precise approach: find the opening brace before "learnings"
const learningsIndex = responseText.indexOf('"learnings"');
if (learningsIndex !== -1) {
// Find the opening brace before "learnings"
let braceStart = responseText.lastIndexOf('{', learningsIndex);
if (braceStart !== -1) {
// Find matching closing brace
let braceCount = 0;
let braceEnd = -1;
for (let i = braceStart; i < responseText.length; i++) {
if (responseText[i] === '{') braceCount++;
if (responseText[i] === '}') braceCount--;
if (braceCount === 0) {
braceEnd = i;
break;
}
}
if (braceEnd !== -1) {
jsonStr = responseText.substring(braceStart, braceEnd + 1);
}
}
}
}
if (!jsonStr) {
console.log('[AutoMode] Could not extract JSON from response');
return;
}
console.log(`[AutoMode] Extracted JSON: ${jsonStr.substring(0, 200)}`);
let parsed: { learnings?: unknown[] };
try {
parsed = JSON.parse(jsonStr);
} catch {
console.warn('[AutoMode] Failed to parse learnings JSON:', jsonStr.substring(0, 200));
return;
}
if (!parsed.learnings || !Array.isArray(parsed.learnings)) {
console.log('[AutoMode] No learnings array in parsed response');
return;
}
console.log(`[AutoMode] Found ${parsed.learnings.length} potential learnings`);
// Valid learning types
const validTypes = new Set(['decision', 'learning', 'pattern', 'gotcha']);
// Record each learning
for (const item of parsed.learnings) {
// Validate required fields with proper type narrowing
if (!item || typeof item !== 'object') continue;
const learning = item as Record<string, unknown>;
if (
!learning.category ||
typeof learning.category !== 'string' ||
!learning.content ||
typeof learning.content !== 'string' ||
!learning.content.trim()
) {
continue;
}
// Validate and normalize type
const typeStr = typeof learning.type === 'string' ? learning.type : 'learning';
const learningType = validTypes.has(typeStr)
? (typeStr as 'decision' | 'learning' | 'pattern' | 'gotcha')
: 'learning';
console.log(
`[AutoMode] Appending learning: category=${learning.category}, type=${learningType}`
);
await appendLearning(
projectPath,
{
category: learning.category,
type: learningType,
content: learning.content.trim(),
context: typeof learning.context === 'string' ? learning.context : undefined,
why: typeof learning.why === 'string' ? learning.why : undefined,
rejected: typeof learning.rejected === 'string' ? learning.rejected : undefined,
tradeoffs: typeof learning.tradeoffs === 'string' ? learning.tradeoffs : undefined,
breaking: typeof learning.breaking === 'string' ? learning.breaking : undefined,
},
secureFs as Parameters<typeof appendLearning>[2]
);
}
const validLearnings = parsed.learnings.filter(
(l) => l && typeof l === 'object' && (l as Record<string, unknown>).content
);
if (validLearnings.length > 0) {
console.log(
`[AutoMode] Recorded ${parsed.learnings.length} learning(s) from feature ${feature.id}`
);
}
} catch (error) {
console.warn(`[AutoMode] Failed to extract learnings from feature ${feature.id}:`, error);
}
}
}

View File

@@ -18,57 +18,64 @@ export function AutomakerLogo({ sidebarOpen, navigate }: AutomakerLogoProps) {
onClick={() => navigate({ to: '/' })}
data-testid="logo-button"
>
{!sidebarOpen ? (
<div className="relative flex flex-col items-center justify-center rounded-lg gap-0.5">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 256 256"
role="img"
aria-label="Automaker Logo"
className="size-8 group-hover:rotate-12 transition-transform duration-300 ease-out"
>
<defs>
<linearGradient
id="bg-collapsed"
x1="0"
y1="0"
x2="256"
y2="256"
gradientUnits="userSpaceOnUse"
>
<stop offset="0%" style={{ stopColor: 'var(--brand-400)' }} />
<stop offset="100%" style={{ stopColor: 'var(--brand-600)' }} />
</linearGradient>
<filter id="iconShadow-collapsed" x="-20%" y="-20%" width="140%" height="140%">
<feDropShadow
dx="0"
dy="4"
stdDeviation="4"
floodColor="#000000"
floodOpacity="0.25"
/>
</filter>
</defs>
<rect x="16" y="16" width="224" height="224" rx="56" fill="url(#bg-collapsed)" />
<g
fill="none"
stroke="#FFFFFF"
strokeWidth="20"
strokeLinecap="round"
strokeLinejoin="round"
filter="url(#iconShadow-collapsed)"
{/* Collapsed logo - shown when sidebar is closed OR on small screens when sidebar is open */}
<div
className={cn(
'relative flex flex-col items-center justify-center rounded-lg gap-0.5',
sidebarOpen ? 'flex lg:hidden' : 'flex'
)}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 256 256"
role="img"
aria-label="Automaker Logo"
className="size-8 group-hover:rotate-12 transition-transform duration-300 ease-out"
>
<defs>
<linearGradient
id="bg-collapsed"
x1="0"
y1="0"
x2="256"
y2="256"
gradientUnits="userSpaceOnUse"
>
<path d="M92 92 L52 128 L92 164" />
<path d="M144 72 L116 184" />
<path d="M164 92 L204 128 L164 164" />
</g>
</svg>
<span className="text-[0.625rem] text-muted-foreground leading-none font-medium">
v{appVersion}
</span>
</div>
) : (
<div className={cn('flex flex-col', 'hidden lg:flex')}>
<stop offset="0%" style={{ stopColor: 'var(--brand-400)' }} />
<stop offset="100%" style={{ stopColor: 'var(--brand-600)' }} />
</linearGradient>
<filter id="iconShadow-collapsed" x="-20%" y="-20%" width="140%" height="140%">
<feDropShadow
dx="0"
dy="4"
stdDeviation="4"
floodColor="#000000"
floodOpacity="0.25"
/>
</filter>
</defs>
<rect x="16" y="16" width="224" height="224" rx="56" fill="url(#bg-collapsed)" />
<g
fill="none"
stroke="#FFFFFF"
strokeWidth="20"
strokeLinecap="round"
strokeLinejoin="round"
filter="url(#iconShadow-collapsed)"
>
<path d="M92 92 L52 128 L92 164" />
<path d="M144 72 L116 184" />
<path d="M164 92 L204 128 L164 164" />
</g>
</svg>
<span className="text-[0.625rem] text-muted-foreground leading-none font-medium">
v{appVersion}
</span>
</div>
{/* Expanded logo - only shown when sidebar is open on large screens */}
{sidebarOpen && (
<div className="hidden lg:flex flex-col">
<div className="flex items-center gap-1">
<svg
xmlns="http://www.w3.org/2000/svg"

View File

@@ -21,8 +21,10 @@ export function SidebarHeader({ sidebarOpen, navigate }: SidebarHeaderProps) {
'bg-gradient-to-b from-transparent to-background/5',
'flex items-center',
sidebarOpen ? 'px-3 lg:px-5 justify-start' : 'px-3 justify-center',
// Add left padding on macOS to avoid overlapping with traffic light buttons
isMac && 'pt-4 pl-20'
// Add left padding on macOS to avoid overlapping with traffic light buttons (only when expanded)
isMac && sidebarOpen && 'pt-4 pl-20',
// Smaller top padding on macOS when collapsed
isMac && !sidebarOpen && 'pt-4'
)}
>
<AutomakerLogo sidebarOpen={sidebarOpen} navigate={navigate} />

View File

@@ -11,6 +11,7 @@ import {
import type { ReasoningEffort } from '@automaker/types';
import { FeatureImagePath as DescriptionImagePath } from '@/components/ui/description-image-dropzone';
import { getElectronAPI } from '@/lib/electron';
import { isConnectionError, handleServerOffline } from '@/lib/http-api-client';
import { toast } from 'sonner';
import { useAutoMode } from '@/hooks/use-auto-mode';
import { truncateDescription } from '@/lib/utils';
@@ -337,35 +338,31 @@ export function useBoardActions({
const handleRunFeature = useCallback(
async (feature: Feature) => {
if (!currentProject) return;
if (!currentProject) {
throw new Error('No project selected');
}
try {
const api = getElectronAPI();
if (!api?.autoMode) {
logger.error('Auto mode API not available');
return;
}
const api = getElectronAPI();
if (!api?.autoMode) {
throw new Error('Auto mode API not available');
}
// Server derives workDir from feature.branchName at execution time
const result = await api.autoMode.runFeature(
currentProject.path,
feature.id,
useWorktrees
// No worktreePath - server derives from feature.branchName
);
// Server derives workDir from feature.branchName at execution time
const result = await api.autoMode.runFeature(
currentProject.path,
feature.id,
useWorktrees
// No worktreePath - server derives from feature.branchName
);
if (result.success) {
logger.info('Feature run started successfully, branch:', feature.branchName || 'default');
} else {
logger.error('Failed to run feature:', result.error);
await loadFeatures();
}
} catch (error) {
logger.error('Error running feature:', error);
await loadFeatures();
if (result.success) {
logger.info('Feature run started successfully, branch:', feature.branchName || 'default');
} else {
// Throw error so caller can handle rollback
throw new Error(result.error || 'Failed to start feature');
}
},
[currentProject, useWorktrees, loadFeatures]
[currentProject, useWorktrees]
);
const handleStartImplementation = useCallback(
@@ -401,11 +398,34 @@ export function useBoardActions({
         startedAt: new Date().toISOString(),
       };
       updateFeature(feature.id, updates);
-      // Must await to ensure feature status is persisted before starting agent
-      await persistFeatureUpdate(feature.id, updates);
-      logger.info('Feature moved to in_progress, starting agent...');
-      await handleRunFeature(feature);
-      return true;
+      try {
+        // Must await to ensure feature status is persisted before starting agent
+        await persistFeatureUpdate(feature.id, updates);
+        logger.info('Feature moved to in_progress, starting agent...');
+        await handleRunFeature(feature);
+        return true;
+      } catch (error) {
+        // Rollback to backlog if persist or run fails (e.g., server offline)
+        logger.error('Failed to start feature, rolling back to backlog:', error);
+        const rollbackUpdates = {
+          status: 'backlog' as const,
+          startedAt: undefined,
+        };
+        updateFeature(feature.id, rollbackUpdates);
+        // If server is offline (connection refused), redirect to login page
+        if (isConnectionError(error)) {
+          handleServerOffline();
+          return false;
+        }
+        toast.error('Failed to start feature', {
+          description:
+            error instanceof Error ? error.message : 'Server may be offline. Please try again.',
+        });
+        return false;
+      }
     },
     [
       autoMode,
@@ -531,6 +551,7 @@ export function useBoardActions({
     const featureId = followUpFeature.id;
     const featureDescription = followUpFeature.description;
+    const previousStatus = followUpFeature.status;
     const api = getElectronAPI();
     if (!api?.autoMode?.followUpFeature) {
@@ -547,35 +568,53 @@ export function useBoardActions({
       justFinishedAt: undefined,
     };
     updateFeature(featureId, updates);
-    persistFeatureUpdate(featureId, updates);
-    setShowFollowUpDialog(false);
-    setFollowUpFeature(null);
-    setFollowUpPrompt('');
-    setFollowUpImagePaths([]);
-    setFollowUpPreviewMap(new Map());
-    toast.success('Follow-up started', {
-      description: `Continuing work on: ${truncateDescription(featureDescription)}`,
-    });
-    const imagePaths = followUpImagePaths.map((img) => img.path);
-    // Server derives workDir from feature.branchName at execution time
-    api.autoMode
-      .followUpFeature(
+    try {
+      await persistFeatureUpdate(featureId, updates);
+      toast.success('Follow-up started', {
+        description: `Continuing work on: ${truncateDescription(featureDescription)}`,
+      });
+      setShowFollowUpDialog(false);
+      setFollowUpFeature(null);
+      setFollowUpPrompt('');
+      setFollowUpImagePaths([]);
+      setFollowUpPreviewMap(new Map());
+      const imagePaths = followUpImagePaths.map((img) => img.path);
+      // Server derives workDir from feature.branchName at execution time
+      const result = await api.autoMode.followUpFeature(
         currentProject.path,
         followUpFeature.id,
         followUpPrompt,
        imagePaths
         // No worktreePath - server derives from feature.branchName
-      )
-      .catch((error) => {
-        logger.error('Error sending follow-up:', error);
-        toast.error('Failed to send follow-up', {
-          description: error instanceof Error ? error.message : 'An error occurred',
-        });
-        loadFeatures();
-      });
+      );
+      if (!result.success) {
+        throw new Error(result.error || 'Failed to send follow-up');
+      }
+    } catch (error) {
+      // Rollback to previous status if follow-up fails
+      logger.error('Error sending follow-up, rolling back:', error);
+      const rollbackUpdates = {
+        status: previousStatus as 'backlog' | 'in_progress' | 'waiting_approval' | 'verified',
+        startedAt: undefined,
+      };
+      updateFeature(featureId, rollbackUpdates);
+      // If server is offline (connection refused), redirect to login page
+      if (isConnectionError(error)) {
+        handleServerOffline();
+        return;
+      }
+      toast.error('Failed to send follow-up', {
+        description:
+          error instanceof Error ? error.message : 'Server may be offline. Please try again.',
+      });
+    }
   }, [
     currentProject,
     followUpFeature,
@@ -588,7 +627,6 @@ export function useBoardActions({
     setFollowUpPrompt,
     setFollowUpImagePaths,
     setFollowUpPreviewMap,
-    loadFeatures,
   ]);
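
Both callbacks above follow the same optimistic-update-with-rollback shape: flip the feature's status in local state immediately, attempt the persist/run on the server, and restore the previous state if anything throws. A minimal standalone sketch of that pattern (the `store` map and `startFeature` helper here are hypothetical stand-ins for the Zustand-style `updateFeature`/`persistFeatureUpdate` pair, not the PR's actual API):

```typescript
// Hypothetical sketch: optimistic status update with rollback on failure.
type Status = 'backlog' | 'in_progress';

interface FeatureState {
  status: Status;
  startedAt?: string;
}

// Stand-in for the client-side feature store.
const store = new Map<string, FeatureState>();

async function startFeature(
  id: string,
  persist: (id: string, s: FeatureState) => Promise<void>
): Promise<boolean> {
  const previous = store.get(id) ?? { status: 'backlog' as Status };
  const optimistic: FeatureState = {
    status: 'in_progress',
    startedAt: new Date().toISOString(),
  };
  store.set(id, optimistic); // UI moves the card immediately
  try {
    await persist(id, optimistic); // server call may throw (e.g. offline)
    return true;
  } catch {
    store.set(id, previous); // roll back so the card returns to its old column
    return false;
  }
}
```

The key point mirrored from the diff: `handleRunFeature` now throws instead of logging, so the caller owns the rollback and can branch on `isConnectionError` before showing a toast.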
const handleCommitFeature = useCallback(

View File

@@ -66,6 +66,14 @@ const GENERATION_TASKS: PhaseConfig[] = [
   },
 ];
+const MEMORY_TASKS: PhaseConfig[] = [
+  {
+    key: 'memoryExtractionModel',
+    label: 'Memory Extraction',
+    description: 'Extracts learnings from completed agent sessions',
+  },
+];
 function PhaseGroup({
   title,
   subtitle,
@@ -155,6 +163,13 @@ export function ModelDefaultsSection() {
           subtitle="Powerful models recommended for quality output"
           phases={GENERATION_TASKS}
         />
+        {/* Memory Tasks */}
+        <PhaseGroup
+          title="Memory Tasks"
+          subtitle="Fast models recommended for learning extraction"
+          phases={MEMORY_TASKS}
+        />
       </div>
     </div>
   );

View File

@@ -73,6 +73,55 @@ const handleUnauthorized = (): void => {
   notifyLoggedOut();
 };
+/**
+ * Notify the UI that the server is offline/unreachable.
+ * Used to redirect the user to the login page which will show server unavailable.
+ */
+const notifyServerOffline = (): void => {
+  if (typeof window === 'undefined') return;
+  try {
+    window.dispatchEvent(new CustomEvent('automaker:server-offline'));
+  } catch {
+    // Ignore
+  }
+};
+/**
+ * Check if an error is a connection error (server offline/unreachable).
+ * These are typically TypeError with 'Failed to fetch' or similar network errors.
+ */
+export const isConnectionError = (error: unknown): boolean => {
+  if (error instanceof TypeError) {
+    const message = error.message.toLowerCase();
+    return (
+      message.includes('failed to fetch') ||
+      message.includes('network') ||
+      message.includes('econnrefused') ||
+      message.includes('connection refused')
+    );
+  }
+  // Check for error objects with message property
+  if (error && typeof error === 'object' && 'message' in error) {
+    const message = String((error as { message: unknown }).message).toLowerCase();
+    return (
+      message.includes('failed to fetch') ||
+      message.includes('network') ||
+      message.includes('econnrefused') ||
+      message.includes('connection refused')
+    );
+  }
+  return false;
+};
+/**
+ * Handle a server offline error by notifying the UI to redirect.
+ * Call this when a connection error is detected.
+ */
+export const handleServerOffline = (): void => {
+  logger.error('Server appears to be offline, redirecting to login...');
+  notifyServerOffline();
+};
 /**
  * Initialize server URL from Electron IPC.
  * Must be called early in Electron mode before making API requests.
View File

@@ -229,6 +229,25 @@ function RootLayoutContent() {
     };
   }, [location.pathname, navigate]);
+  // Global listener for server offline/connection errors.
+  // This is triggered when a connection error is detected (e.g., server stopped).
+  // Redirects to login page which will detect server is offline and show error UI.
+  useEffect(() => {
+    const handleServerOffline = () => {
+      useAuthStore.getState().setAuthState({ isAuthenticated: false, authChecked: true });
+      // Navigate to login - the login page will detect server is offline and show appropriate UI
+      if (location.pathname !== '/login' && location.pathname !== '/logged-out') {
+        navigate({ to: '/login' });
+      }
+    };
+    window.addEventListener('automaker:server-offline', handleServerOffline);
+    return () => {
+      window.removeEventListener('automaker:server-offline', handleServerOffline);
+    };
+  }, [location.pathname, navigate]);
   // Initialize authentication
   // - Electron mode: Uses API key from IPC (header-based auth)
   // - Web mode: Uses HTTP-only session cookie
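
The two halves above form a small event bus over `window`: the API client dispatches `automaker:server-offline` when it detects a connection error, and the root layout's `useEffect` subscribes, resets auth state, and navigates. The round trip can be sketched outside the DOM using Node's `EventTarget` as a stand-in for `window` (an assumption for this sketch; the app itself dispatches a `CustomEvent` on `window`, and a plain `Event` suffices here since no detail payload is attached):

```typescript
// `bus` stands in for `window`; EventTarget is a Node global (v15+).
const bus = new EventTarget();

let redirects = 0;
const handleServerOffline = () => {
  // In the app: useAuthStore reset + navigate({ to: '/login' }).
  redirects += 1;
};

// Consumer side: what the useEffect does on mount.
bus.addEventListener('automaker:server-offline', handleServerOffline);

// Producer side: what the API client does when a request fails.
bus.dispatchEvent(new Event('automaker:server-offline'));

// Teardown, mirroring the useEffect cleanup so listeners do not leak
// across re-renders; later dispatches are now ignored.
bus.removeEventListener('automaker:server-offline', handleServerOffline);
bus.dispatchEvent(new Event('automaker:server-offline'));
```

Decoupling via an event keeps the API client free of router and store imports, which is why the redirect logic lives in the layout rather than in `handleServerOffline` itself.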