feat: add project-scoped agent memory system (#351)

* memory

* feat: add smart memory selection with task context

- Add taskContext parameter to loadContextFiles for intelligent file selection
- Memory files are scored based on tag matching with task keywords
- Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
- Usage statistics influence scoring (files that helped before rank higher)
- Limit to top 5 files + always include gotchas.md
- Auto-mode passes feature title/description as context
- Chat sessions pass user message as context

This prevents loading 40+ memory files and exhausting the context window (a sketch of the scoring heuristic follows below).
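
A minimal sketch of that scoring heuristic, assuming the helpers exported from the utils package in this PR (`extractTerms`, `countMatches`, `calculateUsageScore`); the weights mirror the bullets above, and the real logic lives in `context-loader.ts` further down in this diff:

```typescript
// Minimal sketch of the memory scoring heuristic, assuming the helpers exported
// from the utils package in this PR (extractTerms, countMatches, calculateUsageScore).
import {
  extractTerms,
  countMatches,
  calculateUsageScore,
  type MemoryMetadata,
} from '@automaker/utils';

function scoreMemoryFile(fileName: string, metadata: MemoryMetadata, taskTerms: string[]): number {
  // Tag matches weigh 3x, relevantTo 2x, summary terms 1x.
  const tagScore = countMatches(metadata.tags, taskTerms) * 3;
  const relevantToScore = countMatches(metadata.relevantTo, taskTerms) * 2;
  const summaryScore = countMatches(extractTerms(metadata.summary), taskTerms);

  // Category-name matching: "terminals" in the task matches terminals.md (4x weight).
  const categoryTerms = fileName
    .replace('.md', '')
    .split(/[-_]/)
    .filter((t) => t.length > 2);
  const categoryScore = countMatches(categoryTerms, taskTerms) * 4;

  // Importance and past usefulness act as multipliers.
  return (
    (tagScore + relevantToScore + summaryScore + categoryScore) *
    metadata.importance *
    calculateUsageScore(metadata.usageStats)
  );
}

// Callers sort descending by score, keep the top 5 files, and always include gotchas.md.
```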

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: enhance auto-mode service and context loader

- Improved context loading by adding task context for better memory selection.
- Updated JSON parsing to handle responses wrapped in markdown code blocks as well as raw JSON, with more robust error handling.
- Introduced a file locking mechanism to prevent race conditions during memory file updates (sketched below).
- Enhanced metadata handling in memory files, including validation and sanitization.
- Refactored scoring logic for context files to improve selection accuracy based on task relevance.

These changes make memory file management safer and more reliable in the auto-mode service.
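
An illustrative sketch of the file-locking idea: the PR's `withFileLock` (in `memory-loader.ts`, shown later in this diff) keeps an in-memory map of per-path lock promises; the simplified version below chains operations on the same path so concurrent memory-file updates run one at a time. It is a sketch of the approach, not the exact implementation:

```typescript
// Illustrative sketch: chain async operations per file path so concurrent
// memory-file updates run one at a time (the PR's withFileLock in memory-loader.ts
// achieves the same goal with an in-memory Map of lock promises).
const fileLocks = new Map<string, Promise<unknown>>();

function withFileLock<T>(filePath: string, operation: () => Promise<T>): Promise<T> {
  // Queue this operation behind whatever is already pending for this path.
  const previous = fileLocks.get(filePath) ?? Promise.resolve();
  const next = previous.then(operation, operation);
  // Store a failure-tolerant handle so a rejected update does not block later callers.
  fileLocks.set(filePath, next.catch(() => undefined));
  return next;
}

// Usage: await withFileLock(memoryFilePath, async () => { /* read, modify, write */ });
```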

* refactor: enhance learning extraction and formatting in auto-mode service

- Improved the learning extraction process by refining the user prompt to focus on meaningful insights and structured JSON output.
- Updated the LearningEntry interface to include additional context fields for better documentation of decisions and patterns.
- Enhanced the formatLearning function to adopt an Architecture Decision Record (ADR) style, providing richer context for recorded learnings.
- Added detailed logging for better traceability during the learning extraction and appending processes.

These changes improve the quality and clarity of the learnings captured during auto-mode runs (an ADR-style rendering sketch follows below).
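
For illustration, here is how an ADR-style entry could be rendered from the `LearningEntry` interface added in `memory-loader.ts` (shown later in this diff). The actual `formatLearning` implementation is not part of this diff and may render entries differently:

```typescript
// Illustrative ADR-style rendering of a LearningEntry (the interface added in
// memory-loader.ts, shown later in this diff). The real formatLearning
// implementation is not part of this diff and may render entries differently.
import type { LearningEntry } from '@automaker/utils';

function renderAdrStyleLearning(entry: LearningEntry): string {
  const lines: Array<string | null> = [
    `### ${entry.type.toUpperCase()}: ${entry.content}`,
    entry.context ? `- Context: ${entry.context}` : null,
    entry.why ? `- Why: ${entry.why}` : null,
    entry.rejected ? `- Rejected alternative: ${entry.rejected}` : null,
    entry.tradeoffs ? `- Trade-offs: ${entry.tradeoffs}` : null,
    entry.breaking ? `- Breaking if changed: ${entry.breaking}` : null,
  ];
  return lines.filter((line): line is string => line !== null).join('\n');
}
```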

* feat: integrate stripProviderPrefix utility for model ID handling

- Added stripProviderPrefix utility to various routes to ensure providers receive bare model IDs.
- Updated executeQuery calls across multiple routes to pass the stripped model ID, keeping model ID handling consistent.
- Introduced memoryExtractionModel in settings for improved learning extraction tasks.

This keeps model ID handling uniform across provider interactions (see the usage sketch below).
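
The call pattern repeated across the touched routes looks roughly like this; the function name and the shape of the prefixed model value are assumptions for illustration, while the `ProviderFactory` / `stripProviderPrefix` usage and the `executeQuery` options are taken from the diff:

```typescript
// The call pattern repeated across the touched routes. The function name is
// illustrative; the ProviderFactory / stripProviderPrefix usage and the
// executeQuery options are taken from this diff.
import { stripProviderPrefix } from '@automaker/types';
import { ProviderFactory } from '../../providers/provider-factory.js';

async function queryWithBareModelId(prompt: string, model: string, projectPath: string) {
  const provider = ProviderFactory.getProviderForModel(model); // lookup still uses the full ID
  // Strip provider prefix - providers expect bare model IDs
  const bareModel = stripProviderPrefix(model);

  for await (const msg of provider.executeQuery({
    prompt,
    model: bareModel, // previously `model` was passed with the prefix still attached
    cwd: projectPath,
    maxTurns: 1,
    allowedTools: [],
  })) {
    void msg; // stream consumption elided; callers accumulate assistant text here
  }
}
```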

* feat: enhance error handling and server offline management in board actions

- handleRunFeature and handleStartImplementation now throw on failure so callers can handle errors and roll back state.
- Integrated connection error detection and server offline handling, redirecting users to the login page when the server is unreachable.
- Updated follow-up feature logic to include rollback mechanisms and improved user feedback for error scenarios.

These changes make board actions more robust during server connectivity issues, with proper error handling and user feedback (see the rollback sketch below).
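
A hypothetical wrapper illustrating the rollback-plus-offline-redirect pattern used in `use-board-actions`; `isConnectionError` and `handleServerOffline` are the helpers added to `http-api-client` in this PR:

```typescript
// Hypothetical wrapper showing the rollback + offline-redirect pattern used in
// use-board-actions; isConnectionError and handleServerOffline are the helpers
// added to http-api-client in this PR.
import { isConnectionError, handleServerOffline } from '@/lib/http-api-client';
import { toast } from 'sonner';

async function startWithRollback(start: () => Promise<void>, rollback: () => void): Promise<boolean> {
  try {
    await start(); // persist the status change and ask the server to run the feature (may throw)
    return true;
  } catch (error) {
    rollback(); // restore the feature to its previous column
    if (isConnectionError(error)) {
      handleServerOffline(); // dispatches 'automaker:server-offline' so the app redirects to /login
      return false;
    }
    toast.error('Failed to start feature', {
      description: error instanceof Error ? error.message : 'Server may be offline. Please try again.',
    });
    return false;
  }
}
```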

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
SuperComboGamer
2026-01-09 15:11:59 -05:00
committed by GitHub
parent 7e768b6290
commit b2cf17b53b
20 changed files with 1535 additions and 160 deletions

View File

@@ -9,7 +9,7 @@ import { query } from '@anthropic-ai/claude-agent-sdk';
import * as secureFs from '../../lib/secure-fs.js';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createFeatureGenerationOptions } from '../../lib/sdk-options.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
@@ -124,6 +124,8 @@ IMPORTANT: Do not ask for clarification. The specification is provided above. Ge
logger.info('[FeatureGeneration] Using Cursor provider');
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Add explicit instructions for Cursor to return JSON in response
const cursorPrompt = `${prompt}
@@ -135,7 +137,7 @@ CRITICAL INSTRUCTIONS:
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],

View File

@@ -16,7 +16,7 @@ import {
type SpecOutput,
} from '../../lib/app-spec-format.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSpecGenerationOptions } from '../../lib/sdk-options.js';
import { extractJson } from '../../lib/json-extractor.js';
@@ -118,6 +118,8 @@ ${getStructuredSpecPromptInstruction()}`;
logger.info('[SpecGeneration] Using Cursor provider');
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the JSON schema in the prompt with clear instructions
// to return JSON in the response (not write to a file)
@@ -134,7 +136,7 @@ Your entire response should be valid JSON starting with { and ending with }. No
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],

View File

@@ -7,7 +7,12 @@
import type { EventEmitter } from '../../lib/events.js';
import type { Feature, BacklogPlanResult, BacklogChange, DependencyUpdate } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, type ThinkingLevel } from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isCursorModel,
stripProviderPrefix,
type ThinkingLevel,
} from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { FeatureLoader } from '../../services/feature-loader.js';
import { ProviderFactory } from '../../providers/provider-factory.js';
@@ -120,6 +125,8 @@ export async function generateBacklogPlan(
logger.info('[BacklogPlan] Using model:', effectiveModel);
const provider = ProviderFactory.getProviderForModel(effectiveModel);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(effectiveModel);
// Get autoLoadClaudeMd setting
const autoLoadClaudeMd = await getAutoLoadClaudeMdSetting(
@@ -151,7 +158,7 @@ ${userPrompt}`;
// Execute the query
const stream = provider.executeQuery({
prompt: finalPrompt,
model: effectiveModel,
model: bareModel,
cwd: projectPath,
systemPrompt: finalSystemPrompt,
maxTurns: 1,

View File

@@ -13,7 +13,7 @@
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { PathNotAllowedError } from '@automaker/platform';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
@@ -198,6 +198,8 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
logger.info(`Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build a simple text prompt for Cursor (no multi-part content blocks)
const cursorPrompt = `${instructionText}\n\n--- FILE CONTENT ---\n${contentToAnalyze}`;
@@ -205,7 +207,7 @@ File: ${fileName}${truncated ? ' (truncated)' : ''}`;
let responseText = '';
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: [],

View File

@@ -14,7 +14,7 @@
import type { Request, Response } from 'express';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { createLogger, readImageAsBase64 } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, isCursorModel, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createCustomOptions } from '../../../lib/sdk-options.js';
import { ProviderFactory } from '../../../providers/provider-factory.js';
@@ -357,6 +357,8 @@ export function createDescribeImageHandler(
logger.info(`[${requestId}] Using Cursor provider for model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// Build prompt with image reference for Cursor
// Note: Cursor CLI may not support base64 image blocks directly,
@@ -367,7 +369,7 @@ export function createDescribeImageHandler(
const queryStart = Date.now();
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd,
maxTurns: 1,
allowedTools: ['Read'], // Allow Read tool so Cursor can read the image if needed

View File

@@ -12,6 +12,7 @@ import { resolveModelString } from '@automaker/model-resolver';
import {
CLAUDE_MODEL_MAP,
isCursorModel,
stripProviderPrefix,
ThinkingLevel,
getThinkingTokenBudget,
} from '@automaker/types';
@@ -98,12 +99,14 @@ async function extractTextFromStream(
*/
async function executeWithCursor(prompt: string, model: string): Promise<string> {
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
let responseText = '';
for await (const msg of provider.executeQuery({
prompt,
model,
model: bareModel,
cwd: process.cwd(), // Enhancement doesn't need a specific working directory
readOnly: true, // Prompt enhancement only generates text, doesn't write files
})) {

View File

@@ -18,7 +18,7 @@ import type {
LinkedPRInfo,
ThinkingLevel,
} from '@automaker/types';
import { isCursorModel, DEFAULT_PHASE_MODELS } from '@automaker/types';
import { isCursorModel, DEFAULT_PHASE_MODELS, stripProviderPrefix } from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSuggestionsOptions } from '../../../lib/sdk-options.js';
import { extractJson } from '../../../lib/json-extractor.js';
@@ -120,6 +120,8 @@ async function runValidation(
logger.info(`Using Cursor provider for validation with model: ${model}`);
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the system prompt and schema in the user prompt
const cursorPrompt = `${ISSUE_VALIDATION_SYSTEM_PROMPT}
@@ -137,7 +139,7 @@ ${prompt}`;
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
readOnly: true, // Issue validation only reads code, doesn't write
})) {

View File

@@ -8,7 +8,12 @@
import { query } from '@anthropic-ai/claude-agent-sdk';
import type { EventEmitter } from '../../lib/events.js';
import { createLogger } from '@automaker/utils';
import { DEFAULT_PHASE_MODELS, isCursorModel, type ThinkingLevel } from '@automaker/types';
import {
DEFAULT_PHASE_MODELS,
isCursorModel,
stripProviderPrefix,
type ThinkingLevel,
} from '@automaker/types';
import { resolvePhaseModel } from '@automaker/model-resolver';
import { createSuggestionsOptions } from '../../lib/sdk-options.js';
import { extractJsonWithArray } from '../../lib/json-extractor.js';
@@ -207,6 +212,8 @@ The response will be automatically formatted as structured JSON.`;
logger.info('[Suggestions] Using Cursor provider');
const provider = ProviderFactory.getProviderForModel(model);
// Strip provider prefix - providers expect bare model IDs
const bareModel = stripProviderPrefix(model);
// For Cursor, include the JSON schema in the prompt with clear instructions
const cursorPrompt = `${prompt}
@@ -222,7 +229,7 @@ Your entire response should be valid JSON starting with { and ending with }. No
for await (const msg of provider.executeQuery({
prompt: cursorPrompt,
model,
model: bareModel,
cwd: projectPath,
maxTurns: 250,
allowedTools: ['Read', 'Glob', 'Grep'],

View File

@@ -274,10 +274,15 @@ export class AgentService {
? await getCustomSubagents(this.settingsService, effectiveWorkDir)
: undefined;
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.)
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) and memory files
// Use the user's message as task context for smart memory selection
const contextResult = await loadContextFiles({
projectPath: effectiveWorkDir,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: message.substring(0, 200), // Use first 200 chars as title
description: message,
},
});
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication

View File

@@ -14,17 +14,17 @@ import type {
ExecuteOptions,
Feature,
ModelProvider,
PipelineConfig,
PipelineStep,
ThinkingLevel,
PlanningMode,
} from '@automaker/types';
import { DEFAULT_PHASE_MODELS } from '@automaker/types';
import { DEFAULT_PHASE_MODELS, stripProviderPrefix } from '@automaker/types';
import {
buildPromptWithImages,
isAbortError,
classifyError,
loadContextFiles,
appendLearning,
recordMemoryUsage,
createLogger,
} from '@automaker/utils';
@@ -322,6 +322,8 @@ export class AutoModeService {
projectPath,
});
// Note: Memory folder initialization is now handled by loadContextFiles
// Run the loop in the background
this.runAutoLoop().catch((error) => {
logger.error('Loop error:', error);
@@ -513,15 +515,21 @@ export class AutoModeService {
// Build the prompt - use continuation prompt if provided (for recovery after plan approval)
let prompt: string;
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) - passed as system prompt
// Load project context files (CLAUDE.md, CODE_QUALITY.md, etc.) and memory files
// Context loader uses task context to select relevant memory files
const contextResult = await loadContextFiles({
projectPath,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: feature.title ?? '',
description: feature.description ?? '',
},
});
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
// (SDK handles CLAUDE.md via settingSources), but keep other context files like CODE_QUALITY.md
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
// Note: contextResult.formattedPrompt now includes both context AND memory
const combinedSystemPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
if (options?.continuationPrompt) {
// Continuation prompt is used when recovering from a plan approval
@@ -574,7 +582,7 @@ export class AutoModeService {
projectPath,
planningMode: feature.planningMode,
requirePlanApproval: feature.requirePlanApproval,
systemPrompt: contextFilesPrompt || undefined,
systemPrompt: combinedSystemPrompt || undefined,
autoLoadClaudeMd,
thinkingLevel: feature.thinkingLevel,
}
@@ -606,6 +614,36 @@ export class AutoModeService {
// Record success to reset consecutive failure tracking
this.recordSuccess();
// Record learnings and memory usage after successful feature completion
try {
const featureDir = getFeatureDir(projectPath, featureId);
const outputPath = path.join(featureDir, 'agent-output.md');
let agentOutput = '';
try {
const outputContent = await secureFs.readFile(outputPath, 'utf-8');
agentOutput =
typeof outputContent === 'string' ? outputContent : outputContent.toString();
} catch {
// Agent output might not exist yet
}
// Record memory usage if we loaded any memory files
if (contextResult.memoryFiles.length > 0 && agentOutput) {
await recordMemoryUsage(
projectPath,
contextResult.memoryFiles,
agentOutput,
true, // success
secureFs as Parameters<typeof recordMemoryUsage>[4]
);
}
// Extract and record learnings from the agent output
await this.recordLearningsFromFeature(projectPath, feature, agentOutput);
} catch (learningError) {
console.warn('[AutoMode] Failed to record learnings:', learningError);
}
this.emitAutoModeEvent('auto_mode_feature_complete', {
featureId,
passes: true,
@@ -674,10 +712,14 @@ export class AutoModeService {
): Promise<void> {
logger.info(`Executing ${steps.length} pipeline step(s) for feature ${featureId}`);
// Load context files once
// Load context files once with feature context for smart memory selection
const contextResult = await loadContextFiles({
projectPath,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: feature.title ?? '',
description: feature.description ?? '',
},
});
const contextFilesPrompt = filterClaudeMdFromContext(contextResult, autoLoadClaudeMd);
@@ -910,6 +952,10 @@ Complete the pipeline step instructions above. Review the previous work and appl
const contextResult = await loadContextFiles({
projectPath,
fsModule: secureFs as Parameters<typeof loadContextFiles>[0]['fsModule'],
taskContext: {
title: feature?.title ?? prompt.substring(0, 200),
description: feature?.description ?? prompt,
},
});
// When autoLoadClaudeMd is enabled, filter out CLAUDE.md to avoid duplication
@@ -2103,7 +2149,12 @@ This mock response was generated because AUTOMAKER_MOCK_AGENT=true was set.
// Get provider for this model
const provider = ProviderFactory.getProviderForModel(finalModel);
logger.info(`Using provider "${provider.getName()}" for model "${finalModel}"`);
// Strip provider prefix - providers should receive bare model IDs
const bareModel = stripProviderPrefix(finalModel);
logger.info(
`Using provider "${provider.getName()}" for model "${finalModel}" (bare: ${bareModel})`
);
// Build prompt content with images using utility
const { content: promptContent } = await buildPromptWithImages(
@@ -2122,7 +2173,7 @@ This mock response was generated because AUTOMAKER_MOCK_AGENT=true was set.
const executeOptions: ExecuteOptions = {
prompt: promptContent,
model: finalModel,
model: bareModel,
maxTurns: maxTurns,
cwd: workDir,
allowedTools: allowedTools,
@@ -2427,7 +2478,7 @@ After generating the revised spec, output:
// Make revision call
const revisionStream = provider.executeQuery({
prompt: revisionPrompt,
model: finalModel,
model: bareModel,
maxTurns: maxTurns || 100,
cwd: workDir,
allowedTools: allowedTools,
@@ -2565,7 +2616,7 @@ After generating the revised spec, output:
// Execute task with dedicated agent
const taskStream = provider.executeQuery({
prompt: taskPrompt,
model: finalModel,
model: bareModel,
maxTurns: Math.min(maxTurns || 100, 50), // Limit turns per task
cwd: workDir,
allowedTools: allowedTools,
@@ -2653,7 +2704,7 @@ Implement all the changes described in the plan above.`;
const continuationStream = provider.executeQuery({
prompt: continuationPrompt,
model: finalModel,
model: bareModel,
maxTurns: maxTurns,
cwd: workDir,
allowedTools: allowedTools,
@@ -2898,4 +2949,207 @@ Begin implementing task ${task.id} now.`;
}
});
}
/**
* Extract and record learnings from a completed feature
* Uses a quick Claude call to identify important decisions and patterns
*/
private async recordLearningsFromFeature(
projectPath: string,
feature: Feature,
agentOutput: string
): Promise<void> {
if (!agentOutput || agentOutput.length < 100) {
// Not enough output to extract learnings from
console.log(
`[AutoMode] Skipping learning extraction - output too short (${agentOutput?.length || 0} chars)`
);
return;
}
console.log(
`[AutoMode] Extracting learnings from feature "${feature.title}" (${agentOutput.length} chars)`
);
// Limit output to avoid token limits
const truncatedOutput = agentOutput.length > 10000 ? agentOutput.slice(-10000) : agentOutput;
const userPrompt = `You are an Architecture Decision Record (ADR) extractor. Analyze this implementation and return ONLY JSON with learnings. No explanations.
Feature: "${feature.title}"
Implementation log:
${truncatedOutput}
Extract MEANINGFUL learnings - not obvious things. For each, capture:
- DECISIONS: Why this approach vs alternatives? What would break if changed?
- GOTCHAS: What was unexpected? What's the root cause? How to avoid?
- PATTERNS: Why this pattern? What problem does it solve? Trade-offs?
JSON format ONLY (no markdown, no text):
{"learnings": [{
"category": "architecture|api|ui|database|auth|testing|performance|security|gotchas",
"type": "decision|gotcha|pattern",
"content": "What was done/learned",
"context": "Problem being solved or situation faced",
"why": "Reasoning - why this approach",
"rejected": "Alternative considered and why rejected",
"tradeoffs": "What became easier/harder",
"breaking": "What breaks if this is changed/removed"
}]}
IMPORTANT: Only include NON-OBVIOUS learnings with real reasoning. Skip trivial patterns.
If nothing notable: {"learnings": []}`;
try {
// Import query dynamically to avoid circular dependencies
const { query } = await import('@anthropic-ai/claude-agent-sdk');
// Get model from phase settings
const settings = await this.settingsService?.getGlobalSettings();
const phaseModelEntry =
settings?.phaseModels?.memoryExtractionModel || DEFAULT_PHASE_MODELS.memoryExtractionModel;
const { model } = resolvePhaseModel(phaseModelEntry);
const stream = query({
prompt: userPrompt,
options: {
model,
maxTurns: 1,
allowedTools: [],
permissionMode: 'acceptEdits',
systemPrompt:
'You are a JSON extraction assistant. You MUST respond with ONLY valid JSON, no explanations, no markdown, no other text. Extract learnings from the provided implementation context and return them as JSON.',
},
});
// Extract text from stream
let responseText = '';
for await (const msg of stream) {
if (msg.type === 'assistant' && msg.message?.content) {
for (const block of msg.message.content) {
if (block.type === 'text' && block.text) {
responseText += block.text;
}
}
} else if (msg.type === 'result' && msg.subtype === 'success') {
responseText = msg.result || responseText;
}
}
console.log(`[AutoMode] Learning extraction response: ${responseText.length} chars`);
console.log(`[AutoMode] Response preview: ${responseText.substring(0, 300)}`);
// Parse the response - handle JSON in markdown code blocks or raw
let jsonStr: string | null = null;
// First try to find JSON in markdown code blocks
const codeBlockMatch = responseText.match(/```(?:json)?\s*(\{[\s\S]*?\})\s*```/);
if (codeBlockMatch) {
console.log('[AutoMode] Found JSON in code block');
jsonStr = codeBlockMatch[1];
} else {
// Fall back to finding balanced braces containing "learnings"
// Use a more precise approach: find the opening brace before "learnings"
const learningsIndex = responseText.indexOf('"learnings"');
if (learningsIndex !== -1) {
// Find the opening brace before "learnings"
let braceStart = responseText.lastIndexOf('{', learningsIndex);
if (braceStart !== -1) {
// Find matching closing brace
let braceCount = 0;
let braceEnd = -1;
for (let i = braceStart; i < responseText.length; i++) {
if (responseText[i] === '{') braceCount++;
if (responseText[i] === '}') braceCount--;
if (braceCount === 0) {
braceEnd = i;
break;
}
}
if (braceEnd !== -1) {
jsonStr = responseText.substring(braceStart, braceEnd + 1);
}
}
}
}
if (!jsonStr) {
console.log('[AutoMode] Could not extract JSON from response');
return;
}
console.log(`[AutoMode] Extracted JSON: ${jsonStr.substring(0, 200)}`);
let parsed: { learnings?: unknown[] };
try {
parsed = JSON.parse(jsonStr);
} catch {
console.warn('[AutoMode] Failed to parse learnings JSON:', jsonStr.substring(0, 200));
return;
}
if (!parsed.learnings || !Array.isArray(parsed.learnings)) {
console.log('[AutoMode] No learnings array in parsed response');
return;
}
console.log(`[AutoMode] Found ${parsed.learnings.length} potential learnings`);
// Valid learning types
const validTypes = new Set(['decision', 'learning', 'pattern', 'gotcha']);
// Record each learning
for (const item of parsed.learnings) {
// Validate required fields with proper type narrowing
if (!item || typeof item !== 'object') continue;
const learning = item as Record<string, unknown>;
if (
!learning.category ||
typeof learning.category !== 'string' ||
!learning.content ||
typeof learning.content !== 'string' ||
!learning.content.trim()
) {
continue;
}
// Validate and normalize type
const typeStr = typeof learning.type === 'string' ? learning.type : 'learning';
const learningType = validTypes.has(typeStr)
? (typeStr as 'decision' | 'learning' | 'pattern' | 'gotcha')
: 'learning';
console.log(
`[AutoMode] Appending learning: category=${learning.category}, type=${learningType}`
);
await appendLearning(
projectPath,
{
category: learning.category,
type: learningType,
content: learning.content.trim(),
context: typeof learning.context === 'string' ? learning.context : undefined,
why: typeof learning.why === 'string' ? learning.why : undefined,
rejected: typeof learning.rejected === 'string' ? learning.rejected : undefined,
tradeoffs: typeof learning.tradeoffs === 'string' ? learning.tradeoffs : undefined,
breaking: typeof learning.breaking === 'string' ? learning.breaking : undefined,
},
secureFs as Parameters<typeof appendLearning>[2]
);
}
const validLearnings = parsed.learnings.filter(
(l) => l && typeof l === 'object' && (l as Record<string, unknown>).content
);
if (validLearnings.length > 0) {
console.log(
`[AutoMode] Recorded ${parsed.learnings.length} learning(s) from feature ${feature.id}`
);
}
} catch (error) {
console.warn(`[AutoMode] Failed to extract learnings from feature ${feature.id}:`, error);
}
}
}

View File

@@ -18,57 +18,64 @@ export function AutomakerLogo({ sidebarOpen, navigate }: AutomakerLogoProps) {
onClick={() => navigate({ to: '/' })}
data-testid="logo-button"
>
{!sidebarOpen ? (
<div className="relative flex flex-col items-center justify-center rounded-lg gap-0.5">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 256 256"
role="img"
aria-label="Automaker Logo"
className="size-8 group-hover:rotate-12 transition-transform duration-300 ease-out"
>
<defs>
<linearGradient
id="bg-collapsed"
x1="0"
y1="0"
x2="256"
y2="256"
gradientUnits="userSpaceOnUse"
>
<stop offset="0%" style={{ stopColor: 'var(--brand-400)' }} />
<stop offset="100%" style={{ stopColor: 'var(--brand-600)' }} />
</linearGradient>
<filter id="iconShadow-collapsed" x="-20%" y="-20%" width="140%" height="140%">
<feDropShadow
dx="0"
dy="4"
stdDeviation="4"
floodColor="#000000"
floodOpacity="0.25"
/>
</filter>
</defs>
<rect x="16" y="16" width="224" height="224" rx="56" fill="url(#bg-collapsed)" />
<g
fill="none"
stroke="#FFFFFF"
strokeWidth="20"
strokeLinecap="round"
strokeLinejoin="round"
filter="url(#iconShadow-collapsed)"
{/* Collapsed logo - shown when sidebar is closed OR on small screens when sidebar is open */}
<div
className={cn(
'relative flex flex-col items-center justify-center rounded-lg gap-0.5',
sidebarOpen ? 'flex lg:hidden' : 'flex'
)}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 256 256"
role="img"
aria-label="Automaker Logo"
className="size-8 group-hover:rotate-12 transition-transform duration-300 ease-out"
>
<defs>
<linearGradient
id="bg-collapsed"
x1="0"
y1="0"
x2="256"
y2="256"
gradientUnits="userSpaceOnUse"
>
<path d="M92 92 L52 128 L92 164" />
<path d="M144 72 L116 184" />
<path d="M164 92 L204 128 L164 164" />
</g>
</svg>
<span className="text-[0.625rem] text-muted-foreground leading-none font-medium">
v{appVersion}
</span>
</div>
) : (
<div className={cn('flex flex-col', 'hidden lg:flex')}>
<stop offset="0%" style={{ stopColor: 'var(--brand-400)' }} />
<stop offset="100%" style={{ stopColor: 'var(--brand-600)' }} />
</linearGradient>
<filter id="iconShadow-collapsed" x="-20%" y="-20%" width="140%" height="140%">
<feDropShadow
dx="0"
dy="4"
stdDeviation="4"
floodColor="#000000"
floodOpacity="0.25"
/>
</filter>
</defs>
<rect x="16" y="16" width="224" height="224" rx="56" fill="url(#bg-collapsed)" />
<g
fill="none"
stroke="#FFFFFF"
strokeWidth="20"
strokeLinecap="round"
strokeLinejoin="round"
filter="url(#iconShadow-collapsed)"
>
<path d="M92 92 L52 128 L92 164" />
<path d="M144 72 L116 184" />
<path d="M164 92 L204 128 L164 164" />
</g>
</svg>
<span className="text-[0.625rem] text-muted-foreground leading-none font-medium">
v{appVersion}
</span>
</div>
{/* Expanded logo - only shown when sidebar is open on large screens */}
{sidebarOpen && (
<div className="hidden lg:flex flex-col">
<div className="flex items-center gap-1">
<svg
xmlns="http://www.w3.org/2000/svg"

View File

@@ -21,8 +21,10 @@ export function SidebarHeader({ sidebarOpen, navigate }: SidebarHeaderProps) {
'bg-gradient-to-b from-transparent to-background/5',
'flex items-center',
sidebarOpen ? 'px-3 lg:px-5 justify-start' : 'px-3 justify-center',
// Add left padding on macOS to avoid overlapping with traffic light buttons
isMac && 'pt-4 pl-20'
// Add left padding on macOS to avoid overlapping with traffic light buttons (only when expanded)
isMac && sidebarOpen && 'pt-4 pl-20',
// Smaller top padding on macOS when collapsed
isMac && !sidebarOpen && 'pt-4'
)}
>
<AutomakerLogo sidebarOpen={sidebarOpen} navigate={navigate} />

View File

@@ -11,6 +11,7 @@ import {
import type { ReasoningEffort } from '@automaker/types';
import { FeatureImagePath as DescriptionImagePath } from '@/components/ui/description-image-dropzone';
import { getElectronAPI } from '@/lib/electron';
import { isConnectionError, handleServerOffline } from '@/lib/http-api-client';
import { toast } from 'sonner';
import { useAutoMode } from '@/hooks/use-auto-mode';
import { truncateDescription } from '@/lib/utils';
@@ -337,35 +338,31 @@ export function useBoardActions({
const handleRunFeature = useCallback(
async (feature: Feature) => {
if (!currentProject) return;
if (!currentProject) {
throw new Error('No project selected');
}
try {
const api = getElectronAPI();
if (!api?.autoMode) {
logger.error('Auto mode API not available');
return;
}
const api = getElectronAPI();
if (!api?.autoMode) {
throw new Error('Auto mode API not available');
}
// Server derives workDir from feature.branchName at execution time
const result = await api.autoMode.runFeature(
currentProject.path,
feature.id,
useWorktrees
// No worktreePath - server derives from feature.branchName
);
// Server derives workDir from feature.branchName at execution time
const result = await api.autoMode.runFeature(
currentProject.path,
feature.id,
useWorktrees
// No worktreePath - server derives from feature.branchName
);
if (result.success) {
logger.info('Feature run started successfully, branch:', feature.branchName || 'default');
} else {
logger.error('Failed to run feature:', result.error);
await loadFeatures();
}
} catch (error) {
logger.error('Error running feature:', error);
await loadFeatures();
if (result.success) {
logger.info('Feature run started successfully, branch:', feature.branchName || 'default');
} else {
// Throw error so caller can handle rollback
throw new Error(result.error || 'Failed to start feature');
}
},
[currentProject, useWorktrees, loadFeatures]
[currentProject, useWorktrees]
);
const handleStartImplementation = useCallback(
@@ -401,11 +398,34 @@ export function useBoardActions({
startedAt: new Date().toISOString(),
};
updateFeature(feature.id, updates);
// Must await to ensure feature status is persisted before starting agent
await persistFeatureUpdate(feature.id, updates);
logger.info('Feature moved to in_progress, starting agent...');
await handleRunFeature(feature);
return true;
try {
// Must await to ensure feature status is persisted before starting agent
await persistFeatureUpdate(feature.id, updates);
logger.info('Feature moved to in_progress, starting agent...');
await handleRunFeature(feature);
return true;
} catch (error) {
// Rollback to backlog if persist or run fails (e.g., server offline)
logger.error('Failed to start feature, rolling back to backlog:', error);
const rollbackUpdates = {
status: 'backlog' as const,
startedAt: undefined,
};
updateFeature(feature.id, rollbackUpdates);
// If server is offline (connection refused), redirect to login page
if (isConnectionError(error)) {
handleServerOffline();
return false;
}
toast.error('Failed to start feature', {
description:
error instanceof Error ? error.message : 'Server may be offline. Please try again.',
});
return false;
}
},
[
autoMode,
@@ -531,6 +551,7 @@ export function useBoardActions({
const featureId = followUpFeature.id;
const featureDescription = followUpFeature.description;
const previousStatus = followUpFeature.status;
const api = getElectronAPI();
if (!api?.autoMode?.followUpFeature) {
@@ -547,35 +568,53 @@ export function useBoardActions({
justFinishedAt: undefined,
};
updateFeature(featureId, updates);
persistFeatureUpdate(featureId, updates);
setShowFollowUpDialog(false);
setFollowUpFeature(null);
setFollowUpPrompt('');
setFollowUpImagePaths([]);
setFollowUpPreviewMap(new Map());
try {
await persistFeatureUpdate(featureId, updates);
toast.success('Follow-up started', {
description: `Continuing work on: ${truncateDescription(featureDescription)}`,
});
setShowFollowUpDialog(false);
setFollowUpFeature(null);
setFollowUpPrompt('');
setFollowUpImagePaths([]);
setFollowUpPreviewMap(new Map());
const imagePaths = followUpImagePaths.map((img) => img.path);
// Server derives workDir from feature.branchName at execution time
api.autoMode
.followUpFeature(
toast.success('Follow-up started', {
description: `Continuing work on: ${truncateDescription(featureDescription)}`,
});
const imagePaths = followUpImagePaths.map((img) => img.path);
// Server derives workDir from feature.branchName at execution time
const result = await api.autoMode.followUpFeature(
currentProject.path,
followUpFeature.id,
followUpPrompt,
imagePaths
// No worktreePath - server derives from feature.branchName
)
.catch((error) => {
logger.error('Error sending follow-up:', error);
toast.error('Failed to send follow-up', {
description: error instanceof Error ? error.message : 'An error occurred',
});
loadFeatures();
);
if (!result.success) {
throw new Error(result.error || 'Failed to send follow-up');
}
} catch (error) {
// Rollback to previous status if follow-up fails
logger.error('Error sending follow-up, rolling back:', error);
const rollbackUpdates = {
status: previousStatus as 'backlog' | 'in_progress' | 'waiting_approval' | 'verified',
startedAt: undefined,
};
updateFeature(featureId, rollbackUpdates);
// If server is offline (connection refused), redirect to login page
if (isConnectionError(error)) {
handleServerOffline();
return;
}
toast.error('Failed to send follow-up', {
description:
error instanceof Error ? error.message : 'Server may be offline. Please try again.',
});
}
}, [
currentProject,
followUpFeature,
@@ -588,7 +627,6 @@ export function useBoardActions({
setFollowUpPrompt,
setFollowUpImagePaths,
setFollowUpPreviewMap,
loadFeatures,
]);
const handleCommitFeature = useCallback(

View File

@@ -66,6 +66,14 @@ const GENERATION_TASKS: PhaseConfig[] = [
},
];
const MEMORY_TASKS: PhaseConfig[] = [
{
key: 'memoryExtractionModel',
label: 'Memory Extraction',
description: 'Extracts learnings from completed agent sessions',
},
];
function PhaseGroup({
title,
subtitle,
@@ -155,6 +163,13 @@ export function ModelDefaultsSection() {
subtitle="Powerful models recommended for quality output"
phases={GENERATION_TASKS}
/>
{/* Memory Tasks */}
<PhaseGroup
title="Memory Tasks"
subtitle="Fast models recommended for learning extraction"
phases={MEMORY_TASKS}
/>
</div>
</div>
);

View File

@@ -73,6 +73,55 @@ const handleUnauthorized = (): void => {
notifyLoggedOut();
};
/**
* Notify the UI that the server is offline/unreachable.
* Used to redirect the user to the login page which will show server unavailable.
*/
const notifyServerOffline = (): void => {
if (typeof window === 'undefined') return;
try {
window.dispatchEvent(new CustomEvent('automaker:server-offline'));
} catch {
// Ignore
}
};
/**
* Check if an error is a connection error (server offline/unreachable).
* These are typically TypeError with 'Failed to fetch' or similar network errors.
*/
export const isConnectionError = (error: unknown): boolean => {
if (error instanceof TypeError) {
const message = error.message.toLowerCase();
return (
message.includes('failed to fetch') ||
message.includes('network') ||
message.includes('econnrefused') ||
message.includes('connection refused')
);
}
// Check for error objects with message property
if (error && typeof error === 'object' && 'message' in error) {
const message = String((error as { message: unknown }).message).toLowerCase();
return (
message.includes('failed to fetch') ||
message.includes('network') ||
message.includes('econnrefused') ||
message.includes('connection refused')
);
}
return false;
};
/**
* Handle a server offline error by notifying the UI to redirect.
* Call this when a connection error is detected.
*/
export const handleServerOffline = (): void => {
logger.error('Server appears to be offline, redirecting to login...');
notifyServerOffline();
};
/**
* Initialize server URL from Electron IPC.
* Must be called early in Electron mode before making API requests.

View File

@@ -229,6 +229,25 @@ function RootLayoutContent() {
};
}, [location.pathname, navigate]);
// Global listener for server offline/connection errors.
// This is triggered when a connection error is detected (e.g., server stopped).
// Redirects to login page which will detect server is offline and show error UI.
useEffect(() => {
const handleServerOffline = () => {
useAuthStore.getState().setAuthState({ isAuthenticated: false, authChecked: true });
// Navigate to login - the login page will detect server is offline and show appropriate UI
if (location.pathname !== '/login' && location.pathname !== '/logged-out') {
navigate({ to: '/login' });
}
};
window.addEventListener('automaker:server-offline', handleServerOffline);
return () => {
window.removeEventListener('automaker:server-offline', handleServerOffline);
};
}, [location.pathname, navigate]);
// Initialize authentication
// - Electron mode: Uses API key from IPC (header-based auth)
// - Web mode: Uses HTTP-only session cookie

View File

@@ -156,6 +156,10 @@ export interface PhaseModelConfig {
projectAnalysisModel: PhaseModelEntry;
/** Model for AI suggestions (feature, refactoring, security, performance) */
suggestionsModel: PhaseModelEntry;
// Memory tasks - for learning extraction and memory operations
/** Model for extracting learnings from completed agent sessions */
memoryExtractionModel: PhaseModelEntry;
}
/** Keys of PhaseModelConfig for type-safe access */
@@ -731,6 +735,9 @@ export const DEFAULT_PHASE_MODELS: PhaseModelConfig = {
backlogPlanningModel: { model: 'sonnet' },
projectAnalysisModel: { model: 'sonnet' },
suggestionsModel: { model: 'sonnet' },
// Memory - use fast model for learning extraction (cost-effective)
memoryExtractionModel: { model: 'haiku' },
};
/** Current version of the global settings schema */

View File

@@ -2,15 +2,30 @@
* Context Loader - Loads project context files for agent prompts
*
* Provides a shared utility to load context files from .automaker/context/
* and format them as system prompt content. Used by both auto-mode-service
* and agent-service to ensure all agents are aware of project context.
* and memory files from .automaker/memory/, formatting them as system prompt
* content. Used by both auto-mode-service and agent-service to ensure all
* agents are aware of project context and past learnings.
*
* Context files contain project-specific rules, conventions, and guidelines
* that agents must follow when working on the project.
*
* Memory files contain learnings from past agent work, including decisions,
* gotchas, and patterns that should inform future work.
*/
import path from 'path';
import { secureFs } from '@automaker/platform';
import {
getMemoryDir,
parseFrontmatter,
initializeMemoryFolder,
extractTerms,
calculateUsageScore,
countMatches,
incrementUsageStat,
type MemoryFsModule,
type MemoryMetadata,
} from './memory-loader.js';
/**
* Metadata structure for context files
@@ -30,22 +45,48 @@ export interface ContextFileInfo {
description?: string;
}
/**
* Memory file info (from .automaker/memory/)
*/
export interface MemoryFileInfo {
name: string;
path: string;
content: string;
category: string;
}
/**
* Result of loading context files
*/
export interface ContextFilesResult {
files: ContextFileInfo[];
memoryFiles: MemoryFileInfo[];
formattedPrompt: string;
}
/**
* File system module interface for context loading
* Compatible with secureFs from @automaker/platform
* Includes write methods needed for memory initialization
*/
export interface ContextFsModule {
access: (path: string) => Promise<void>;
readdir: (path: string) => Promise<string[]>;
readFile: (path: string, encoding?: BufferEncoding) => Promise<string | Buffer>;
// Write methods needed for memory operations
writeFile: (path: string, content: string) => Promise<void>;
mkdir: (path: string, options?: { recursive?: boolean }) => Promise<string | undefined>;
appendFile: (path: string, content: string) => Promise<void>;
}
/**
* Task context for smart memory selection
*/
export interface TaskContext {
/** Title or name of the current task/feature */
title: string;
/** Description of what the task involves */
description?: string;
}
/**
@@ -56,6 +97,14 @@ export interface LoadContextFilesOptions {
projectPath: string;
/** Optional custom secure fs module (for dependency injection) */
fsModule?: ContextFsModule;
/** Whether to include memory files from .automaker/memory/ (default: true) */
includeMemory?: boolean;
/** Whether to initialize memory folder if it doesn't exist (default: true) */
initializeMemory?: boolean;
/** Task context for smart memory selection - if not provided, only loads high-importance files */
taskContext?: TaskContext;
/** Maximum number of memory files to load (default: 5) */
maxMemoryFiles?: number;
}
/**
@@ -130,17 +179,21 @@ ${formattedFiles.join('\n\n---\n\n')}
/**
* Load context files from a project's .automaker/context/ directory
* and optionally memory files from .automaker/memory/
*
* This function loads all .md and .txt files from the context directory,
* along with their metadata (descriptions), and formats them into a
* system prompt that can be prepended to agent prompts.
*
* By default, it also loads memory files containing learnings from past
* agent work, which helps agents make better decisions.
*
* @param options - Configuration options
* @returns Promise resolving to context files and formatted prompt
* @returns Promise resolving to context files, memory files, and formatted prompt
*
* @example
* ```typescript
* const { formattedPrompt, files } = await loadContextFiles({
* const { formattedPrompt, files, memoryFiles } = await loadContextFiles({
* projectPath: '/path/to/project'
* });
*
@@ -154,9 +207,20 @@ ${formattedFiles.join('\n\n---\n\n')}
export async function loadContextFiles(
options: LoadContextFilesOptions
): Promise<ContextFilesResult> {
const { projectPath, fsModule = secureFs } = options;
const {
projectPath,
fsModule = secureFs,
includeMemory = true,
initializeMemory = true,
taskContext,
maxMemoryFiles = 5,
} = options;
const contextDir = path.resolve(getContextDir(projectPath));
const files: ContextFileInfo[] = [];
const memoryFiles: MemoryFileInfo[] = [];
// Load context files
try {
// Check if directory exists
await fsModule.access(contextDir);
@@ -170,41 +234,218 @@ export async function loadContextFiles(
return (lower.endsWith('.md') || lower.endsWith('.txt')) && f !== 'context-metadata.json';
});
if (textFiles.length === 0) {
return { files: [], formattedPrompt: '' };
if (textFiles.length > 0) {
// Load metadata for descriptions
const metadata = await loadContextMetadata(contextDir, fsModule);
// Load each file with its content and metadata
for (const fileName of textFiles) {
const filePath = path.join(contextDir, fileName);
try {
const content = await fsModule.readFile(filePath, 'utf-8');
files.push({
name: fileName,
path: filePath,
content: content as string,
description: metadata.files[fileName]?.description,
});
} catch (error) {
console.warn(`[ContextLoader] Failed to read context file ${fileName}:`, error);
}
}
}
} catch {
// Context directory doesn't exist or is inaccessible - that's fine
}
// Load metadata for descriptions
const metadata = await loadContextMetadata(contextDir, fsModule);
// Load memory files if enabled (with smart selection)
if (includeMemory) {
const memoryDir = getMemoryDir(projectPath);
// Load each file with its content and metadata
const files: ContextFileInfo[] = [];
for (const fileName of textFiles) {
const filePath = path.join(contextDir, fileName);
// Initialize memory folder if needed
if (initializeMemory) {
try {
const content = await fsModule.readFile(filePath, 'utf-8');
files.push({
name: fileName,
path: filePath,
content: content as string,
description: metadata.files[fileName]?.description,
});
} catch (error) {
console.warn(`[ContextLoader] Failed to read context file ${fileName}:`, error);
await initializeMemoryFolder(projectPath, fsModule as MemoryFsModule);
} catch {
// Initialization failed, continue without memory
}
}
const formattedPrompt = buildContextPrompt(files);
try {
await fsModule.access(memoryDir);
const allMemoryFiles = await fsModule.readdir(memoryDir);
console.log(
`[ContextLoader] Loaded ${files.length} context file(s): ${files.map((f) => f.name).join(', ')}`
);
// Filter for markdown memory files (except _index.md, case-insensitive)
const memoryMdFiles = allMemoryFiles.filter((f) => {
const lower = f.toLowerCase();
return lower.endsWith('.md') && lower !== '_index.md';
});
return { files, formattedPrompt };
} catch {
// Context directory doesn't exist or is inaccessible - this is fine
return { files: [], formattedPrompt: '' };
// Extract terms from task context for matching
const taskTerms = taskContext
? extractTerms(taskContext.title + ' ' + (taskContext.description || ''))
: [];
// Score and load memory files
const scoredFiles: Array<{
fileName: string;
filePath: string;
body: string;
metadata: MemoryMetadata;
score: number;
}> = [];
for (const fileName of memoryMdFiles) {
const filePath = path.join(memoryDir, fileName);
try {
const rawContent = await fsModule.readFile(filePath, 'utf-8');
const { metadata, body } = parseFrontmatter(rawContent as string);
// Skip empty files
if (!body.trim()) continue;
// Calculate relevance score
let score = 0;
if (taskTerms.length > 0) {
// Match task terms against file metadata
const tagScore = countMatches(metadata.tags, taskTerms) * 3;
const relevantToScore = countMatches(metadata.relevantTo, taskTerms) * 2;
const summaryTerms = extractTerms(metadata.summary);
const summaryScore = countMatches(summaryTerms, taskTerms);
// Split category name on hyphens/underscores for better matching
// e.g., "authentication-decisions" matches "authentication"
const categoryTerms = fileName
.replace('.md', '')
.split(/[-_]/)
.filter((t) => t.length > 2);
const categoryScore = countMatches(categoryTerms, taskTerms) * 4;
// Usage-based scoring (files that helped before rank higher)
const usageScore = calculateUsageScore(metadata.usageStats);
score =
(tagScore + relevantToScore + summaryScore + categoryScore) *
metadata.importance *
usageScore;
} else {
// No task context - use importance as score
score = metadata.importance;
}
scoredFiles.push({ fileName, filePath, body, metadata, score });
} catch (error) {
console.warn(`[ContextLoader] Failed to read memory file ${fileName}:`, error);
}
}
// Sort by score (highest first)
scoredFiles.sort((a, b) => b.score - a.score);
// Select files to load:
// 1. Always include gotchas.md if it exists (unless maxMemoryFiles=0)
// 2. Include high-importance files (importance >= 0.9)
// 3. Include top scoring files up to maxMemoryFiles
const selectedFiles = new Set<string>();
// Skip selection if maxMemoryFiles is 0
if (maxMemoryFiles > 0) {
// Always include gotchas.md
const gotchasFile = scoredFiles.find((f) => f.fileName === 'gotchas.md');
if (gotchasFile) {
selectedFiles.add('gotchas.md');
}
// Add high-importance files
for (const file of scoredFiles) {
if (file.metadata.importance >= 0.9 && selectedFiles.size < maxMemoryFiles) {
selectedFiles.add(file.fileName);
}
}
// Add top scoring files (if we have task context and room)
if (taskTerms.length > 0) {
for (const file of scoredFiles) {
if (file.score > 0 && selectedFiles.size < maxMemoryFiles) {
selectedFiles.add(file.fileName);
}
}
}
}
// Load selected files and increment loaded stat
for (const file of scoredFiles) {
if (selectedFiles.has(file.fileName)) {
memoryFiles.push({
name: file.fileName,
path: file.filePath,
content: file.body,
category: file.fileName.replace('.md', ''),
});
// Increment the 'loaded' stat for this file (CRITICAL FIX)
// This makes calculateUsageScore work correctly
try {
await incrementUsageStat(file.filePath, 'loaded', fsModule as MemoryFsModule);
} catch {
// Non-critical - continue even if stat update fails
}
}
}
if (memoryFiles.length > 0) {
const selectedNames = memoryFiles.map((f) => f.category).join(', ');
console.log(`[ContextLoader] Selected memory files: ${selectedNames}`);
}
} catch {
// Memory directory doesn't exist - that's fine
}
}
// Build combined prompt
const contextPrompt = buildContextPrompt(files);
const memoryPrompt = buildMemoryPrompt(memoryFiles);
const formattedPrompt = [contextPrompt, memoryPrompt].filter(Boolean).join('\n\n');
const loadedItems = [];
if (files.length > 0) {
loadedItems.push(`${files.length} context file(s)`);
}
if (memoryFiles.length > 0) {
loadedItems.push(`${memoryFiles.length} memory file(s)`);
}
if (loadedItems.length > 0) {
console.log(`[ContextLoader] Loaded ${loadedItems.join(' and ')}`);
}
return { files, memoryFiles, formattedPrompt };
}
/**
* Build a formatted prompt from memory files
*/
function buildMemoryPrompt(memoryFiles: MemoryFileInfo[]): string {
if (memoryFiles.length === 0) {
return '';
}
const sections = memoryFiles.map((file) => {
return `## ${file.category.toUpperCase()}
${file.content}`;
});
return `# Project Memory
The following learnings and decisions from previous work are available.
**IMPORTANT**: Review these carefully before making changes that could conflict with past decisions.
---
${sections.join('\n\n---\n\n')}
---
`;
}
/**

View File

@@ -63,5 +63,31 @@ export {
type ContextMetadata,
type ContextFileInfo,
type ContextFilesResult,
type ContextFsModule,
type LoadContextFilesOptions,
type MemoryFileInfo,
type TaskContext,
} from './context-loader.js';
// Memory loading
export {
loadRelevantMemory,
initializeMemoryFolder,
appendLearning,
recordMemoryUsage,
getMemoryDir,
parseFrontmatter,
serializeFrontmatter,
extractTerms,
calculateUsageScore,
countMatches,
incrementUsageStat,
formatLearning,
type MemoryFsModule,
type MemoryMetadata,
type MemoryFile,
type MemoryLoadResult,
type UsageStats,
type LearningEntry,
type SimpleMemoryFile,
} from './memory-loader.js';

View File

@@ -0,0 +1,685 @@
/**
* Memory Loader - Smart loading of agent memory files
*
* Loads relevant memory files from .automaker/memory/ based on:
* - Tag matching with feature keywords
* - Historical usefulness (usage stats)
* - File importance
*
* Memory files use YAML frontmatter for metadata.
*/
import path from 'path';
/**
* File system module interface (compatible with secureFs)
*/
export interface MemoryFsModule {
access: (path: string) => Promise<void>;
readdir: (path: string) => Promise<string[]>;
readFile: (path: string, encoding?: BufferEncoding) => Promise<string | Buffer>;
writeFile: (path: string, content: string) => Promise<void>;
mkdir: (path: string, options?: { recursive?: boolean }) => Promise<string | undefined>;
appendFile: (path: string, content: string) => Promise<void>;
}
/**
* Usage statistics for learning which files are helpful
*/
export interface UsageStats {
loaded: number;
referenced: number;
successfulFeatures: number;
}
/**
* Metadata stored in YAML frontmatter of memory files
*/
export interface MemoryMetadata {
tags: string[];
summary: string;
relevantTo: string[];
importance: number;
relatedFiles: string[];
usageStats: UsageStats;
}
/**
* A loaded memory file with content and metadata
*/
export interface MemoryFile {
name: string;
content: string;
metadata: MemoryMetadata;
}
/**
* Result of loading memory files
*/
export interface MemoryLoadResult {
files: MemoryFile[];
formattedPrompt: string;
}
/**
* Learning entry to be recorded
* Based on Architecture Decision Record (ADR) format for rich context
*/
export interface LearningEntry {
category: string;
type: 'decision' | 'learning' | 'pattern' | 'gotcha';
content: string;
context?: string; // Problem being solved or situation faced
why?: string; // Reasoning behind the approach
rejected?: string; // Alternative considered and why rejected
tradeoffs?: string; // What became easier/harder
breaking?: string; // What breaks if changed/removed
}
/**
* Create default metadata for new memory files
* Returns a new object each time to avoid shared mutable state
*/
function createDefaultMetadata(): MemoryMetadata {
return {
tags: [],
summary: '',
relevantTo: [],
importance: 0.5,
relatedFiles: [],
usageStats: {
loaded: 0,
referenced: 0,
successfulFeatures: 0,
},
};
}
/**
* In-memory locks to prevent race conditions when updating files
*/
const fileLocks = new Map<string, Promise<void>>();
/**
* Acquire a lock for a file path, execute the operation, then release
*/
async function withFileLock<T>(filePath: string, operation: () => Promise<T>): Promise<T> {
// Wait for any existing lock on this file
const existingLock = fileLocks.get(filePath);
if (existingLock) {
await existingLock;
}
// Create a new lock
let releaseLock: () => void;
const lockPromise = new Promise<void>((resolve) => {
releaseLock = resolve;
});
fileLocks.set(filePath, lockPromise);
try {
return await operation();
} finally {
releaseLock!();
fileLocks.delete(filePath);
}
}
/**
* Get the memory directory path for a project
*/
export function getMemoryDir(projectPath: string): string {
return path.join(projectPath, '.automaker', 'memory');
}
/**
* Parse YAML frontmatter from markdown content
* Returns the metadata and the content without frontmatter
*/
export function parseFrontmatter(content: string): {
metadata: MemoryMetadata;
body: string;
} {
// Handle both Unix (\n) and Windows (\r\n) line endings
const frontmatterRegex = /^---\s*\r?\n([\s\S]*?)\r?\n---\s*\r?\n/;
const match = content.match(frontmatterRegex);
if (!match) {
return { metadata: createDefaultMetadata(), body: content };
}
const frontmatterStr = match[1];
const body = content.slice(match[0].length);
try {
// Simple YAML parsing for our specific format
const metadata: MemoryMetadata = createDefaultMetadata();
// Parse tags: [tag1, tag2, tag3]
const tagsMatch = frontmatterStr.match(/tags:\s*\[(.*?)\]/);
if (tagsMatch) {
metadata.tags = tagsMatch[1]
.split(',')
.map((t) => t.trim().replace(/['"]/g, ''))
.filter((t) => t.length > 0); // Filter out empty strings
}
// Parse summary
const summaryMatch = frontmatterStr.match(/summary:\s*(.+)/);
if (summaryMatch) {
metadata.summary = summaryMatch[1].trim().replace(/^["']|["']$/g, '');
}
// Parse relevantTo: [term1, term2]
const relevantMatch = frontmatterStr.match(/relevantTo:\s*\[(.*?)\]/);
if (relevantMatch) {
metadata.relevantTo = relevantMatch[1]
.split(',')
.map((t) => t.trim().replace(/['"]/g, ''))
.filter((t) => t.length > 0); // Filter out empty strings
}
// Parse importance (validate range 0-1)
const importanceMatch = frontmatterStr.match(/importance:\s*([\d.]+)/);
if (importanceMatch) {
const value = parseFloat(importanceMatch[1]);
metadata.importance = Math.max(0, Math.min(1, value)); // Clamp to 0-1
}
// Parse relatedFiles: [file1.md, file2.md]
const relatedMatch = frontmatterStr.match(/relatedFiles:\s*\[(.*?)\]/);
if (relatedMatch) {
metadata.relatedFiles = relatedMatch[1]
.split(',')
.map((t) => t.trim().replace(/['"]/g, ''))
.filter((t) => t.length > 0); // Filter out empty strings
}
// Parse usageStats
const loadedMatch = frontmatterStr.match(/loaded:\s*(\d+)/);
const referencedMatch = frontmatterStr.match(/referenced:\s*(\d+)/);
const successMatch = frontmatterStr.match(/successfulFeatures:\s*(\d+)/);
if (loadedMatch) metadata.usageStats.loaded = parseInt(loadedMatch[1], 10);
if (referencedMatch) metadata.usageStats.referenced = parseInt(referencedMatch[1], 10);
if (successMatch) metadata.usageStats.successfulFeatures = parseInt(successMatch[1], 10);
return { metadata, body };
} catch {
return { metadata: createDefaultMetadata(), body: content };
}
}
/**
* Escape a string for safe YAML output
* Quotes strings containing special characters
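*
* @example
* // Illustrative values:
* escapeYamlString('terminal-resize'); // => 'terminal-resize'
* escapeYamlString('uses: colons');    // => '"uses: colons"'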
*/
function escapeYamlString(str: string): string {
// If string contains special YAML characters, wrap in quotes
if (/[:\[\]{}#&*!|>'"%@`\n\r]/.test(str) || str.trim() !== str) {
// Escape any existing quotes and wrap in double quotes
return `"${str.replace(/"/g, '\\"')}"`;
}
return str;
}
/**
* Serialize metadata back to YAML frontmatter
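*
* @example
* // Illustrative only; metadata values are hypothetical. Produces the YAML
* // block that parseFrontmatter above can read back.
* serializeFrontmatter({
*   tags: ['auth', 'jwt'],
*   summary: 'Auth decisions',
*   relevantTo: ['login'],
*   importance: 0.7,
*   relatedFiles: [],
*   usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
* });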
*/
export function serializeFrontmatter(metadata: MemoryMetadata): string {
const escapedTags = metadata.tags.map(escapeYamlString);
const escapedRelevantTo = metadata.relevantTo.map(escapeYamlString);
const escapedRelatedFiles = metadata.relatedFiles.map(escapeYamlString);
const escapedSummary = escapeYamlString(metadata.summary);
return `---
tags: [${escapedTags.join(', ')}]
summary: ${escapedSummary}
relevantTo: [${escapedRelevantTo.join(', ')}]
importance: ${metadata.importance}
relatedFiles: [${escapedRelatedFiles.join(', ')}]
usageStats:
loaded: ${metadata.usageStats.loaded}
referenced: ${metadata.usageStats.referenced}
successfulFeatures: ${metadata.usageStats.successfulFeatures}
---`;
}
/**
* Extract terms from text for matching
* Lowercases, strips punctuation, splits on whitespace, and drops stop words and very short words
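*
* @example
* // Illustrative input:
* extractTerms('Add OAuth login to settings page');
* // => ['oauth', 'login', 'settings', 'page']  ('add' and 'to' are dropped)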
*/
export function extractTerms(text: string): string[] {
const stopWords = new Set([
'a',
'an',
'the',
'and',
'or',
'but',
'in',
'on',
'at',
'to',
'for',
'of',
'with',
'by',
'is',
'it',
'this',
'that',
'be',
'as',
'are',
'was',
'were',
'been',
'being',
'have',
'has',
'had',
'do',
'does',
'did',
'will',
'would',
'could',
'should',
'may',
'might',
'must',
'shall',
'can',
'need',
'dare',
'ought',
'used',
'add',
'create',
'implement',
'build',
'make',
'update',
'fix',
'change',
'modify',
]);
return text
.toLowerCase()
.replace(/[^a-z0-9\s]/g, ' ')
.split(/\s+/)
.filter((word) => word.length > 2 && !stopWords.has(word));
}
/**
* Count how many terms match between two arrays (case-insensitive)
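*
* @example
* // Illustrative arrays:
* countMatches(['auth', 'jwt', 'session'], ['Session', 'login', 'JWT']); // => 2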
*/
export function countMatches(arr1: string[], arr2: string[]): number {
const set2 = new Set(arr2.map((t) => t.toLowerCase()));
return arr1.filter((t) => set2.has(t.toLowerCase())).length;
}
/**
* Calculate usage-based score for a memory file
* Files that are referenced in successful features get higher scores
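*
* @example
* // Illustrative stats: referenced in 4 of 10 loads, 2 of those features succeeded.
* calculateUsageScore({ loaded: 10, referenced: 4, successfulFeatures: 2 });
* // => 0.5 + 0.4 * 0.3 + 0.5 * 0.2 = 0.72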
*/
export function calculateUsageScore(stats: UsageStats): number {
if (stats.loaded === 0) return 1; // New file, neutral score
const referenceRate = stats.referenced / stats.loaded;
const successRate = stats.referenced > 0 ? stats.successfulFeatures / stats.referenced : 0;
// Base 0.5 + up to 0.3 for reference rate + up to 0.2 for success rate
return 0.5 + referenceRate * 0.3 + successRate * 0.2;
}
/**
* Load relevant memory files for a feature
*
* Selects files based on:
* - Tag matching with feature keywords (weight: 3)
* - RelevantTo matching (weight: 2)
* - Summary matching (weight: 1)
* - Usage score (multiplier)
* - Importance (multiplier)
*
* Always includes gotchas.md
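*
* @example
* // Illustrative call; assumes node:fs/promises satisfies MemoryFsModule.
* const { files, formattedPrompt } = await loadRelevantMemory(
*   '/path/to/project',
*   'Add terminal tabs',
*   'Support multiple terminal sessions per project',
*   fs.promises
* );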
*/
export async function loadRelevantMemory(
projectPath: string,
featureTitle: string,
featureDescription: string,
fsModule: MemoryFsModule
): Promise<MemoryLoadResult> {
const memoryDir = getMemoryDir(projectPath);
try {
await fsModule.access(memoryDir);
} catch {
// Memory directory doesn't exist yet
return { files: [], formattedPrompt: '' };
}
const allFiles = await fsModule.readdir(memoryDir);
const featureTerms = extractTerms(featureTitle + ' ' + featureDescription);
// Score each file
const scored: Array<{ file: string; score: number; content: string; metadata: MemoryMetadata }> =
[];
for (const file of allFiles) {
if (!file.endsWith('.md') || file === '_index.md') continue;
const filePath = path.join(memoryDir, file);
try {
const content = (await fsModule.readFile(filePath, 'utf-8')) as string;
const { metadata, body } = parseFrontmatter(content);
// Calculate relevance score
const tagScore = countMatches(metadata.tags, featureTerms) * 3;
const relevantToScore = countMatches(metadata.relevantTo, featureTerms) * 2;
const summaryTerms = extractTerms(metadata.summary);
const summaryScore = countMatches(summaryTerms, featureTerms);
// Usage-based scoring
const usageScore = calculateUsageScore(metadata.usageStats);
// Combined score
const score = (tagScore + relevantToScore + summaryScore) * metadata.importance * usageScore;
// Include if score > 0 or high importance
if (score > 0 || metadata.importance >= 0.9) {
scored.push({ file, score, content: body, metadata });
}
} catch {
// Skip files that can't be read
}
}
// Sort by score, take top 5
const topFiles = scored.sort((a, b) => b.score - a.score).slice(0, 5);
// Always include gotchas.md if it exists
const toLoad = new Set(['gotchas.md', ...topFiles.map((f) => f.file)]);
const loaded: MemoryFile[] = [];
for (const file of toLoad) {
const existing = scored.find((s) => s.file === file);
if (existing) {
loaded.push({
name: file,
content: existing.content,
metadata: existing.metadata,
});
} else if (file === 'gotchas.md') {
// Try to load gotchas.md even if it wasn't scored
const gotchasPath = path.join(memoryDir, 'gotchas.md');
try {
const content = (await fsModule.readFile(gotchasPath, 'utf-8')) as string;
const { metadata, body } = parseFrontmatter(content);
loaded.push({ name: file, content: body, metadata });
} catch {
// gotchas.md doesn't exist yet
}
}
}
// Build formatted prompt
const formattedPrompt = buildMemoryPrompt(loaded);
return { files: loaded, formattedPrompt };
}
/**
* Build a formatted prompt from loaded memory files
*/
function buildMemoryPrompt(files: MemoryFile[]): string {
if (files.length === 0) return '';
const sections = files.map((file) => {
return `## ${file.name.replace('.md', '').toUpperCase()}
${file.content}`;
});
return `# Project Memory
The following learnings and decisions from previous work are relevant to this task.
**IMPORTANT**: Review these carefully before making changes that could conflict with past decisions.
---
${sections.join('\n\n---\n\n')}
---
`;
}
/**
* Increment a usage stat in a memory file
* Uses file locking to prevent race conditions from concurrent updates
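*
* @example
* // Illustrative call; assumes node:fs/promises satisfies MemoryFsModule.
* await incrementUsageStat(path.join(getMemoryDir('/path/to/project'), 'auth.md'), 'loaded', fs.promises);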
*/
export async function incrementUsageStat(
filePath: string,
stat: keyof UsageStats,
fsModule: MemoryFsModule
): Promise<void> {
await withFileLock(filePath, async () => {
try {
const content = (await fsModule.readFile(filePath, 'utf-8')) as string;
const { metadata, body } = parseFrontmatter(content);
metadata.usageStats[stat]++;
// serializeFrontmatter ends with "---", add newline before body
const newContent = serializeFrontmatter(metadata) + '\n' + body;
await fsModule.writeFile(filePath, newContent);
} catch {
// File doesn't exist or can't be updated - that's fine
}
});
}
/**
* Simple memory file reference for usage tracking
*/
export interface SimpleMemoryFile {
name: string;
content: string;
}
/**
* Record memory usage after feature completion
* Updates usage stats based on what was actually referenced
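*
* @example
* // Illustrative call; `result.files` would come from a prior loadRelevantMemory call,
* // and fs.promises is assumed to satisfy MemoryFsModule.
* await recordMemoryUsage(
*   '/path/to/project',
*   result.files.map((f) => ({ name: f.name, content: f.content })),
*   agentOutput,
*   true,
*   fs.promises
* );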
*/
export async function recordMemoryUsage(
projectPath: string,
loadedFiles: SimpleMemoryFile[],
agentOutput: string,
success: boolean,
fsModule: MemoryFsModule
): Promise<void> {
const memoryDir = getMemoryDir(projectPath);
for (const file of loadedFiles) {
const filePath = path.join(memoryDir, file.name);
// Check if agent actually referenced this file's content
// Simple heuristic: the file counts as referenced if at least 3 of its terms appear in the output
const fileTerms = extractTerms(file.content);
const outputTerms = extractTerms(agentOutput);
const wasReferenced = countMatches(fileTerms, outputTerms) >= 3;
if (wasReferenced) {
await incrementUsageStat(filePath, 'referenced', fsModule);
if (success) {
await incrementUsageStat(filePath, 'successfulFeatures', fsModule);
}
}
}
}
/**
* Format a learning entry for appending to a memory file
* Uses ADR-style format for rich context
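*
* @example
* // Illustrative entry; the field values are hypothetical.
* formatLearning({
*   type: 'decision',
*   category: 'auth',
*   content: 'Store session tokens in httpOnly cookies',
*   why: 'Keeps tokens out of reach of client-side scripts',
* });
* // => '\n### Store session tokens in httpOnly cookies (YYYY-MM-DD)\n- **Why:** ...'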
*/
export function formatLearning(learning: LearningEntry): string {
const date = new Date().toISOString().split('T')[0];
const lines: string[] = [];
if (learning.type === 'decision') {
lines.push(`\n### ${learning.content} (${date})`);
if (learning.context) lines.push(`- **Context:** ${learning.context}`);
if (learning.why) lines.push(`- **Why:** ${learning.why}`);
if (learning.rejected) lines.push(`- **Rejected:** ${learning.rejected}`);
if (learning.tradeoffs) lines.push(`- **Trade-offs:** ${learning.tradeoffs}`);
if (learning.breaking) lines.push(`- **Breaking if changed:** ${learning.breaking}`);
return lines.join('\n');
}
if (learning.type === 'gotcha') {
lines.push(`\n#### [Gotcha] ${learning.content} (${date})`);
if (learning.context) lines.push(`- **Situation:** ${learning.context}`);
if (learning.why) lines.push(`- **Root cause:** ${learning.why}`);
if (learning.tradeoffs) lines.push(`- **How to avoid:** ${learning.tradeoffs}`);
return lines.join('\n');
}
// Pattern or learning
const prefix = learning.type === 'pattern' ? '[Pattern]' : '[Learned]';
lines.push(`\n#### ${prefix} ${learning.content} (${date})`);
if (learning.context) lines.push(`- **Problem solved:** ${learning.context}`);
if (learning.why) lines.push(`- **Why this works:** ${learning.why}`);
if (learning.tradeoffs) lines.push(`- **Trade-offs:** ${learning.tradeoffs}`);
return lines.join('\n');
}
/**
* Append a learning to the appropriate category file
* Creates the file with frontmatter if it doesn't exist
* Uses file locking to prevent TOCTOU race conditions
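*
* @example
* // Illustrative call; assumes node:fs/promises satisfies MemoryFsModule.
* await appendLearning('/path/to/project', {
*   type: 'gotcha',
*   category: 'Terminals',
*   content: 'PTY resize events must be debounced',
*   why: 'Rapid resizes can drop output from the shell',
* }, fs.promises);
* // Writes to .automaker/memory/terminals.md, creating it with frontmatter if missing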
*/
export async function appendLearning(
projectPath: string,
learning: LearningEntry,
fsModule: MemoryFsModule
): Promise<void> {
console.log(
`[MemoryLoader] appendLearning called: category=${learning.category}, type=${learning.type}`
);
const memoryDir = getMemoryDir(projectPath);
// Sanitize category name: lowercase, replace spaces with hyphens, remove special chars
const sanitizedCategory = learning.category
.toLowerCase()
.replace(/\s+/g, '-')
.replace(/[^a-z0-9-]/g, '');
const fileName = `${sanitizedCategory || 'general'}.md`;
const filePath = path.join(memoryDir, fileName);
// Use file locking to prevent race conditions when concurrent calls within
// this process try to create or append to the same file simultaneously
await withFileLock(filePath, async () => {
try {
await fsModule.access(filePath);
// File exists, append to it
const formatted = formatLearning(learning);
await fsModule.appendFile(filePath, '\n' + formatted);
console.log(`[MemoryLoader] Appended learning to existing file: ${fileName}`);
} catch {
// File doesn't exist, create it with frontmatter
console.log(`[MemoryLoader] Creating new memory file: ${fileName}`);
const metadata: MemoryMetadata = {
tags: [sanitizedCategory || 'general'],
summary: `${learning.category} implementation decisions and patterns`,
relevantTo: [sanitizedCategory || 'general'],
importance: 0.7,
relatedFiles: [],
usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
};
const content =
serializeFrontmatter(metadata) + `\n# ${learning.category}\n` + formatLearning(learning);
await fsModule.writeFile(filePath, content);
}
});
}
/**
* Initialize the memory folder for a project
* Creates starter files if the folder doesn't exist
*/
export async function initializeMemoryFolder(
projectPath: string,
fsModule: MemoryFsModule
): Promise<void> {
const memoryDir = getMemoryDir(projectPath);
try {
await fsModule.access(memoryDir);
// Already exists
return;
} catch {
// Create the directory
await fsModule.mkdir(memoryDir, { recursive: true });
// Create _index.md
const indexMetadata: MemoryMetadata = {
tags: ['index', 'overview'],
summary: 'Overview of project memory categories',
relevantTo: ['project', 'memory', 'overview'],
importance: 0.5,
relatedFiles: [],
usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
};
const indexContent =
serializeFrontmatter(indexMetadata) +
`
# Project Memory Index
This folder contains agent learnings organized by category.
Categories are created automatically as agents work on features.
## How This Works
1. After each successful feature, learnings are extracted and categorized
2. Relevant memory files are loaded into agent context for future features
3. Usage statistics help prioritize which memories are most helpful
## Categories
- **gotchas.md** - Mistakes and edge cases to avoid
- Other categories are created automatically based on feature work
`;
await fsModule.writeFile(path.join(memoryDir, '_index.md'), indexContent);
// Create gotchas.md
const gotchasMetadata: MemoryMetadata = {
tags: ['gotcha', 'mistake', 'edge-case', 'bug', 'warning'],
summary: 'Mistakes and edge cases to avoid',
relevantTo: ['error', 'bug', 'fix', 'issue', 'problem'],
importance: 0.9,
relatedFiles: [],
usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
};
const gotchasContent =
serializeFrontmatter(gotchasMetadata) +
`
# Gotchas
Mistakes and edge cases to avoid. These are lessons learned from past issues.
---
`;
await fsModule.writeFile(path.join(memoryDir, 'gotchas.md'), gotchasContent);
console.log(`[MemoryLoader] Initialized memory folder at ${memoryDir}`);
}
}