Mirror of https://github.com/AutoMaker-Org/automaker.git (synced 2026-01-31 06:42:03 +00:00)
feat: add project-scoped agent memory system (#351)
* memory

* feat: add smart memory selection with task context

  - Add taskContext parameter to loadContextFiles for intelligent file selection
  - Memory files are scored based on tag matching with task keywords
  - Category name matching (e.g., "terminals" matches terminals.md) with 4x weight
  - Usage statistics influence scoring (files that helped before rank higher)
  - Limit to top 5 files + always include gotchas.md
  - Auto-mode passes feature title/description as context
  - Chat sessions pass user message as context

  This prevents loading 40+ memory files and exhausting the context limit.

* refactor: enhance auto-mode service and context loader

  - Improved context loading by adding task context for better memory selection
  - Updated JSON parsing logic to handle various formats with robust error handling
  - Introduced file locking to prevent race conditions during memory file updates
  - Enhanced metadata handling in memory files, including validation and sanitization
  - Refactored scoring logic for context files to improve selection accuracy based on task relevance

* refactor: enhance learning extraction and formatting in auto-mode service

  - Refined the learning-extraction user prompt to focus on meaningful insights and structured JSON output
  - Extended the LearningEntry interface with additional context fields to better document decisions and patterns
  - Updated formatLearning to an Architecture Decision Record (ADR) style, giving recorded learnings richer context
  - Added detailed logging for traceability during learning extraction and appending

* feat: integrate stripProviderPrefix utility for model ID handling

  - Added stripProviderPrefix to relevant routes so providers receive bare model IDs
  - Updated model references in executeQuery calls across multiple files for consistent model ID handling
  - Introduced memoryExtractionModel in settings for learning-extraction tasks

* feat: enhance error handling and server offline management in board actions

  - handleRunFeature and handleStartImplementation now throw errors so callers can handle them
  - Added connection-error detection and server-offline handling, redirecting users to the login page when the server is unreachable
  - Follow-up feature logic now includes rollback and improved user feedback in error scenarios

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: webdevcody <webdevcody@gmail.com>
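The selection scheme the commit describes (tag matches ×3, relevantTo ×2, summary ×1, category-name ×4, then importance and usage multipliers) can be sketched in isolation. This is an illustrative standalone sketch, not the module's actual exports; `Meta`, `overlap`, and `relevance` are hypothetical names:

```typescript
// Illustrative sketch of the relevance scoring described above.
// Weights mirror the commit message; names are hypothetical.
interface Meta {
  tags: string[];
  relevantTo: string[];
  summaryTerms: string[];
  importance: number; // 0..1
  usageScore: number; // multiplier from past usefulness (~0.5..1.0)
}

function overlap(a: string[], b: string[]): number {
  const set = new Set(b.map((t) => t.toLowerCase()));
  return a.filter((t) => set.has(t.toLowerCase())).length;
}

function relevance(fileName: string, meta: Meta, taskTerms: string[]): number {
  const tagScore = overlap(meta.tags, taskTerms) * 3;
  const relevantToScore = overlap(meta.relevantTo, taskTerms) * 2;
  const summaryScore = overlap(meta.summaryTerms, taskTerms);
  // Category-name match has the highest weight: "terminals" matches terminals.md
  const categoryTerms = fileName
    .replace('.md', '')
    .split(/[-_]/)
    .filter((t) => t.length > 2);
  const categoryScore = overlap(categoryTerms, taskTerms) * 4;
  return (
    (tagScore + relevantToScore + summaryScore + categoryScore) *
    meta.importance *
    meta.usageScore
  );
}

// Category match (4) + one summary term (1), neutral multipliers => 5
const score = relevance(
  'terminals.md',
  { tags: ['pty', 'shell'], relevantTo: ['terminal'], summaryTerms: ['terminal', 'resize'], importance: 1, usageScore: 1 },
  ['terminals', 'resize']
);
```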
@@ -2,15 +2,30 @@
 * Context Loader - Loads project context files for agent prompts
 *
 * Provides a shared utility to load context files from .automaker/context/
 * and memory files from .automaker/memory/, formatting them as system prompt
 * content. Used by both auto-mode-service and agent-service to ensure all
 * agents are aware of project context and past learnings.
 *
 * Context files contain project-specific rules, conventions, and guidelines
 * that agents must follow when working on the project.
 *
 * Memory files contain learnings from past agent work, including decisions,
 * gotchas, and patterns that should inform future work.
 */

import path from 'path';
import { secureFs } from '@automaker/platform';
import {
  getMemoryDir,
  parseFrontmatter,
  initializeMemoryFolder,
  extractTerms,
  calculateUsageScore,
  countMatches,
  incrementUsageStat,
  type MemoryFsModule,
  type MemoryMetadata,
} from './memory-loader.js';

/**
 * Metadata structure for context files
@@ -30,22 +45,48 @@ export interface ContextFileInfo {
  description?: string;
}

/**
 * Memory file info (from .automaker/memory/)
 */
export interface MemoryFileInfo {
  name: string;
  path: string;
  content: string;
  category: string;
}

/**
 * Result of loading context files
 */
export interface ContextFilesResult {
  files: ContextFileInfo[];
  memoryFiles: MemoryFileInfo[];
  formattedPrompt: string;
}

/**
 * File system module interface for context loading
 * Compatible with secureFs from @automaker/platform
 * Includes write methods needed for memory initialization
 */
export interface ContextFsModule {
  access: (path: string) => Promise<void>;
  readdir: (path: string) => Promise<string[]>;
  readFile: (path: string, encoding?: BufferEncoding) => Promise<string | Buffer>;
  // Write methods needed for memory operations
  writeFile: (path: string, content: string) => Promise<void>;
  mkdir: (path: string, options?: { recursive?: boolean }) => Promise<string | undefined>;
  appendFile: (path: string, content: string) => Promise<void>;
}

/**
 * Task context for smart memory selection
 */
export interface TaskContext {
  /** Title or name of the current task/feature */
  title: string;
  /** Description of what the task involves */
  description?: string;
}

/**
@@ -56,6 +97,14 @@ export interface LoadContextFilesOptions {
  projectPath: string;
  /** Optional custom secure fs module (for dependency injection) */
  fsModule?: ContextFsModule;
  /** Whether to include memory files from .automaker/memory/ (default: true) */
  includeMemory?: boolean;
  /** Whether to initialize memory folder if it doesn't exist (default: true) */
  initializeMemory?: boolean;
  /** Task context for smart memory selection - if not provided, only loads high-importance files */
  taskContext?: TaskContext;
  /** Maximum number of memory files to load (default: 5) */
  maxMemoryFiles?: number;
}

/**
@@ -130,17 +179,21 @@ ${formattedFiles.join('\n\n---\n\n')}

/**
 * Load context files from a project's .automaker/context/ directory
 * and optionally memory files from .automaker/memory/
 *
 * This function loads all .md and .txt files from the context directory,
 * along with their metadata (descriptions), and formats them into a
 * system prompt that can be prepended to agent prompts.
 *
 * By default, it also loads memory files containing learnings from past
 * agent work, which helps agents make better decisions.
 *
 * @param options - Configuration options
 * @returns Promise resolving to context files, memory files, and formatted prompt
 *
 * @example
 * ```typescript
 * const { formattedPrompt, files, memoryFiles } = await loadContextFiles({
 *   projectPath: '/path/to/project'
 * });
 *
@@ -154,9 +207,20 @@ ${formattedFiles.join('\n\n---\n\n')}
export async function loadContextFiles(
  options: LoadContextFilesOptions
): Promise<ContextFilesResult> {
  const {
    projectPath,
    fsModule = secureFs,
    includeMemory = true,
    initializeMemory = true,
    taskContext,
    maxMemoryFiles = 5,
  } = options;
  const contextDir = path.resolve(getContextDir(projectPath));

  const files: ContextFileInfo[] = [];
  const memoryFiles: MemoryFileInfo[] = [];

  // Load context files
  try {
    // Check if directory exists
    await fsModule.access(contextDir);
@@ -170,41 +234,218 @@ export async function loadContextFiles(
      return (lower.endsWith('.md') || lower.endsWith('.txt')) && f !== 'context-metadata.json';
    });

    if (textFiles.length > 0) {
      // Load metadata for descriptions
      const metadata = await loadContextMetadata(contextDir, fsModule);

      // Load each file with its content and metadata
      for (const fileName of textFiles) {
        const filePath = path.join(contextDir, fileName);
        try {
          const content = await fsModule.readFile(filePath, 'utf-8');
          files.push({
            name: fileName,
            path: filePath,
            content: content as string,
            description: metadata.files[fileName]?.description,
          });
        } catch (error) {
          console.warn(`[ContextLoader] Failed to read context file ${fileName}:`, error);
        }
      }
    }
  } catch {
    // Context directory doesn't exist or is inaccessible - that's fine
  }

  // Load memory files if enabled (with smart selection)
  if (includeMemory) {
    const memoryDir = getMemoryDir(projectPath);

    // Initialize memory folder if needed
    if (initializeMemory) {
      try {
        await initializeMemoryFolder(projectPath, fsModule as MemoryFsModule);
      } catch {
        // Initialization failed, continue without memory
      }
    }

    try {
      await fsModule.access(memoryDir);
      const allMemoryFiles = await fsModule.readdir(memoryDir);

      // Filter for markdown memory files (except _index.md, case-insensitive)
      const memoryMdFiles = allMemoryFiles.filter((f) => {
        const lower = f.toLowerCase();
        return lower.endsWith('.md') && lower !== '_index.md';
      });

      // Extract terms from task context for matching
      const taskTerms = taskContext
        ? extractTerms(taskContext.title + ' ' + (taskContext.description || ''))
        : [];

      // Score and load memory files
      const scoredFiles: Array<{
        fileName: string;
        filePath: string;
        body: string;
        metadata: MemoryMetadata;
        score: number;
      }> = [];

      for (const fileName of memoryMdFiles) {
        const filePath = path.join(memoryDir, fileName);
        try {
          const rawContent = await fsModule.readFile(filePath, 'utf-8');
          const { metadata, body } = parseFrontmatter(rawContent as string);

          // Skip empty files
          if (!body.trim()) continue;

          // Calculate relevance score
          let score = 0;

          if (taskTerms.length > 0) {
            // Match task terms against file metadata
            const tagScore = countMatches(metadata.tags, taskTerms) * 3;
            const relevantToScore = countMatches(metadata.relevantTo, taskTerms) * 2;
            const summaryTerms = extractTerms(metadata.summary);
            const summaryScore = countMatches(summaryTerms, taskTerms);
            // Split category name on hyphens/underscores for better matching
            // e.g., "authentication-decisions" matches "authentication"
            const categoryTerms = fileName
              .replace('.md', '')
              .split(/[-_]/)
              .filter((t) => t.length > 2);
            const categoryScore = countMatches(categoryTerms, taskTerms) * 4;

            // Usage-based scoring (files that helped before rank higher)
            const usageScore = calculateUsageScore(metadata.usageStats);

            score =
              (tagScore + relevantToScore + summaryScore + categoryScore) *
              metadata.importance *
              usageScore;
          } else {
            // No task context - use importance as score
            score = metadata.importance;
          }

          scoredFiles.push({ fileName, filePath, body, metadata, score });
        } catch (error) {
          console.warn(`[ContextLoader] Failed to read memory file ${fileName}:`, error);
        }
      }

      // Sort by score (highest first)
      scoredFiles.sort((a, b) => b.score - a.score);

      // Select files to load:
      // 1. Always include gotchas.md if it exists (unless maxMemoryFiles=0)
      // 2. Include high-importance files (importance >= 0.9)
      // 3. Include top scoring files up to maxMemoryFiles
      const selectedFiles = new Set<string>();

      // Skip selection if maxMemoryFiles is 0
      if (maxMemoryFiles > 0) {
        // Always include gotchas.md
        const gotchasFile = scoredFiles.find((f) => f.fileName === 'gotchas.md');
        if (gotchasFile) {
          selectedFiles.add('gotchas.md');
        }

        // Add high-importance files
        for (const file of scoredFiles) {
          if (file.metadata.importance >= 0.9 && selectedFiles.size < maxMemoryFiles) {
            selectedFiles.add(file.fileName);
          }
        }

        // Add top scoring files (if we have task context and room)
        if (taskTerms.length > 0) {
          for (const file of scoredFiles) {
            if (file.score > 0 && selectedFiles.size < maxMemoryFiles) {
              selectedFiles.add(file.fileName);
            }
          }
        }
      }

      // Load selected files and increment loaded stat
      for (const file of scoredFiles) {
        if (selectedFiles.has(file.fileName)) {
          memoryFiles.push({
            name: file.fileName,
            path: file.filePath,
            content: file.body,
            category: file.fileName.replace('.md', ''),
          });

          // Increment the 'loaded' stat for this file (CRITICAL FIX)
          // This makes calculateUsageScore work correctly
          try {
            await incrementUsageStat(file.filePath, 'loaded', fsModule as MemoryFsModule);
          } catch {
            // Non-critical - continue even if stat update fails
          }
        }
      }

      if (memoryFiles.length > 0) {
        const selectedNames = memoryFiles.map((f) => f.category).join(', ');
        console.log(`[ContextLoader] Selected memory files: ${selectedNames}`);
      }
    } catch {
      // Memory directory doesn't exist - that's fine
    }
  }

  // Build combined prompt
  const contextPrompt = buildContextPrompt(files);
  const memoryPrompt = buildMemoryPrompt(memoryFiles);
  const formattedPrompt = [contextPrompt, memoryPrompt].filter(Boolean).join('\n\n');

  const loadedItems = [];
  if (files.length > 0) {
    loadedItems.push(`${files.length} context file(s)`);
  }
  if (memoryFiles.length > 0) {
    loadedItems.push(`${memoryFiles.length} memory file(s)`);
  }
  if (loadedItems.length > 0) {
    console.log(`[ContextLoader] Loaded ${loadedItems.join(' and ')}`);
  }

  return { files, memoryFiles, formattedPrompt };
}

/**
 * Build a formatted prompt from memory files
 */
function buildMemoryPrompt(memoryFiles: MemoryFileInfo[]): string {
  if (memoryFiles.length === 0) {
    return '';
  }

  const sections = memoryFiles.map((file) => {
    return `## ${file.category.toUpperCase()}

${file.content}`;
  });

  return `# Project Memory

The following learnings and decisions from previous work are available.
**IMPORTANT**: Review these carefully before making changes that could conflict with past decisions.

---

${sections.join('\n\n---\n\n')}

---
`;
}

/**

@@ -63,5 +63,31 @@ export {
  type ContextMetadata,
  type ContextFileInfo,
  type ContextFilesResult,
  type ContextFsModule,
  type LoadContextFilesOptions,
  type MemoryFileInfo,
  type TaskContext,
} from './context-loader.js';

// Memory loading
export {
  loadRelevantMemory,
  initializeMemoryFolder,
  appendLearning,
  recordMemoryUsage,
  getMemoryDir,
  parseFrontmatter,
  serializeFrontmatter,
  extractTerms,
  calculateUsageScore,
  countMatches,
  incrementUsageStat,
  formatLearning,
  type MemoryFsModule,
  type MemoryMetadata,
  type MemoryFile,
  type MemoryLoadResult,
  type UsageStats,
  type LearningEntry,
  type SimpleMemoryFile,
} from './memory-loader.js';
libs/utils/src/memory-loader.ts (new file, 685 lines)
@@ -0,0 +1,685 @@
/**
 * Memory Loader - Smart loading of agent memory files
 *
 * Loads relevant memory files from .automaker/memory/ based on:
 * - Tag matching with feature keywords
 * - Historical usefulness (usage stats)
 * - File importance
 *
 * Memory files use YAML frontmatter for metadata.
 */

import path from 'path';

/**
 * File system module interface (compatible with secureFs)
 */
export interface MemoryFsModule {
  access: (path: string) => Promise<void>;
  readdir: (path: string) => Promise<string[]>;
  readFile: (path: string, encoding?: BufferEncoding) => Promise<string | Buffer>;
  writeFile: (path: string, content: string) => Promise<void>;
  mkdir: (path: string, options?: { recursive?: boolean }) => Promise<string | undefined>;
  appendFile: (path: string, content: string) => Promise<void>;
}

/**
 * Usage statistics for learning which files are helpful
 */
export interface UsageStats {
  loaded: number;
  referenced: number;
  successfulFeatures: number;
}

/**
 * Metadata stored in YAML frontmatter of memory files
 */
export interface MemoryMetadata {
  tags: string[];
  summary: string;
  relevantTo: string[];
  importance: number;
  relatedFiles: string[];
  usageStats: UsageStats;
}

/**
 * A loaded memory file with content and metadata
 */
export interface MemoryFile {
  name: string;
  content: string;
  metadata: MemoryMetadata;
}

/**
 * Result of loading memory files
 */
export interface MemoryLoadResult {
  files: MemoryFile[];
  formattedPrompt: string;
}

/**
 * Learning entry to be recorded
 * Based on Architecture Decision Record (ADR) format for rich context
 */
export interface LearningEntry {
  category: string;
  type: 'decision' | 'learning' | 'pattern' | 'gotcha';
  content: string;
  context?: string; // Problem being solved or situation faced
  why?: string; // Reasoning behind the approach
  rejected?: string; // Alternative considered and why rejected
  tradeoffs?: string; // What became easier/harder
  breaking?: string; // What breaks if changed/removed
}

/**
 * Create default metadata for new memory files
 * Returns a new object each time to avoid shared mutable state
 */
function createDefaultMetadata(): MemoryMetadata {
  return {
    tags: [],
    summary: '',
    relevantTo: [],
    importance: 0.5,
    relatedFiles: [],
    usageStats: {
      loaded: 0,
      referenced: 0,
      successfulFeatures: 0,
    },
  };
}

/**
 * In-memory locks to prevent race conditions when updating files
 */
const fileLocks = new Map<string, Promise<void>>();

/**
 * Acquire a lock for a file path, execute the operation, then release
 */
async function withFileLock<T>(filePath: string, operation: () => Promise<T>): Promise<T> {
  // Wait for any existing lock on this file
  const existingLock = fileLocks.get(filePath);
  if (existingLock) {
    await existingLock;
  }

  // Create a new lock
  let releaseLock: () => void;
  const lockPromise = new Promise<void>((resolve) => {
    releaseLock = resolve;
  });
  fileLocks.set(filePath, lockPromise);

  try {
    return await operation();
  } finally {
    releaseLock!();
    fileLocks.delete(filePath);
  }
}

/**
 * Get the memory directory path for a project
 */
export function getMemoryDir(projectPath: string): string {
  return path.join(projectPath, '.automaker', 'memory');
}
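The promise-chaining pattern in withFileLock can be exercised in isolation. The sketch below is a minimal standalone re-implementation (not the module's export) showing that two concurrent read-modify-write cycles on the same key serialize instead of clobbering each other:

```typescript
// Minimal re-sketch of the promise-chain file lock used above.
const locks = new Map<string, Promise<void>>();

async function withLock<T>(key: string, op: () => Promise<T>): Promise<T> {
  const prev = locks.get(key);
  if (prev) await prev; // wait for the current holder to finish
  let release!: () => void;
  locks.set(key, new Promise<void>((resolve) => (release = resolve)));
  try {
    return await op();
  } finally {
    release();
    locks.delete(key);
  }
}

// Two concurrent read-modify-write cycles on the same key do not interleave:
// without the lock, both would read 0 and the final counter would be 1.
let counter = 0;
const bump = () =>
  withLock('file.md', async () => {
    const seen = counter; // read
    await new Promise((r) => setTimeout(r, 5)); // simulate async I/O
    counter = seen + 1; // write
  });

const done = Promise.all([bump(), bump()]).then(() => counter);
```

Note the scheme guarantees pairwise serialization; with three or more simultaneous waiters on the same key, all waiters await the same earlier promise, so it is a best-effort guard rather than a full mutex queue.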
/**
 * Parse YAML frontmatter from markdown content
 * Returns the metadata and the content without frontmatter
 */
export function parseFrontmatter(content: string): {
  metadata: MemoryMetadata;
  body: string;
} {
  // Handle both Unix (\n) and Windows (\r\n) line endings
  const frontmatterRegex = /^---\s*\r?\n([\s\S]*?)\r?\n---\s*\r?\n/;
  const match = content.match(frontmatterRegex);

  if (!match) {
    return { metadata: createDefaultMetadata(), body: content };
  }

  const frontmatterStr = match[1];
  const body = content.slice(match[0].length);

  try {
    // Simple YAML parsing for our specific format
    const metadata: MemoryMetadata = createDefaultMetadata();

    // Parse tags: [tag1, tag2, tag3]
    const tagsMatch = frontmatterStr.match(/tags:\s*\[(.*?)\]/);
    if (tagsMatch) {
      metadata.tags = tagsMatch[1]
        .split(',')
        .map((t) => t.trim().replace(/['"]/g, ''))
        .filter((t) => t.length > 0); // Filter out empty strings
    }

    // Parse summary
    const summaryMatch = frontmatterStr.match(/summary:\s*(.+)/);
    if (summaryMatch) {
      metadata.summary = summaryMatch[1].trim().replace(/^["']|["']$/g, '');
    }

    // Parse relevantTo: [term1, term2]
    const relevantMatch = frontmatterStr.match(/relevantTo:\s*\[(.*?)\]/);
    if (relevantMatch) {
      metadata.relevantTo = relevantMatch[1]
        .split(',')
        .map((t) => t.trim().replace(/['"]/g, ''))
        .filter((t) => t.length > 0); // Filter out empty strings
    }

    // Parse importance (validate range 0-1)
    const importanceMatch = frontmatterStr.match(/importance:\s*([\d.]+)/);
    if (importanceMatch) {
      const value = parseFloat(importanceMatch[1]);
      metadata.importance = Math.max(0, Math.min(1, value)); // Clamp to 0-1
    }

    // Parse relatedFiles: [file1.md, file2.md]
    const relatedMatch = frontmatterStr.match(/relatedFiles:\s*\[(.*?)\]/);
    if (relatedMatch) {
      metadata.relatedFiles = relatedMatch[1]
        .split(',')
        .map((t) => t.trim().replace(/['"]/g, ''))
        .filter((t) => t.length > 0); // Filter out empty strings
    }

    // Parse usageStats
    const loadedMatch = frontmatterStr.match(/loaded:\s*(\d+)/);
    const referencedMatch = frontmatterStr.match(/referenced:\s*(\d+)/);
    const successMatch = frontmatterStr.match(/successfulFeatures:\s*(\d+)/);

    if (loadedMatch) metadata.usageStats.loaded = parseInt(loadedMatch[1], 10);
    if (referencedMatch) metadata.usageStats.referenced = parseInt(referencedMatch[1], 10);
    if (successMatch) metadata.usageStats.successfulFeatures = parseInt(successMatch[1], 10);

    return { metadata, body };
  } catch {
    return { metadata: createDefaultMetadata(), body: content };
  }
}
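The frontmatter grammar assumed above (flat keys, bracketed inline lists, `---` delimiters) can be illustrated with the same delimiter regex on a sample file. The `parseTags` helper below is a hypothetical stand-in for the tags-parsing branch, not the module's API:

```typescript
// Sample memory file in the format parseFrontmatter expects.
const sample = `---
tags: [auth, jwt]
summary: "Use HS256 for session tokens"
importance: 0.8
---
# Auth decisions
`;

// Same delimiter regex as above: tolerant of \r\n and trailing spaces.
const fm = /^---\s*\r?\n([\s\S]*?)\r?\n---\s*\r?\n/;
const match = sample.match(fm);
const header = match ? match[1] : '';
const body = match ? sample.slice(match[0].length) : sample;

// Hypothetical stand-in for the tags-parsing branch above.
function parseTags(src: string): string[] {
  const m = src.match(/tags:\s*\[(.*?)\]/);
  if (!m) return [];
  return m[1]
    .split(',')
    .map((t) => t.trim().replace(/['"]/g, ''))
    .filter((t) => t.length > 0);
}

const tags = parseTags(header); // ['auth', 'jwt']
```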
/**
 * Escape a string for safe YAML output
 * Quotes strings containing special characters
 */
function escapeYamlString(str: string): string {
  // If string contains special YAML characters, wrap in quotes
  if (/[:\[\]{}#&*!|>'"%@`\n\r]/.test(str) || str.trim() !== str) {
    // Escape any existing quotes and wrap in double quotes
    return `"${str.replace(/"/g, '\\"')}"`;
  }
  return str;
}

/**
 * Serialize metadata back to YAML frontmatter
 */
export function serializeFrontmatter(metadata: MemoryMetadata): string {
  const escapedTags = metadata.tags.map(escapeYamlString);
  const escapedRelevantTo = metadata.relevantTo.map(escapeYamlString);
  const escapedRelatedFiles = metadata.relatedFiles.map(escapeYamlString);
  const escapedSummary = escapeYamlString(metadata.summary);

  return `---
tags: [${escapedTags.join(', ')}]
summary: ${escapedSummary}
relevantTo: [${escapedRelevantTo.join(', ')}]
importance: ${metadata.importance}
relatedFiles: [${escapedRelatedFiles.join(', ')}]
usageStats:
  loaded: ${metadata.usageStats.loaded}
  referenced: ${metadata.usageStats.referenced}
  successfulFeatures: ${metadata.usageStats.successfulFeatures}
---`;
}
/**
 * Extract terms from text for matching
 * Splits on spaces, removes common words, lowercases
 */
export function extractTerms(text: string): string[] {
  const stopWords = new Set([
    'a', 'an', 'the', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for', 'of',
    'with', 'by', 'is', 'it', 'this', 'that', 'be', 'as', 'are', 'was', 'were',
    'been', 'being', 'have', 'has', 'had', 'do', 'does', 'did', 'will', 'would',
    'could', 'should', 'may', 'might', 'must', 'shall', 'can', 'need', 'dare',
    'ought', 'used', 'add', 'create', 'implement', 'build', 'make', 'update',
    'fix', 'change', 'modify',
  ]);

  return text
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, ' ')
    .split(/\s+/)
    .filter((word) => word.length > 2 && !stopWords.has(word));
}
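The normalization pipeline in extractTerms can be reproduced standalone to show what survives a typical feature title. The stop list below is a truncated illustrative subset of the real one:

```typescript
// Standalone reproduction of extractTerms' normalization pipeline.
// The stop-word list here is a small illustrative subset.
const stop = new Set(['add', 'the', 'to', 'a', 'for', 'and']);

function terms(text: string): string[] {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, ' ') // punctuation and hyphens become spaces
    .split(/\s+/)
    .filter((w) => w.length > 2 && !stop.has(w));
}

// "Add" is a stop word, "to"/"the" are filtered, the hyphen splits "dark-mode".
const t = terms('Add dark-mode toggle to the settings page');
```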
/**
 * Count how many terms match between two arrays (case-insensitive)
 */
export function countMatches(arr1: string[], arr2: string[]): number {
  const set2 = new Set(arr2.map((t) => t.toLowerCase()));
  return arr1.filter((t) => set2.has(t.toLowerCase())).length;
}

/**
 * Calculate usage-based score for a memory file
 * Files that are referenced in successful features get higher scores
 */
export function calculateUsageScore(stats: UsageStats): number {
  if (stats.loaded === 0) return 1; // New file, neutral score

  const referenceRate = stats.referenced / stats.loaded;
  const successRate = stats.referenced > 0 ? stats.successfulFeatures / stats.referenced : 0;

  // Base 0.5 + up to 0.3 for reference rate + up to 0.2 for success rate
  return 0.5 + referenceRate * 0.3 + successRate * 0.2;
}
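The usage-score math above is small enough to re-derive standalone; this sketch reproduces it to show how reference and success rates shape the multiplier, from the 0.5 floor for never-referenced files up to 1.0 for consistently helpful ones:

```typescript
// Standalone reproduction of the usage-score math above.
interface Stats {
  loaded: number;
  referenced: number;
  successfulFeatures: number;
}

function usageScore(stats: Stats): number {
  if (stats.loaded === 0) return 1; // new file: neutral score
  const referenceRate = stats.referenced / stats.loaded;
  const successRate = stats.referenced > 0 ? stats.successfulFeatures / stats.referenced : 0;
  return 0.5 + referenceRate * 0.3 + successRate * 0.2;
}

const fresh = usageScore({ loaded: 0, referenced: 0, successfulFeatures: 0 }); // neutral 1
const ignored = usageScore({ loaded: 10, referenced: 0, successfulFeatures: 0 }); // floor 0.5
const helpful = usageScore({ loaded: 10, referenced: 10, successfulFeatures: 10 }); // ~1.0
```

Because the result multiplies the relevance score, a file that is loaded often but never referenced is effectively halved in rank rather than excluded outright.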
/**
|
||||
* Load relevant memory files for a feature
|
||||
*
|
||||
* Selects files based on:
|
||||
* - Tag matching with feature keywords (weight: 3)
|
||||
* - RelevantTo matching (weight: 2)
|
||||
* - Summary matching (weight: 1)
|
||||
* - Usage score (multiplier)
|
||||
* - Importance (multiplier)
|
||||
*
|
||||
* Always includes gotchas.md
|
||||
*/
|
||||
export async function loadRelevantMemory(
|
||||
projectPath: string,
|
||||
featureTitle: string,
|
||||
featureDescription: string,
|
||||
fsModule: MemoryFsModule
|
||||
): Promise<MemoryLoadResult> {
|
||||
const memoryDir = getMemoryDir(projectPath);
|
||||
|
||||
try {
|
||||
await fsModule.access(memoryDir);
|
||||
} catch {
|
||||
// Memory directory doesn't exist yet
|
||||
return { files: [], formattedPrompt: '' };
|
||||
}
|
||||
|
||||
const allFiles = await fsModule.readdir(memoryDir);
|
||||
const featureTerms = extractTerms(featureTitle + ' ' + featureDescription);
|
  // Score each file
  const scored: Array<{ file: string; score: number; content: string; metadata: MemoryMetadata }> =
    [];

  for (const file of allFiles) {
    if (!file.endsWith('.md') || file === '_index.md') continue;

    const filePath = path.join(memoryDir, file);
    try {
      const content = (await fsModule.readFile(filePath, 'utf-8')) as string;
      const { metadata, body } = parseFrontmatter(content);

      // Calculate relevance score
      const tagScore = countMatches(metadata.tags, featureTerms) * 3;
      const relevantToScore = countMatches(metadata.relevantTo, featureTerms) * 2;
      const summaryTerms = extractTerms(metadata.summary);
      const summaryScore = countMatches(summaryTerms, featureTerms);

      // Usage-based scoring
      const usageScore = calculateUsageScore(metadata.usageStats);

      // Combined score
      const score = (tagScore + relevantToScore + summaryScore) * metadata.importance * usageScore;

      // Include if score > 0 or high importance
      if (score > 0 || metadata.importance >= 0.9) {
        scored.push({ file, score, content: body, metadata });
      }
    } catch {
      // Skip files that can't be read
    }
  }

  // Sort by score, take top 5
  const topFiles = scored.sort((a, b) => b.score - a.score).slice(0, 5);

  // Always include gotchas.md if it exists
  const toLoad = new Set(['gotchas.md', ...topFiles.map((f) => f.file)]);

  const loaded: MemoryFile[] = [];
  for (const file of toLoad) {
    const existing = scored.find((s) => s.file === file);
    if (existing) {
      loaded.push({
        name: file,
        content: existing.content,
        metadata: existing.metadata,
      });
    } else if (file === 'gotchas.md') {
      // Try to load gotchas.md even if it wasn't scored
      const gotchasPath = path.join(memoryDir, 'gotchas.md');
      try {
        const content = (await fsModule.readFile(gotchasPath, 'utf-8')) as string;
        const { metadata, body } = parseFrontmatter(content);
        loaded.push({ name: file, content: body, metadata });
      } catch {
        // gotchas.md doesn't exist yet
      }
    }
  }

  // Build formatted prompt
  const formattedPrompt = buildMemoryPrompt(loaded);

  return { files: loaded, formattedPrompt };
}
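The combined-score formula above can be illustrated with a standalone sketch. The `extractTerms` and `countMatches` below are simplified stand-ins (not this module's real implementations), and the metadata values are hypothetical:

```typescript
// Simplified stand-ins for the module's term helpers, for illustration only.
function extractTerms(text: string): string[] {
  return text
    .toLowerCase()
    .split(/[^a-z0-9]+/)
    .filter((t) => t.length > 2);
}

function countMatches(terms: string[], against: string[]): number {
  const set = new Set(against);
  return terms.filter((t) => set.has(t)).length;
}

// Hypothetical task and memory-file metadata.
const featureTerms = extractTerms('add terminal resize handling');
const metadata = {
  tags: ['terminal', 'resize'],
  relevantTo: ['terminal'],
  summary: 'How terminal resize events are handled',
  importance: 0.5,
};

const tagScore = countMatches(metadata.tags, featureTerms) * 3; // 2 matching tags * 3 = 6
const relevantToScore = countMatches(metadata.relevantTo, featureTerms) * 2; // 1 * 2 = 2
const summaryScore = countMatches(extractTerms(metadata.summary), featureTerms); // 2
const usageScore = 1; // neutral usage multiplier
const score = (tagScore + relevantToScore + summaryScore) * metadata.importance * usageScore;
console.log(score); // 5
```

Because term matches multiply against both importance and usage, a frequently-helpful high-importance file outranks one that merely shares keywords.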
/**
 * Build a formatted prompt from loaded memory files
 */
function buildMemoryPrompt(files: MemoryFile[]): string {
  if (files.length === 0) return '';

  const sections = files.map((file) => {
    return `## ${file.name.replace('.md', '').toUpperCase()}

${file.content}`;
  });

  return `# Project Memory

The following learnings and decisions from previous work are relevant to this task.
**IMPORTANT**: Review these carefully before making changes that could conflict with past decisions.

---

${sections.join('\n\n---\n\n')}

---
`;
}
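A quick sketch of the section-heading convention used above: each file name is stripped of its `.md` extension and upper-cased into a level-two heading.

```typescript
// A memory file named "terminals.md" becomes a "## TERMINALS" section heading.
const name = 'terminals.md';
const heading = `## ${name.replace('.md', '').toUpperCase()}`;
console.log(heading); // "## TERMINALS"
```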
/**
 * Increment a usage stat in a memory file
 * Uses file locking to prevent race conditions from concurrent updates
 */
export async function incrementUsageStat(
  filePath: string,
  stat: keyof UsageStats,
  fsModule: MemoryFsModule
): Promise<void> {
  await withFileLock(filePath, async () => {
    try {
      const content = (await fsModule.readFile(filePath, 'utf-8')) as string;
      const { metadata, body } = parseFrontmatter(content);

      metadata.usageStats[stat]++;

      // serializeFrontmatter ends with "---", add newline before body
      const newContent = serializeFrontmatter(metadata) + '\n' + body;
      await fsModule.writeFile(filePath, newContent);
    } catch {
      // File doesn't exist or can't be updated - that's fine
    }
  });
}
/**
 * Simple memory file reference for usage tracking
 */
export interface SimpleMemoryFile {
  name: string;
  content: string;
}
/**
 * Record memory usage after feature completion
 * Updates usage stats based on what was actually referenced
 */
export async function recordMemoryUsage(
  projectPath: string,
  loadedFiles: SimpleMemoryFile[],
  agentOutput: string,
  success: boolean,
  fsModule: MemoryFsModule
): Promise<void> {
  const memoryDir = getMemoryDir(projectPath);

  for (const file of loadedFiles) {
    const filePath = path.join(memoryDir, file.name);

    // Check if agent actually referenced this file's content
    // Simple heuristic: check if any significant terms from the file appear in output
    const fileTerms = extractTerms(file.content);
    const outputTerms = extractTerms(agentOutput);
    const wasReferenced = countMatches(fileTerms, outputTerms) >= 3;

    if (wasReferenced) {
      await incrementUsageStat(filePath, 'referenced', fsModule);
      if (success) {
        await incrementUsageStat(filePath, 'successfulFeatures', fsModule);
      }
    }
  }
}
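The "was referenced" heuristic boils down to counting shared terms between the memory file and the agent output; three or more shared terms counts as a hit. A minimal sketch with hypothetical term lists:

```typescript
// Hypothetical terms extracted from a memory file and from agent output.
const fileTerms = ['node-pty', 'resize', 'sigwinch', 'terminal'];
const outputTerms = new Set(['resize', 'terminal', 'sigwinch', 'render']);

// Count terms that appear in both; 3+ shared terms means "referenced".
const shared = fileTerms.filter((t) => outputTerms.has(t)).length;
const wasReferenced = shared >= 3;
console.log(shared, wasReferenced); // 3 true
```

The threshold is deliberately coarse: it trades precision for a cheap check that needs no extra model calls.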
/**
 * Format a learning entry for appending to a memory file
 * Uses ADR-style format for rich context
 */
export function formatLearning(learning: LearningEntry): string {
  const date = new Date().toISOString().split('T')[0];
  const lines: string[] = [];

  if (learning.type === 'decision') {
    lines.push(`\n### ${learning.content} (${date})`);
    if (learning.context) lines.push(`- **Context:** ${learning.context}`);
    if (learning.why) lines.push(`- **Why:** ${learning.why}`);
    if (learning.rejected) lines.push(`- **Rejected:** ${learning.rejected}`);
    if (learning.tradeoffs) lines.push(`- **Trade-offs:** ${learning.tradeoffs}`);
    if (learning.breaking) lines.push(`- **Breaking if changed:** ${learning.breaking}`);
    return lines.join('\n');
  }

  if (learning.type === 'gotcha') {
    lines.push(`\n#### [Gotcha] ${learning.content} (${date})`);
    if (learning.context) lines.push(`- **Situation:** ${learning.context}`);
    if (learning.why) lines.push(`- **Root cause:** ${learning.why}`);
    if (learning.tradeoffs) lines.push(`- **How to avoid:** ${learning.tradeoffs}`);
    return lines.join('\n');
  }

  // Pattern or learning
  const prefix = learning.type === 'pattern' ? '[Pattern]' : '[Learned]';
  lines.push(`\n#### ${prefix} ${learning.content} (${date})`);
  if (learning.context) lines.push(`- **Problem solved:** ${learning.context}`);
  if (learning.why) lines.push(`- **Why this works:** ${learning.why}`);
  if (learning.tradeoffs) lines.push(`- **Trade-offs:** ${learning.tradeoffs}`);
  return lines.join('\n');
}
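A standalone sketch of the "decision" branch's ADR-style output, with a fixed date and a hypothetical learning so the result is deterministic:

```typescript
// Mirrors the "decision" branch above; the date is pinned here instead of
// derived from new Date(), and the entry is a made-up example.
const date = '2026-01-31';
const learning = {
  content: 'Use SQLite for session storage',
  context: 'Needed durable per-project state',
  why: 'Zero-config, single-file database',
};

const lines = [`\n### ${learning.content} (${date})`];
if (learning.context) lines.push(`- **Context:** ${learning.context}`);
if (learning.why) lines.push(`- **Why:** ${learning.why}`);

console.log(lines.join('\n'));
// ### Use SQLite for session storage (2026-01-31)
// - **Context:** Needed durable per-project state
// - **Why:** Zero-config, single-file database
```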
/**
 * Append a learning to the appropriate category file
 * Creates the file with frontmatter if it doesn't exist
 * Uses file locking to prevent TOCTOU race conditions
 */
export async function appendLearning(
  projectPath: string,
  learning: LearningEntry,
  fsModule: MemoryFsModule
): Promise<void> {
  console.log(
    `[MemoryLoader] appendLearning called: category=${learning.category}, type=${learning.type}`
  );
  const memoryDir = getMemoryDir(projectPath);
  // Sanitize category name: lowercase, replace spaces with hyphens, remove special chars
  const sanitizedCategory = learning.category
    .toLowerCase()
    .replace(/\s+/g, '-')
    .replace(/[^a-z0-9-]/g, '');
  const fileName = `${sanitizedCategory || 'general'}.md`;
  const filePath = path.join(memoryDir, fileName);

  // Use file locking to prevent race conditions when multiple processes
  // try to create the same file simultaneously
  await withFileLock(filePath, async () => {
    try {
      await fsModule.access(filePath);
      // File exists, append to it
      const formatted = formatLearning(learning);
      await fsModule.appendFile(filePath, '\n' + formatted);
      console.log(`[MemoryLoader] Appended learning to existing file: ${fileName}`);
    } catch {
      // File doesn't exist, create it with frontmatter
      console.log(`[MemoryLoader] Creating new memory file: ${fileName}`);
      const metadata: MemoryMetadata = {
        tags: [sanitizedCategory || 'general'],
        summary: `${learning.category} implementation decisions and patterns`,
        relevantTo: [sanitizedCategory || 'general'],
        importance: 0.7,
        relatedFiles: [],
        usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
      };

      const content =
        serializeFrontmatter(metadata) + `\n# ${learning.category}\n` + formatLearning(learning);

      await fsModule.writeFile(filePath, content);
    }
  });
}
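The category sanitization above maps an arbitrary category string to a safe file name. A standalone sketch with a hypothetical category:

```typescript
// "API Endpoints!" -> lowercase -> hyphens for spaces -> strip special chars.
const category = 'API Endpoints!';
const sanitized = category
  .toLowerCase()
  .replace(/\s+/g, '-')
  .replace(/[^a-z0-9-]/g, '');
const fileName = `${sanitized || 'general'}.md`;
console.log(fileName); // "api-endpoints.md"
```

The `|| 'general'` fallback matters: a category made entirely of special characters would otherwise sanitize to an empty string and produce a bare `.md` file.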
/**
 * Initialize the memory folder for a project
 * Creates starter files if the folder doesn't exist
 */
export async function initializeMemoryFolder(
  projectPath: string,
  fsModule: MemoryFsModule
): Promise<void> {
  const memoryDir = getMemoryDir(projectPath);

  try {
    await fsModule.access(memoryDir);
    // Already exists
    return;
  } catch {
    // Create the directory
    await fsModule.mkdir(memoryDir, { recursive: true });

    // Create _index.md
    const indexMetadata: MemoryMetadata = {
      tags: ['index', 'overview'],
      summary: 'Overview of project memory categories',
      relevantTo: ['project', 'memory', 'overview'],
      importance: 0.5,
      relatedFiles: [],
      usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
    };

    const indexContent =
      serializeFrontmatter(indexMetadata) +
      `
# Project Memory Index

This folder contains agent learnings organized by category.
Categories are created automatically as agents work on features.

## How This Works

1. After each successful feature, learnings are extracted and categorized
2. Relevant memory files are loaded into agent context for future features
3. Usage statistics help prioritize which memories are most helpful

## Categories

- **gotchas.md** - Mistakes and edge cases to avoid
- Other categories are created automatically based on feature work
`;

    await fsModule.writeFile(path.join(memoryDir, '_index.md'), indexContent);

    // Create gotchas.md
    const gotchasMetadata: MemoryMetadata = {
      tags: ['gotcha', 'mistake', 'edge-case', 'bug', 'warning'],
      summary: 'Mistakes and edge cases to avoid',
      relevantTo: ['error', 'bug', 'fix', 'issue', 'problem'],
      importance: 0.9,
      relatedFiles: [],
      usageStats: { loaded: 0, referenced: 0, successfulFeatures: 0 },
    };

    const gotchasContent =
      serializeFrontmatter(gotchasMetadata) +
      `
# Gotchas

Mistakes and edge cases to avoid. These are lessons learned from past issues.

---

`;

    await fsModule.writeFile(path.join(memoryDir, 'gotchas.md'), gotchasContent);

    console.log(`[MemoryLoader] Initialized memory folder at ${memoryDir}`);
  }
}